Filtering and Privacy: What Would You Do?

Welcome to this week’s Tip of the Hat!

You’re working the information desk at the local college library. A student comes up to you, personal laptop in tow. They say that they can’t access many of the library databases they need for a class assignment. You ask them to show you what errors they are getting on their laptop when trying to visit one of the databases. The student opens their laptop and shows you the browser window. You see what appears to be a company logo and a message – “Covenant Eyes has blocked http://search.ebscohost.com. This page was blocked due to your current filter configuration.”

What’s going on?

Online filtering is not an unfamiliar topic to libraries. Some libraries filter library computers in order to receive funds from the E-rate program under the Children’s Internet Protection Act (CIPA). Other libraries do not filter, for many reasons, including that filters deny teens and young adults their right to privacy. The American Library Association published a report about CIPA and libraries, noting that over-filtering blocks access to legitimate educational and research resources.

We’re not dealing with a library computer in the scenario, though. An increasing number of libraries encounter filtering software on adult patrons’ personal computers. Sometimes these are college students using a laptop gifted by their parents. These computers come with online monitoring and filtering software, such as Covenant Eyes, for the parents to track and/or control the use of the computer by the student. Parents can set the filter to block certain sites as well as track what topics and sites the student is researching at the library. This monitoring of computer activity, including online activity, is in direct conflict with the patron’s right to privacy while using library resources, as well as the patron’s right to access library resources.

Going back to the opening scenario, what can the library do to help the patron maintain their privacy and access library resources? There are a few technical workarounds that the library and patron can explore. The EFF’s Surveillance Self-Defense Guide lists several ways to circumvent internet filtering or monitoring software. Depending on the comfort level of both library staff and patron, one workaround is running the Tor Browser from a USB drive, using the pluggable transports or bridges built into Tor as needed. This method lets the patron use Tor without installing the browser on the computer, which keeps the monitoring software from tracking what sites the person visits. The other major workaround is to use a library computer or another computer; while inconvenient for the patron, it is another way to protect the patron’s privacy while using library resources.

The above scenario is only one of many that libraries might face when working with patrons whose personal computers have tracking or filtering software. Such software is a risk to patron privacy whenever patrons use those devices to access library resources. It is nonetheless a risk that the library can help mitigate through education and possible technical workarounds.

Now it’s your turn – how would your library handle the college student patron scenario described in the newsletter? Reply to this newsletter to share your library’s experiences with similar scenarios as well. LDH will de-identify the responses and share them in a future newsletter to help other libraries start formulating their procedures. You might also pick up a new procedure or two!

[Many thanks to our friends at the Library Freedom Project for the Tor information in today’s post!]

Silent Librarian and Tracking Report Cards

Welcome to this week’s Tip of the Hat! We at LDH survived the full moon on Friday the 13th, though our Executive Assistant failed to bring donuts into the office to ward off bad luck. Unfortunately, several universities need more than luck against a widespread cyberattack that has a connection to libraries.

This attack, called Cobalt Dickens or Silent Librarian, relies on phishing to gain access to university systems. The potential victims receive a spoofed email from the library stating that their library account is expired, followed by instructions to click on a link to reactivate the account by entering their account information on a spoofed library website. With this attack happening at the beginning of many universities’ semesters, incoming students and faculty might click through without giving a second thought to the email.

We are used to banking and other commercial sites being spoofed by attackers to obtain user credentials. Nonetheless, Silent Librarian reminds us that libraries are not exempt from being spoofed. Silent Librarian is also a good prompt to review incident response policies and procedures surrounding patron data leaks or breaches with your staff. Periodic reviews help ensure that policies and procedures reflect the changing threats and risks in a changing technology environment. Reviews can also be a good time to refresh incident response materials and training for library staff, as well as cybersecurity basics. If a patron calls the library about an email regarding their expired account, a trained staff member has a better chance of preventing that patron from falling for the phishing email, which in turn better protects library systems from being accessed by attackers.

We move from phishing to tracking with the release of a new public tool to assess privacy on library websites. The library directory on Marshall Breeding’s Library Technology Guides site is a valuable resource, listing thousands of libraries around the world. Each listing has basic library information, including the types of systems used by the library, such as the integrated library system, digital repository, and discovery layer. Each listing now includes a Privacy and Security Report Card that grades the main library website on the following factors:

  • HTTPS use
  • Redirection to an encrypted version of the web page
  • Use of Google Analytics, including whether the site instructs GA to anonymize data from the site
  • Use of Google Tag Manager, DoubleClick, and other trackers from Google
  • Use of Facebook trackers
  • Use of other third-party services and trackers, such as Crazy Egg and NewRelic
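The first two factors can even be spot-checked by hand. Here is a minimal Python sketch (the function names and example URLs are ours, not part of the Library Technology Guides tool, and a real check would also fetch the page to observe the actual redirect):

```python
from urllib.parse import urlparse

def uses_https(url):
    """Report whether a URL is served over an encrypted connection."""
    return urlparse(url).scheme == "https"

def redirects_to_https(requested_url, final_url):
    """Given the URL you requested and the URL the server ultimately
    served, report whether a plain HTTP request was upgraded to HTTPS."""
    return (urlparse(requested_url).scheme == "http"
            and urlparse(final_url).scheme == "https")

print(uses_https("https://www.example-library.org"))          # True
print(redirects_to_https("http://www.example-library.org",
                         "https://www.example-library.org"))  # True
```

The report card automates checks like these (and more) so you don’t have to.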

You can check what your library’s card looks like by clicking on the Privacy and Security Report button on the individual library page listing. In addition to individual statistics, you can view the aggregated statistics at https://bit.ly/ltg-https-report. The majority of public library websites use HTTPS, which is good news! The number of public libraries using Google Analytics to collect non-anonymized data, however, is not such good news. If you are one of those libraries, here are a couple of resources to help you get started in addressing this potential privacy risk for your patrons:

What’s The Name of Your Pet?

Welcome to this week’s Tip of the Hat!

Our Executive Assistant argues that we at LDH shouldn’t use her name to answer the question in today’s newsletter title. She is, after all, our Executive Assistant, and not a pet. However, the EA’s objection also has merit for information security reasons. Today we visit our information security neighbors to explore one risk to library staff and patron account privacy – the dreaded security question.

Where did you meet your best friend?
This topic was inspired by a recent popular tweet:

normal people: it’s my birthday

infosec experts: THAT WAS HIGHLY SENSITIVE INFORMATION. DO YOU HAVE ANY IDEA HOW EXPOSED YOU ARE

normal people: my dogs name is Jack

infosec experts: YOU’RE GONE. DONE FOR. IT’S OVER
— Katerina Borodina (@kathyra_) September 3, 2019

Common security questions can be easily cracked by a quick search of your online activity. Social media is a gold mine of this type of information, including information about pets, childhood, school, family, or even your favorite color and sports team. Some companies provide less common security questions that would prove harder to crack, though most companies do not stray from the common security questions.

Library staff are in a particular bind in a couple of situations involving security questions. Some vendor products require security questions for account creation, and some libraries are only allowed one institutional “admin” account to share among staff. We bet you a nice cup of quality tea that at least one of the security question answers for that account is a variation of the following words:

  • Checkout or check-in
  • Dewey
  • Books, including bookworm
  • Cat
  • Reading
  • Library
  • Your library’s, organization’s, or department’s name, physical location, mascot, school colors, etc.

Perhaps the person who created the account used their own personal information to answer the questions, and that information doesn’t get changed when the staff person leaves the library. Resetting the account then becomes trickier, particularly if the personal information wasn’t documented. Worse, if that person posted some of the information on a public site, the staff account is now at higher risk of being compromised by a threat actor looking for a way into the system.

In either case, library staff accounts that require security questions provide unique security challenges that also carry some privacy risks for both staff and patrons.

What is your favorite color?

By now you’ve heard the standard InfoSec advice: don’t post private information publicly. That doesn’t help much when you have a shared account for library staff. Ideally, you shouldn’t have shared accounts at all – application permissions and privileges should be granted to individual user accounts. These user-level permissions and privileges should change any time there is a change in staff or staff responsibilities. Some vendors allow for this kind of user permission granularity; if your vendor doesn’t support that level of permission control, start asking them to!

There is also the fact that security questions themselves are inherently insecure as a way to keep user accounts secure; however, many companies still rely on these questions to authenticate users or for password resets. If you are creating a library staff account for a vendor product or service, and the vendor is requiring you to answer common security questions as part of the account creation process, a good place to start is to randomize your answers.

When we say “randomize” we do not mean swapping out your personal information for information about your workplace; we mean providing an answer that makes no sense as an answer to the question. For example, “What was your first car?” could have the following answers:

  • A: Treehouse
    • A single word or a simple phrase that is not apparently related to you, the organization, or the question itself
  • A: ur0wIBHRGp9IBi
    • A random string of characters generated from a password generator
  • A: decimallemonBritish
    • A random passphrase generated from a passphrase generator

The more random you get with your answer, the better. To ensure that you are getting closer to a random answer, use a password or passphrase generator. Most password managers have random generators, and some even have the option to create passphrases. If you have multiple accounts that require security question answers, do not use the same answer twice; instead, generate new answers for each account, even if the account shares the same questions with other accounts.
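As a sketch of what such a generator does under the hood, here is one way to produce random answers with Python’s standard library (the wordlist below is ours, purely for illustration; a real passphrase generator draws from a list of thousands of words):

```python
import secrets
import string

def random_answer(length=14):
    """A random character string, like a password generator would produce."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words, count=3):
    """A short passphrase built from a wordlist, one random word at a time."""
    return "".join(secrets.choice(words) for _ in range(count))

# Illustrative wordlist only -- a real generator uses thousands of words
WORDLIST = ["decimal", "lemon", "British", "treehouse", "granite", "velvet"]

print(random_answer())              # e.g. 'ur0wIBHRGp9IBi'
print(random_passphrase(WORDLIST))  # e.g. 'decimallemonBritish'
```

The `secrets` module is designed for security-sensitive randomness, unlike the general-purpose `random` module, which should not be used for generating answers or passwords.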

Lastly, document the answers in a secure place. Many password managers have a secure notes function in which you can document your security answers for each account. Make sure that the place you store your answers is encrypted and accessible to only those who need access to those answers in the case that they need to reset the password or access the account. In most cases, that would mean only you, but if your department uses a password manager to manage department accounts, this would be the place to store them as well.

As long as companies require you to answer security questions, you need to mitigate the many risks that come with such questions. Randomizing answers is the first place to start, and not using personal information attached to any staff members or the workplace is another critical step. If all else fails, you can always change your pet’s name to 9AtTsCbWqRww7C…

Threat, Vulnerability, or Risk?

Welcome to this week’s Tip of the Hat!

“Animal, plant, or mineral?” Most folks can, with a healthy amount of confidence, say that something is one of those three, as well as explain the differences between the three categories. It’s also a fun game to keep younger kids occupied for your next long trip.

Today we are going to introduce a variation of the game for us adults – “Threat, vulnerability, or risk?” Information privacy and security use these three terms when assessing the protection of data and other organizational assets, as well as potential harms to those assets and the organization. Many people use these terms interchangeably in daily conversations about InfoSec and privacy. There are differences between the three, though! To understand what it means when someone says “threat” instead of “vulnerability,” here are some definitions to help you differentiate between the three terms:

A Threat is a potential scenario that can cause damage or loss to an organizational asset. You might have heard the term threat actor, which refers to a specific someone or something that could be responsible for creating said harm to the organization. Note well that you do not have to demonstrate malicious intent to be a threat actor. Sometimes threat actors do not act out of malicious intent but are still a threat due to them exploiting a vulnerability in the organization.

Vulnerability refers to the weakness in any system or structure that a threat can use to cause harm to the organization. People focus on technical vulnerabilities; however, the non-technical vulnerabilities, aka your fellow humans and organizational structures, are as important to identify as your technical vulnerabilities.

Risk is the potential of damage or loss resulting from a threat taking advantage of a vulnerability. Many use an equation to calculate the amount of risk of a particular scenario: Risk = Threat x Vulnerability x Cost, with Cost being the potential impact on the target by a threat.
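As a toy illustration of the equation, suppose each factor is scored from 1 to 5 (the scale and the numbers here are invented for the example; real assessments use whatever scoring model the organization adopts):

```python
def risk_score(threat, vulnerability, cost):
    """Risk = Threat x Vulnerability x Cost, each scored here 1 (low) to 5 (high)."""
    return threat * vulnerability * cost

# A likely threat exploiting a serious vulnerability with moderate impact...
print(risk_score(threat=4, vulnerability=5, cost=3))  # 60
# ...versus an unlikely threat against a well-defended asset
print(risk_score(threat=1, vulnerability=2, cost=3))  # 6
```

The exact numbers matter less than the comparison: scoring scenarios the same way makes it easier to decide which risks to address first.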

Let’s explore these terms further with our library hat on:

What can be considered a threat?

  • Untrained/undertrained staff not following law enforcement request procedures
  • A staff member gains unauthorized access to sensitive systems or data, and modifies, exports, or deletes data to inflict harm or for their gain
  • A data breach of a vendor-hosted database

What can be considered a vulnerability?

  • Lack of access to regular privacy training and resources for staff
  • Lax or lack of system user access policies and procedures
  • Lack of or insufficient vendor privacy and security practices
  • Improper collection and storage of sensitive data by systems

What are the possible types of risk in any given scenario?

  • Legal – possible legal action due to noncompliance with applicable local, state, federal, or international regulations surrounding particular types of data
  • Reputational – “The Court of Public Opinion”; loss of patron trust; loss of trust in the vendor
  • Operational – the inability to perform critical tasks and duties to ensure uninterrupted access to core services and resources

By knowing the differences between threat, vulnerability, and risk, you can better assess the scenarios that can put your organization at higher risk of legal, reputational, or operational harm. You can also proactively mitigate these risks by addressing the vulnerabilities that can be exploited by the threats you can identify. Take some time this week to walk through the “Threat, vulnerability, or risk?” game with your colleagues, and you might be surprised by what you will find about your organization.

Gone Phishin’

Welcome to this week’s Tip of the Hat!

Our Executive Assistant has been waiting for the opportunity to spend some of her summer days fishing at one of Seattle’s many fishing spots. LDH, unfortunately, cannot claim that fishing is a work-related activity; however, dealing with the different types of “phishing” activities does fall under the realm of keeping data private and safe.

Phishing, like fishing, is a complex process, most of which happens behind the scenes. The general goal of email phishing is to obtain a piece of sensitive information or system access from the target. To achieve that goal, the phishing email needs to pull off certain steps, the first being to appear official. This doesn’t work very well when the email impersonates a company that you don’t do business with, but an email designed to look exactly like an official email from a company that you do business with (or even work for) can lull you into a false sense of security.

Phishing relies heavily on exploiting human traits and biases. Having an email look authentic is one way. Even if the email doesn’t look authentic, if it tells you that your account has been compromised, or if you have won an award, you might not think twice before acting on the email. For example, if someone claiming to be from your IT department asks for your password because they need to access your computer to perform critical security updates, your initial reaction is to be helpful and to provide the information. If a bank email told you that your account has been suspended, you might not be thinking about if the email was legitimate – you might be thinking about bills that are set up to auto-pay with the account, and that you need to make sure all those payments go through. You click on the link and become another fish caught by the phisher.

Avoiding phishing attempts involves several tactics. The best way of dealing with phishing emails is to never have them pop into your inbox in the first place. Junk and spam filters can do most of the work, along with specialized applications and software. When you do get an email from a company that you do business with, the best first step to take is to stop and think before acting on the email’s requests:

  • Check the links – Some phishing attempts will come from a domain name similar to the actual company, but something just doesn’t quite fit. For example, the link companyA.examplesite.com might make you think that it’s a legitimate Company A URL – in reality, the main site is examplesite.com.
  • Check the sender field – If you are getting an email claiming to be from Company A, but the sender’s email address is not from Company A, the email is most likely not from Company A.
  • Check the message – does the message include any of the following?
    • Misspellings, bad grammar, poor formatting?
    • Messages claiming that your account was suspended or compromised and that you need to download a file, click a link, or send your login credentials via email to resolve the issue?
    • Messages claiming that you won a prize or award and that you need to click on a link or send over information to claim the prize?
    • A sender who requests your login credentials while claiming to be from your organization or its IT department?
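The link check above can even be partially automated. Here is a minimal sketch in Python using only the standard library (the company domains are made up, and a production-grade check would consult a public-suffix list to find the true registrable domain, which this sketch skips):

```python
from urllib.parse import urlparse

def is_company_link(url, company_domain):
    """True only if the link's host is the company's domain or a
    subdomain of it -- not merely a host that contains the name."""
    host = (urlparse(url).hostname or "").lower()
    domain = company_domain.lower()
    return host == domain or host.endswith("." + domain)

# A lookalike: "companyA" appears in the host, but the real site is examplesite.com
print(is_company_link("http://companyA.examplesite.com/login", "companyA.com"))  # False
# The genuine article: a subdomain of the company's own domain
print(is_company_link("https://mail.companyA.com/login", "companyA.com"))        # True
```

The key point the code makes explicit: what matters is the rightmost part of the hostname, not whether the company’s name appears somewhere in the link.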

If you go through the checks and are still not 100% sure the email is legitimate, do not click on any links, download or open any attachments, or reply to the email. Contact the company through other means: opening a browser tab and reaching the company website via a bookmark or by typing in the main company URL (NOT one from the email!) is a safer way to obtain contact information as well as to log into your account.

Phishing has gotten more elaborate throughout the years, finding new ways to exploit human characteristics. Spear phishing and whaling are just two of the ways phishing has evolved. Nonetheless, if we all stop and think before we act on that email telling us to send over our information to claim our free fishing trip, more phishers will end their phishing trips with no catches.

You Say Security, I Say Privacy…

Welcome to this week’s Tip of the Hat!

You might have seen the words “security” and “privacy” used interchangeably in articles, blog posts, and other areas of discussion surrounding protecting sensitive data. Sometimes that interchange of words further complicates already complex matters. A recent article by Steve Touw explores the confusion surrounding encryption and redaction methods in the CCPA. Touw breaks down encryption and redaction to their basic components which shows that each method ultimately lives in two different worlds: encryption in the security world, and redaction in the realm of privacy.

But aren’t privacy and security essentially the same thing, which is the means of protecting an asset (in our case, data)? While both arguably have the same goal in protecting a particular asset, privacy and security are different in the way in which they approach risk assessment and evaluation. In the scope of information management:

Security pertains to actions that protect organizational assets, including both personal and non-personal data.

Privacy pertains to the handling, controlling, sharing, and disposal of personal data.

Security and privacy do share key concepts and concerns, including appropriate use, confidentiality, and access to organizational assets (including personal data). Nonetheless, implementing security practices doesn’t necessarily guarantee privacy; a quote that makes the rounds in privacy professional groups is “You can have security without privacy, but you cannot have privacy without security.”

An example of the above quote in action is logging into a system or application. Let’s use staff access to the integrated library system (ILS) for this example. A login allows you to control which staff can access the ILS. Assigning individual logins to staff members and ensuring that only those logins can access the staff functions in the ILS is a security measure. This security measure protects patron data from being inappropriately accessed by other patrons or anyone else looking for that data. On the point of using security to protect privacy, so far, so good.

Once we get past the login, though, we come to a potential privacy issue. You have staff logins, which prevent unauthorized access to patron data by the public, but what about unauthorized access to patron data by your own staff? Not every staff member needs access to patron data to perform their daily duties. By leaving staff logins with free rein over what they can access in the ILS database, you are at risk of violating patron privacy even though you have security measures in place to limit system access to staff members. To mitigate this risk, another security measure can be used: assigning who can access what through role- or group-level access controls. Most ILSes have a basic level of role-based access control where system administrators can assign the lowest level of access needed for each role, and applying these roles consistently will limit instances of unauthorized access to data by staff.
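As a sketch of the idea, a role-based scheme boils down to something like the following (the role names and permissions are invented for illustration; every ILS defines its own):

```python
# Hypothetical roles mapped to the minimum permissions each job needs
ROLE_PERMISSIONS = {
    "circulation": {"checkout", "checkin", "view_patron_record"},
    "cataloging":  {"edit_bib_record"},
    "sysadmin":    {"checkout", "checkin", "view_patron_record",
                    "edit_bib_record", "manage_users"},
}

def can(role, permission):
    """Grant access only when the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("circulation", "view_patron_record"))  # True
print(can("cataloging", "view_patron_record"))   # False: least privilege
```

Note the default: a role (or permission) that isn’t explicitly listed gets nothing, which is the principle of least privilege in miniature.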

All the security measures in the world, nonetheless, will not mitigate the risk of privacy harm to your patrons if your ILS is collecting highly sensitive data in the first place! These security measures don’t prevent you from collecting this type of data. This is where privacy policies and determining what data needs to be collected to meet operational needs come into play. If you don’t collect the data, the data cannot be breached or leaked.

It’s clear from this example that both privacy and security have parts to play in protecting patron privacy. Understanding these parts – where they overlap, and where they diverge – will help you build and maintain a robust set of data privacy and security practices throughout your organization.

Into the Breach!

Welcome to this week’s Tip of the Hat!

Last week brought word of two data leaks from two major library vendors, Elsevier and Kanopy. Elsevier’s leak involved a server storing user credentials, including passwords, that was not properly secured. Kanopy’s leak involved an unsecured database storing website logs, including user activity. Both leaks involved library patron information, and both leaks were caused by a lapse in security measures on the part of the vendor.

As the fallout from these two breaches continues in the library world, now is as good a time as any to talk about data breaches in general. Data breaches are inevitable, even if you follow security and privacy best practices. What matters is what you do when you learn of a possible data breach at your library.

On a high level, your response to a possible data breach should look something like this:

  1. Determine if there was an actual breach – depending on the nature of the breach, this could be fairly easy (like a lost laptop with patron information) or requires more investigation (like looking at access logs to see if inactive accounts have sudden bursts of activity).
  2. Contain and analyze the breach – some breaches can be contained with recovering lost equipment, while others can be contained by shutting off access to the data source itself. Once the breach is contained, you can then investigate the “who, what, when, where, and how” of the breach. This information will be useful in the next steps…
  3. Notify affected parties – this does not only include individual users but organizational and government agencies as well.
  4. Follow up with actions to mitigate future data breaches – this one is self-explanatory with regard to applying what you learned from the breach.

The US does not have a comprehensive federal data breach notification law. What the US does have is 50+ data breach notification laws that vary from state to state. These laws have different regulations pertaining to who needs to be notified at a certain time, and what information should be included in the notification. If you are also a part of a larger organization, that organization might have a data breach incident response procedure. All of the above should be taken into consideration when building your own incident response procedure.

However, that does not address what many of you might be thinking in light of last week’s data breaches – how do you prevent having your patrons’ information breached in a vendor’s system? It’s frustrating when your library’s patron information is left unsecured by a vendor, be it through unencrypted passwords or open databases containing patron data. There are a couple of steps for mitigating risk with the vendor:

  • Vendor security audits – One practice is to audit the vendor’s data security policies and procedures. There are some library related examples that you can pull from: San Jose Public Library performed a vendor security audit in 2018, while Alex Caro and Chris Markman created an assessment framework in their article for the Code4Lib Journal.
  • Contract negotiations – Writing privacy and security clauses into a vendor contract introduces a layer of legal protection not only for your patrons but for your organization as a whole, with regard to the possible liability that comes with a data breach. Additions can clarify expectations about levels of security surrounding patron data in vendor systems, as well as data breach management expectations and roles between the vendor and the library.

Ultimately, it’s up to the vendor if they want to follow security best practices and have a data breach incident management procedure (though, if a vendor chooses not to implement security protocols, that could adversely affect their business). Nonetheless, it never hurts to regularly bring up security and privacy in contract negotiations and renewals, procurement processes, and in regularly scheduled vendor rep meetings. Make it clear that your library considers security and privacy as priorities in serving patrons, and (hopefully) that will lead to a partnership that is beneficial to all involved and leaves patrons at a lower risk of having their data breached.

Phew! There’s a lot more on this topic that can be said, but we must leave it here for now. Below are a couple of resources that will help you in creating a data breach incident response plan:

#dataspringcleaning

Welcome to this week’s Tip of The Hat!

This week’s newsletter is inspired by last week’s #ChatOpenS Twitter chat about patron privacy, where the topic of #dataspringcleaning made its appearance.

I’m starting the hashtag #dataspringcleaning — I need to do this in my personal life, too! https://t.co/ueVfafKDQ0
— Equinox OLI (@EquinoxOLI) March 13, 2019

Springtime is around the corner, which means Spring Cleaning Time. While you are cleaning your physical spaces, take some time to declutter your data inventory. By getting rid of personally identifiable data that you no longer need, you are scrubbing some of the toxicity out of your data inventory, and lessening the privacy risks to patrons.

When you are done with data, what do you do with it? First, you need to check whether you are truly done with that data. Unfortunately, we cannot use Marie Kondo’s approach of asking if the data sparks joy, but here are some questions to ask instead:

  • Is the dataset no longer needed for operational purposes?
  • Are you done creating an aggregated dataset from the raw data?
  • Is the dataset past the record retention period set by policy or regulation? Don’t forget about backup copies as well!

Once you have determined that you no longer need the data, it’s time to clean up! For data on paper – surveys, sign-up or sign-in sheets, reservation sheets – shred the paper and dispose of it through a company that securely disposes of shredded documents. Resist the temptation to throw the shredding into the regular recycling bin – if your shredder shreds only in long strips, or otherwise doesn’t turn your documents into tiny bits of confetti, dumpster divers can piece the shredded document back together.

Electronic data requires a bit more scrubbing. When you delete electronic data, the data is still there on the drive; you’ve just deleted the pointer to that file. Using software that can wipe the file or the entire drive will reduce the risk of someone finding the deleted file. There are free and paid software options to complete the task, depending on your system and your needs (hard drive, USB sticks, etc.).
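As a rough sketch of what wiping software does, here is a minimal Python version that overwrites a file’s bytes before deleting it (a caveat: on journaling filesystems and SSDs with wear leveling, overwriting in place is not guaranteed to destroy every copy, which is why purpose-built wiping tools remain the safer bet):

```python
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push each overwrite out to the disk
    os.remove(path)
```

Calling `wipe_file("old-signup-sheet.csv")` (a hypothetical filename) would scrub the file’s contents before removing it, instead of merely deleting the pointer.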

And now we get to the fun part of deleting data. Any disc drives, CDs, floppy disks, or (where I give my age away) backup tapes that held patron data need to be disposed of properly as well. Sometimes there is a disk disposal center nearby where you can destroy your drives via degaussing machines. If you can’t find a center, then you have to literally take matters into your own hands. Remember that scene from Office Space with the printer?

[Image: A man beating a printer with a baseball bat.]
That is what you are going to do, but with safety gear. Hammers, power drills, anything that will destroy the platters in the drive or the disk itself – just practice safety while doing so!

And who says that cleaning can’t be fun?

Resources to get you started:

There’s a Checklist For That!

Welcome to this week’s Tip of the Hat!

Last week was a busy week on both state and federal privacy regulation fronts, and it was a busy week for one-half of LDH too due to jury duty! The Executive Assistant was tasked to keep an eye on the state and federal updates; however, when asked for the report, the Executive Assistant was not forthcoming:

[Image: A black cat curled up on a yellow and green blanket.]
While we catch up from a very busy week of updates, let’s talk about checklists.

Many of us use checklists each day, either as a to-do list or to confirm that everything is in place before opening a library or launching a new online service. Checklists help prioritize and direct focus on otherwise large, nebulous undertakings, making sure that the important bits are not overlooked.

When we talk about privacy, many folks become overwhelmed as to what they should be doing at work to protect patron privacy. Libraries, in particular, have many bases to cover when it comes to implementing privacy best practices, ranging from electronic resources and public computing to websites and applications. Where does one start?

In 2016, the ALA Intellectual Freedom Committee published the ALA Library Privacy Guidelines, aimed at helping libraries and vendors develop and implement best practices surrounding digital privacy and security:

  • E-book Lending and Digital Content Vendors
  • Data Exchange Between Networked Devices and Services
  • Public Access Computers and Networks
  • Library Websites, OPACs, and Discovery Services
  • Library Management Systems
  • Students in K-12 Schools

There is a lot of good information in these guides; however, reading through all of them brings back that same overwhelming feeling of not knowing where to start. Enter the checklists!

To give folks direction in working through the Library Privacy Guidelines, volunteers from the LITA Patron Privacy Interest Group and the Intellectual Freedom Committee’s Privacy Subcommittee created Library Privacy Checklists for each corresponding Guideline. Each checklist is broken down into three sections:

Priority 1 lists best practices that the majority of libraries and vendors should take with minimal additional resources. These practices are a baseline, the minimal amount that one needs to do to protect patron privacy.

Priority 2 lists practices that require a bit more planning and effort than those in the previous section. These practices can be done with some additional resources, be it in-house knowledge and skills or external vendors or contractors. Depending on the checklist, many libraries and vendors can implement at least one practice in this section, but some might not be able to go beyond it.

Priority 3 lists practices that require a higher level of technical skill and resources to implement. For those libraries and vendors that have the available resources, this section gives guidance on where to focus them.

These checklists break the ALA Library Privacy Guidelines down into prioritized, actionable tasks for libraries and organizations to use when trying to align themselves with the Guidelines. The prioritization helps organizations with limited resources focus on core privacy best practices, while giving better-resourced organizations guidance on where to go next in their privacy efforts. These checklists can also serve as a foundation for conversations about overall privacy practices at an organizational level, which could turn into a comprehensive privacy program review. There are many ways to use these checklists at your organization!

The checklists were published in 2017; nevertheless, even though the technological landscape changes rapidly from year to year, many of the practices in the checklists are still good practices to follow in 2019. Take some time today to revisit the checklists, and think about how they can help you address some of your organization’s privacy questions or issues.