It’s Dangerous to Go Alone

A cross stitch of a pixelated old man with a white beard flanked by two pixelated fires. A pixelated sword lies in front of the old man. Text in white above the scene reads "It's dangerous to go alone. Take this."
Image source: https://www.flickr.com/photos/12508267@N00/31229743046/ (CC BY 2.0)

Juan saw his recent promotion to Director of Access Services at Nebo University Libraries as an opportunity to change his library’s approach to patron privacy. However, Juan knew that becoming a manager of one of the largest departments in the libraries would not altogether remove the roadblocks he kept running into when he advocated for more robust privacy policies and practices as a staff member. Juan now had to figure out how to use his new position to advocate for the privacy changes he had been pushing for a long time…

Juan was one of four fictional library workers introduced to participants in a recent library privacy workshop. Unlike the other three library workers, Juan was in a unique position. Instead of addressing privacy concerns with other academic departments or campus members, Juan focused on the library itself. While he was still a staff member, Juan had some limited success in getting better privacy protections at the library. Like many others, he ran into organizational roadblocks when trying to change privacy practices on a larger scale. Newly promoted and with new administrative political capital in the library, Juan thinks he’s in a better position to push for privacy changes throughout the entire library system.

However, Juan is not considering one essential thing – it takes much more than one person to create a sustainable culture of privacy in a library. Many of us have been in Juan’s situation, striking out on our own to push for privacy changes in our libraries. We do this on top of everything else we are responsible for in our daily duties. Sometimes we rationalize the additional workload by bending and stretching existing job responsibilities without formally accommodating the new ones. Other times, we deem privacy work so important that we are willing to sacrifice a portion of our well-being to ensure our patrons are protected (hello, Vocational Awe). This might gain us a couple of small wins in the short term: a change in a departmental procedure, or a reduction in the amount of data collected by a patron-facing application or system. The long-term reality, however, is that these changes are not set up to be maintained because there is no sustainable system in place. Unless, of course, we as individuals decide to take on that maintenance – but even then, one person can only take on so much on top of their existing workload before everything starts to fall apart.

Creating sustainable privacy practices and programs in organizations requires at minimum two things: dedicated resources and dedicated people. Most libraries have neither, relying on existing staff and resources to make privacy happen. While libraries have historically been able to operate with this organizational kludge, changes to library operations and services in the last few decades have made the kludge not only ineffective but dangerous, raising privacy risks for both patrons and the library as an organization, along with the potential harms if those risks are realized. It is nearly impossible for patrons not to generate data in their library use, be it physical or online. Because so much of this generated data is collected by the library and third parties, even the routine act of documenting the lifecycle of this data can be a monumental task if there is no dedicated structure in place for the work to be done sustainably.

Like many of us, Juan wants to protect patron privacy. Nevertheless, if he tries to go it alone and does not build the infrastructure to sustain privacy practices, his efforts will be short-lived at best. Privacy policies and procedures are part of that infrastructure, but they depend on the dedicated staff time and resources that make sustainable practices possible. What are some of Juan’s options?

  • Create a centralized library data governance committee – Juan can’t do this work alone, particularly when his primary job responsibilities don’t include overseeing the library’s privacy practices. Creating a data governance committee would bring in administration and staff from the different areas of the library that work with patron data to oversee data management, including data privacy and security. This committee would not only create and review privacy policies and procedures but would also serve as an accountability mechanism when things go wrong and ensure things get done. No one library worker would be solely responsible for the library’s privacy practices in this option, though Juan would need to ensure that participation in the committee does not become an undue burden for staff.
  • Advocate for a dedicated budget line for data privacy and security – There might already be data privacy and security resources available at the university, but those resources might not cover library-specific needs such as professional development for privacy training, consulting, or auditing. Some departments in the library, such as Library Systems, might already have a dedicated budget line for privacy and security. Juan might want to talk to the department managers to determine if there is a chance to collaborate on increasing funding for data privacy and security activities in the library.
  • Advocate for a dedicated privacy staff position in the library – Even with a library data governance committee, ultimately, someone has to wrangle privacy at the library. Juan’s role might include oversight of some privacy practices in Access Services, but unless his job description changes, he cannot be the privacy point person for the entire library. A dedicated privacy point person would keep the data governance committee on track by serving as the data steward for the group. More importantly, it would also ensure that at least one person in the library has dedicated time and resources to track, manage, and address the new and evolving data privacy risks and harms patrons face while using the library. While a full-time position dedicated to privacy is ideal, the budget might not support a new position at the time of the request. In that case, Juan might argue that he could be the privacy point person on the condition that he can shift his current responsibilities to other managers in Access Services. Nevertheless, this suggestion should only be a short-term workaround while the library works to find funding for a full-time privacy position.

All three options require some form of collaboration and negotiation with administration and staff. Juan cannot realistically create these structures alone if he wants them to survive. It comes back to creating and maintaining relationships in the organization. Without these relationships, Juan is left on his own to push for privacy, which inevitably leads to burnout. No matter how passionate we are about patron privacy, we must realize, like Juan, that we cannot do our privacy work alone if we want our efforts to succeed.

FUD and Reality – Information Security and Open Source Software

A black cat and a grey tabby cat sit on top of a gray computer monitor. The top border of the monitor has a black and white sticker with the text "I <3 source code."
Image source: https://www.flickr.com/photos/miz_curse_10/1404420256/ (CC BY SA 2.0)

Librarians like our acronyms, but we’re not the only profession to indulge in linguistic gymnastics. The technology field is awash in acronyms: HTTP, AWS, UI, LAN, I/O, etc. etc. etc. One acronym you might know from working in libraries, though, is OSS – Open Source Software.

Library technology is no stranger to OSS. The archived FOSS4LIB site lists hundreds of free and open source library applications and systems, ranging from integrated library systems and content management systems to metadata editing tools and catalogs. Many libraries also use OSS not specific to libraries – a typical example is installing Firefox and LibreOffice on public computers. Linux and its multitude of distributions keep many library servers and computers running smoothly.

It’s inevitable, though, that when we talk about OSS, we run into another acronym – FUD, or Fear, Uncertainty, and Doubt. FUD is commonly used to paint a negative picture of a target, usually to the benefit of the person spreading the FUD. In the technology world, proprietary software companies often depict OSS as inferior to proprietary software – the Microsoft section of the Wikipedia page on FUD gives several good examples of such FUD pieces.

It should be no surprise that FUD exists in the library world as well. One example comes from a proprietary software company specializing in library management systems (LMS). We link to an archived version of the page in case it is taken down soon after this post is published; if nothing else, companies do not like being called out on their marketing FUD. The page poses as an article about the disadvantages of an LMS. In particular, the company claims that open source LMSes are not secure: they can be easily breached or infected by a computer virus, or you can even lose all your data! The only solution to all these disadvantages, naturally, is to have the proprietary software company handle everything for you!

The article is a classic example of OSS FUD – using tactics to sow fear, hesitation, or doubt without providing a reasoned, well-supported argument for the claims being made. However, this is probably not the first time you’ve run into the idea that OSS is insecure. One talking point about OSS insecurity is that security bugs can stay unaddressed in the software for years. For example, the Heartbleed bug that caused so much havoc in 2014 was introduced into the OpenSSL code in 2012, resulting in a two-year window in which bad actors could exploit the vulnerability. You’ve also probably run into various versions of the thinking around OSS security that Bruce Schneier describes below:

“Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll — in extreme cases — sneak back-doors into the code when no one is looking.”

OSS is open for all to use, but if you follow the line of thinking above, it’s also open for all to exploit.

The good news is that, despite the FUD, OSS is not inherently less secure than its proprietary counterparts. However, we must also be wary of the unchecked optimism in statements claiming that OSS is more secure than proprietary software. The reality is that open source and proprietary software are subject to many of the same information security risks, mixed with the unique risks that come with each type of software. It’s not uncommon for a small OSS project to become dormant or abandoned, leaving the software vulnerable due to a lack of updates. Conversely, a business developing proprietary software might not prioritize security testing and fixes in its work, leaving its customers vulnerable if someone exploits a security bug. While there are differences between the two examples, both share the risk of threat actors exploiting unaddressed security bugs in the software.

OSS, therefore, should be assessed and audited for security (and privacy!) practices and risks like its proprietary counterparts. The nature of OSS requires some adjustments to the audit process to account for the differences between the two types of software. A security audit for OSS would, for example, take into account the health of the project: maintenance and update schedules, how active the community is, what security issues have been reported and fixed, and so on. Looking at the dependencies of the OSS might uncover security risks if a dependency comes from a project that is no longer maintained. Addressing any security issues that an audit uncovers could take the form of working on and submitting a bug fix to the OSS project or finding a company that specializes in supporting OSS users to address the issue.
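What might the start of such a health check look like? Here’s a minimal sketch in Python, assuming the project is hosted on GitHub; the repository name is a placeholder, and a real audit would look at far more than these two signals (reported vulnerabilities, dependency health, community activity, and so on).

```python
# A rough sketch of an OSS health check using the GitHub REST API.
# The repository below is a placeholder; point it at the project you
# are auditing.
from datetime import datetime, timezone

import requests

def project_health(owner: str, repo: str) -> dict:
    """Fetch a few basic health signals for a GitHub-hosted project."""
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()

    last_push = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))
    days_since_push = (datetime.now(timezone.utc) - last_push).days

    return {
        "archived": data["archived"],            # archived projects get no more fixes
        "open_issues": data["open_issues_count"],
        "days_since_last_push": days_since_push,
    }

health = project_health("example-org", "example-ils")  # placeholder repository
if health["archived"] or health["days_since_last_push"] > 365:
    print("Warning: project may be dormant or abandoned:", health)
else:
    print("Project looks active:", health)
```

As we wrap up Cybersecurity Awareness Month in the runup to Halloween, let’s get our scares from scary movies and books and not from OSS FUD.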

Cybersecurity Awareness Month News Update: School Cybersecurity, Passwords, and Crying “Hack!”

A small gray tabby kitten paws at a Mac laptop screen displaying the Google search home page, its hind paws standing on the keyboard.
Image source: https://www.flickr.com/photos/tahini/5810915356/ (CC BY 2.0)

There’s never a dull moment in Cybersecurity Awareness Month, with last week being no exception. Here are some news stories you might have missed, along with possible implications and considerations for your library.

K-12 cybersecurity bill signed into law

You might remember reading about a new federal cybersecurity bill being signed into law. You remembered correctly! On October 8th, the K-12 Cybersecurity Act of 2021 was signed into law. For schools looking for a list of standards to comply with, the Act doesn’t contain one. Instead, the Act tasks the Cybersecurity and Infrastructure Security Agency (CISA) with studying cybersecurity risks in K-12 educational institutions and determining which practices would best mitigate those risks. The recommendations will be published along with a training toolkit that schools can use as a guide to implementing the recommendations at their institutions.

School libraries collect and store student data in several ways – the most common example being the patron record in the ILS. School libraries also rely heavily on third-party content providers, which in turn collect additional student data on both the library’s side and the vendor’s side. School library workers, stay tuned for updates on the study and recommendations! While it’s unclear whether the study will include school library systems in its assessment of cybersecurity risks, it’s more than likely that any recommendations that come from the study will affect school libraries.

Sharing all the passwords

You should be using a password manager. You might already be using one for your personal accounts, but are you using a password manager for work? If you’re still sharing passwords with your co-workers through spreadsheets or pieces of paper, it’s past time for your library to adopt a password manager. Several password managers, such as LastPass and Bitwarden, have business or enterprise products that are well suited for managing passwords in the office. However, not all password managers can share passwords and other sensitive information outside of the app, particularly with someone who doesn’t have an account with the same manager you are using. There will be times when you want to share a password with someone outside your organization – a typical example is when a vendor needs to log into a system or app to troubleshoot an issue. But, for the most part, password managers only support secure sharing between people with accounts in the same organization, leaving you stuck sharing passwords in less secure ways.

However, if you are a 1Password user or your library uses 1Password’s business product, you no longer have this problem! 1Password users can now send account login information – including passwords – to anyone, including those who do not have a 1Password account. This new feature allows 1Password users to create a shareable link, with options to restrict sharing to specific people (email addresses) and to set when the link expires (anywhere from 30 days down to after a single person views the link) – no more calling the vendor, no more staff emailing passwords in plaintext. Nonetheless, if your library wants to make use of this new feature, it’s best to give staff guidance on how to create the links, including how to set access restrictions and expiration, along with training and documentation.
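For the curious, here’s a hypothetical sketch of the general idea behind expiring share links – a signed token that carries its own expiry time. This illustrates the concept only; it is not 1Password’s actual implementation.

```python
# A hypothetical sketch of the general idea behind expiring share links:
# a signed token that carries its own expiry time. This is NOT how
# 1Password implements the feature; it only illustrates the concept.
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # placeholder; a real service keeps this safe

def make_share_token(item_id: str, ttl_seconds: int = 30 * 24 * 3600) -> str:
    """Create a tamper-evident token that expires after ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{item_id}:{expires}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{signature}"

def verify_share_token(token: str) -> bool:
    """Reject the token if the signature is invalid or the expiry has passed."""
    item_id, expires, signature = token.rsplit(":", 2)
    payload = f"{item_id}:{expires}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and time.time() < int(expires)

token = make_share_token("vendor-login-credentials")
print(verify_share_token(token))  # True until the token expires
```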

When a “hack” isn’t a hack

This news update is more of a “cybersecurity education 101” than news, considering the level of 🤦🏻‍♀️ this story contains. A very brief overview of what happened in Missouri last week:

  1. A reporter from the St. Louis Post-Dispatch found that a Missouri Department of Elementary and Secondary Education website exposed the social security numbers (SSNs) of school teachers and administrators to the public through the site’s HTML source code.
  2. The newspaper notified the department about the security flaw, and the department took down the site in question.
  3. After the site was taken down, the newspaper published a story about the exposed SSNs on the now-defunct site.

Okay, so far, so good. Someone found a serious cybersecurity issue on a government website, reported it to the department, and waited until the issue was addressed before talking about it publicly. That’s pretty standard when it comes to disclosing security flaws. Let’s move on to the next item in the summary.

  4. The Governor of Missouri and other government officials responded to the disclosure by calling the reporter a hacker, claiming that the “hacker took the records of at least three educators, decoded the HTML source code, and viewed the social security number of those specific educators.”

🤦🏻‍♀️

There is a difference between hacking and exposing personal data on a publicly accessible website. It would have been a hack if the reporter had bypassed security measures to obtain sensitive data from an internal system, such as by using stolen account logins. If, instead, a reporter clicks on the “View Source” menu option in their browser and finds sensitive data right in the source code of a publicly accessible website, you have a security vulnerability resulting in a data leak!

The takeaways from this story:

  1. Do not hard-code sensitive data in your code. This includes passwords for scripts that need to access other systems or databases (see the sketch after this list).
  2. Review locally-developed and third-party applications that work with sensitive data for potential data leaks or other ways unauthorized people can improperly access the data.
  3. Do not punish the people who bring security issues to your attention! As we discussed in our Friendly Phishing post, punitive actions can lead to a reduction in reporting, which increases security and privacy risks. Other reporters or private citizens who watch the Governor take action against the reporter might be dissuaded from reporting data security or privacy issues to the state government, increasing the chance that those issues will be exploited by bad actors.
  4. If the data was sitting on a publicly available site for someone to access via F12 or View Source in their browser, it is not a hack. Let this be a lesson learned, lest you end up being ratioed on Twitter like the Governor of Missouri.
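To illustrate the first takeaway, here’s a minimal sketch of the difference between hard-coding a secret and reading it from the environment; the variable names are placeholders.

```python
# A minimal sketch of the first takeaway: read secrets from the
# environment (or a secrets manager) instead of hard-coding them.
# The variable names are placeholders.
import os
import sys

# Bad: the password ships with the source code, visible to anyone who
# can read the script, the repository history, or an exposed web page.
# DB_PASSWORD = "hunter2"

# Better: pull the secret from the environment at runtime.
DB_PASSWORD = os.environ.get("LIBRARY_DB_PASSWORD")
if DB_PASSWORD is None:
    sys.exit("LIBRARY_DB_PASSWORD is not set; refusing to run.")
```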

Information Security, Risk, and Getting in One’s Own Way

Maru demonstrating how certain information security measures can ultimately backfire and put the organization at risk if the measures add too many barriers for the user to go about their work. Source – https://twitter.com/rpteixeira/status/1176903814575796228

Let’s start this week’s Cybersecurity Awareness Month post with a phrase that will cause some of you to scream into the void and others to weep quietly at your work desk:

Admin privileges on work computers.

Rationing admin privileges on work computers is one example of an information security practice that both protects data and puts it at risk. Limiting workers’ ability to install programs on their work computers reduces the chances of the system falling to a cyberattack via malware. It also reduces the chances of critical operations or processes failing if an app downloaded without IT support breaks after an OS update or patch. On the other hand, limiting admin privileges can motivate some workers to work around IT, particularly if IT has consistently denied requests for privileges or for installing new tools, or if the request process resembles something that only a Vogon would conceive of. These workarounds put data at risk when staff bypass IT to use third-party software with which the library has no contractual relationship or vendor security oversight. No contractual relationship + no evaluation of third-party privacy policies or practices = unprotected data.

IT is often its own worst enemy when it comes to information security. Staff don’t like barriers, particularly ones they see as arbitrary or that prevent them from doing their jobs. Each information security policy or practice comes with a benefit and a cost in terms of risk, and sometimes these practices and standards carry hidden costs that wipe out any benefit they offer. In the example of computer admin privileges, restrictions might lead workers to use personal computers or third-party applications that the organization hasn’t vetted. We have to weigh that risk against the benefit of reducing the chances of malware finding its way into the system.

The benefit-cost calculation comes back to the question of barriers: what they are, how your policies and processes contribute to them, and what solutions or workarounds people use to navigate them. Answering this question requires us to revisit the risk equation – calculating the cost or impact of a threat exploiting a vulnerability – and how one can address the risk. By eliminating one risk through the barrier of disallowing admin privileges on staff computers, the organization accepts the multitude of risks that come with staff using personal devices or unvetted third-party applications to work around the barrier.
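To make that trade-off concrete, here’s a back-of-the-envelope sketch using the common “risk = likelihood × impact” formulation; all the numbers are invented for illustration only.

```python
# A back-of-the-envelope sketch of the trade-off, using the common
# "risk = likelihood x impact" formulation. All numbers are invented
# for illustration only.
def risk(likelihood: float, impact: float) -> float:
    """Expected cost of a threat: probability (0-1) times impact."""
    return likelihood * impact

# Risk avoided by locking down admin privileges: malware via rogue installs.
malware_risk = risk(likelihood=0.10, impact=50_000)

# Risk accepted in exchange: staff using unvetted tools to work around IT.
shadow_it_risk = risk(likelihood=0.30, impact=20_000)

print(f"risk avoided: ${malware_risk:,.0f}")     # $5,000
print(f"risk accepted: ${shadow_it_risk:,.0f}")  # $6,000
# If the risk accepted exceeds the risk avoided, the barrier costs more
# than it saves.
```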

Some barriers (for example, requiring authentication to access a system that stores sensitive data) are necessary to reduce risk and secure data. The hard part comes in determining which barriers will not cost the organization more in the long run. In the case of admin privileges, we might consider the following options:

  • Creating two user accounts for each staff person: a regular account used for daily work and one local administrator account used only to install applications. The delineation of accounts mitigates the risk of malware infecting the local computer if the staff person follows the rules for when to use each account. The risk remains if the staff person uses the same password for both accounts or uses the admin account for daily work. Password managers can limit risks associated with reused passwords.
  • Creating a timely and user-friendly process for requesting and installing applications on work computers. This process has many potential barriers that might prevent staff from using the process, including:
    • long turnaround times for requests
    • lack of transparency with rejected requests (along with lack of alternatives that might work instead)
    • unclear or convoluted request forms or procedures (see earlier Vogon reference)

These barriers can be addressed through careful design and planning involving staff. Nevertheless, some staff will interpret any request process as a significant barrier to completing their work.

Each option introduces some interruptions to staff workflows; however, these barriers can be designed so that the security practices are not likely to become a risk in themselves. We forget at times that decisions around information security also need to consider their impact on staff’s ability to perform their daily duties. It’s easy to get in our own way if we forget to center the end user (be it patrons or fellow library workers) in what we decide and what we build. Keeping the risk trade-offs in mind can help ensure we don’t trip ourselves up protecting data one way, only to leave it unprotected in the end.

Just Published – Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook

Cover of the "Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook".

Happy October! Depending on who you ask at LDH, October is either:

  1. Cybersecurity Awareness Month
  2. An excuse for the Executive Assistant to be extra while we try to work
  3. The time to wear flannel and drink coffee… nevermind, this is every month in Seattle

Since the Executive Assistant lacks decent typing skills (as far as we know), we declare October as Cybersecurity Awareness Month at LDH. Like last year, this month will focus on privacy’s popular sibling, security. We also want to hear from you! If there is an information security topic you would like us to cover this month (or the next), email us at newsletter@ldhconsultingservices.com.

We start the month with a publication announcement! The Data Privacy and Cybersecurity Training for Libraries project – an LSTA-funded collaboration between the Pacific Library Partnership, LDH, and Lyrasis – just published two data privacy and cybersecurity resources for library workers who want to create privacy and security training for their libraries:

  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook – The handbook is a guide for library trainers who want to develop data privacy and cybersecurity training for library staff. It walks through the process of planning and developing a training program at the library and provides ideas for training topics and activities. The handbook is a companion to the Data Privacy Best Practices Toolkit for Libraries published last year.
  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Workshops (under the 2021 tab) – If you’re looking for train-the-trainer workshop materials, we have you covered! You can now access the materials used in the two train-the-trainer workshops for data privacy and cybersecurity conducted earlier this year. Topics include:
    • Data privacy – data privacy fundamentals and awareness; training development basics; vendor relations; patron programming; building a library privacy program
    • Cybersecurity – cybersecurity basics; information security threats and vulnerabilities; how to protect the library against common threats such as ransomware and phishing; building cybersecurity training for libraries

Both publications include extensive resource lists for additional training materials and for keeping current with the rapid changes in cybersecurity and data privacy in the library world and beyond. Feel free to share your training stories and materials with us – we would love to hear what you come up with while using the project resources! We hope that these publications, along with the rest of the project’s publications, will make privacy and cybersecurity training easier to create and deliver at your library.

Is Library Scholarship a Privacy Information Hazard?

A white hazard sign with an image of a human stick figure being zapped by an electric blob. The image is sandwiched between red and black text: "Warning, this area is dangerous".
Image source: https://www.flickr.com/photos/andymag/9349743409/ (CC BY 2.0)

Library ethics, privacy, and technology collided again last week, this time with the publication of issue 52 of the Code4Lib Journal. In this issue, the editorial committee published an article describing an assessment process with serious data privacy and ethical issues, then explained its rationale for publishing the article in the issue editorial. We won’t cover the specifics of those issues in depth in this week’s newsletter – you can read about them in the comment section of the Code4Lib Journal article in question.

You might have noticed that we said “again” in the last paragraph. This isn’t the first time library technology publications and patron privacy have collided. The Code4Lib Journal published a similarly problematic article last year, and the journal is only one of many library scholarship venues that have published scholarly and practical literature that is ethically problematic with regard to patron privacy. Technology and assessment are the usual offenders, ranging from case studies of implementing privacy-invasive technologies to research extolling the benefits of surveilling students in the name of learning analytics without discussing the implications of violating student privacy. These publications are not set up as point-counterpoint explorations of these technologies and assessment methods in terms of privacy and ethics. Instead, they enter the scholarly record as is, with at most an occasional contextual note or a superficial sentence or two about privacy. Retraction is almost unheard of in library scholarship, and even then, retraction is not very effective in addressing problematic research.

Library scholarship is not consistently aligned with the profession’s ethical standards to uphold patron privacy and confidentiality. Whether an article is judged on its potential impact on library privacy is currently up to the individual peer reviewer (or, in the case of editor-reviewed journals such as the Code4Lib Journal, the editor). In addition, library scholarship is not set up to assess the potential privacy risks and harms of a publication to specific patron groups, particularly patrons from minoritized populations. Currently, there is no suitable mechanism for such an assessment to be included in the original publication in a way that would be both meaningful and informative to the reader. We are left with publications in the library scholarship record that promote the uncritical adoption of high-risk practices that go against professional ethics and harm patrons. This becomes more perilous when these publications reach those in the field who do not have the knowledge or experience to assess them with patron privacy and ethics in mind.

What we end up with, therefore, is a scholarly record full of information hazards. An information hazard is a piece of information that can potentially cause harm to the knower or create the potential to harm others. This differs from misinformation, where the information being spread is false; the truthfulness of an information hazard is intact. Nick Bostrom’s seminal work on information hazards breaks down the specific risks and harms of different types of hazards. Library scholarship presents (at least) two information hazards in particular when it comes to library privacy and ethics:

Idea hazard – Ideas hold power. They also come with risks. Even if the dissemination of an idea is kept at a high level without specific details, it can become an idea hazard. The idea that a library can use a particular system or process to assess library use can put patron privacy at risk. There are ways to mitigate an idea hazard of this nature, including evaluating the assessment idea through the Five Whys method or other methods to determine the root need for such an assessment.

Development hazard – A development hazard arises when advancements in a field of knowledge lead to technological or organizational capabilities that create negative consequences. Like other fields of technology, library technology falls into this hazard category, particularly when combined with the evolution of library assessment practices and norms. Sharing code and processes (which is a data hazard) can lead to community or commercial development of more privacy-invasive library practices if no care is taken to mitigate patron privacy risks.

How, then, can library scholarship become less of a privacy information hazard? First and foremost, the responsibility falls on the publishers, editors, peer reviewers, and conference program organizers who control what is and is not added to the library scholarly record. This includes creating a code of ethics for submission authors, as well as guidelines for reviewers and editors to assess the privacy and ethical implications of each submission. However, these codes and guidelines are not effective if they are not acted upon. As Dorothea Salo says, “Research on library patrons that contravenes library-specific ethics is unethical; it should not be published in the LIS literature, and when published there, should be retracted.” Regardless of the novelty or other technical merits of a submission, if it violates library ethics or privacy standards, the editors, reviewers, and publishers have a responsibility as shapers of the scholarly record not to publish it, lest they add yet another information hazard to the record.

Library privacy and ethics must also be part of every stage of the submission and publication process. This takes a page from Privacy by Design: a proactive approach to privacy instead of rushing to include privacy at the last minute, which makes any privacy effort ineffective at best. Ethical codes and guidelines are one way to embed privacy into the process; another is to include checkpoints that bring in external subject matter experts to review submissions well in advance and identify or comment on specific privacy or ethical risks. If done early in the submission process, this feedback can be used to revise the submission to address these issues or to shift its focus to one better suited to addressing the privacy and ethical implications of the topic at hand. The submission itself doesn’t have to be abandoned, but it must be constructed so that the privacy and ethical risks are front and center, describing why the method, idea, process, or code goes against library ethics and privacy. This option doesn’t eliminate the idea/data hazard, but shifting the focus to privacy and ethical repercussions can mitigate the risks that come with such hazards.

Whether intentional (as in the case of the latest Code4Lib Journal issue) or unintentional, library scholarship places patron privacy at risk through the unrestricted flow of information hazards. Many in the profession face pressure to produce a constant stream of scholarship, but at what cost to our patrons’ privacy and professional ethics? A scholarly record full of privacy information hazards has, and will continue to have, long-lasting implications for the profession’s ability to protect patron privacy, as well as for how well we can serve everyone in the community (not just those who have a higher tolerance for privacy risks or won’t be as negatively impacted by poor privacy practices). As the discussion about the Code4Lib Journal’s decision to publish the latest information hazard continues, perhaps the community can use this time to push for submission and review processes in library scholarship that are better aligned with privacy and ethics.

Mid-September Readings, Viewings, and Doings

A light brown rabbit sits on top of a keyboard looking up at two computer screens, reading email.
Image source: https://www.flickr.com/photos/toms/127809435/ (CC BY 2.0)

September has proven itself to be a busy month for all of us! This week we’re taking a breather from our usual (longer) posts by highlighting a few resources that you might find of interest, and some homework, to boot.

What to Read

For years there has been a concerted effort to get libraries to secure their websites through HTTPS, but have those efforts paid off? A recently published article by librarian Gabriel Gardner describes how much further we have to go with HTTPS on library websites, but it doesn’t stop there. The article also describes how libraries are complicit in third-party tracking, with various web trackers found on library websites, including (unsurprisingly) Google Analytics. Give the article a read, then hop on over to your library website. How is your site contributing to surveillance by allowing third parties to vacuum up the data exhaust your patrons leave behind while using the library website? A quick audit can look something like the sketch below. We’ve written about alternatives to Google Analytics and other forms of tracking if you need a place to start in reducing the third-party tracker footprint at your library.
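Here’s a quick-and-dirty sketch for listing the third-party hosts serving scripts on a page, using Python with the requests and Beautiful Soup libraries; the URL is a placeholder, and a thorough audit would also cover images, iframes, cookies, and tracking pixels.

```python
# A quick-and-dirty sketch for listing the third-party hosts serving
# scripts on a web page, using requests and Beautiful Soup. The URL is
# a placeholder; swap in your own library's website.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def third_party_script_hosts(page_url: str) -> set[str]:
    """Return the external hosts that serve <script> tags on a page."""
    page_host = urlparse(page_url).netloc
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    hosts = set()
    for script in soup.find_all("script", src=True):
        host = urlparse(script["src"]).netloc
        if host and host != page_host:
            hosts.add(host)  # e.g. www.google-analytics.com
    return hosts

print(third_party_script_hosts("https://library.example.edu"))
```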

What to Watch/Read

At LDH, we talk a lot about ethics and technology. You might be wondering where you can learn more about the ethics of technology without diving headfirst into a full-time college course. If you have time to watch a few TikTok videos and read a couple of articles during the week, you’re in luck – Professor Casey Fiesler’s Tech Ethics and Policy class is in session! You can follow along by watching Dr. Fiesler’s TikTok videos and doing the readings posted on Google Docs. But you can do much more than follow along – join the office hours or the discussions in the videos!

What to Do

Perhaps you’re looking for something to do other than website or ethics classwork. We won’t hold that against you (though we really, really recommend reviewing what trackers your library website has). So, here’s a suggestion for your consideration. It’s been a while since we did our #DataSpringCleaning. Do you dread cleaning because there’s always so much stuff to deal with by the time you get around to doing it? Taking five to ten minutes now to dispose of patron data securely can go a long way toward reducing the amount of data you have to deal with during the annual #DataSpringCleaning, and it’s an excellent privacy and security hygiene habit to adopt. Spending a few minutes securing sensitive data can fill the gaps in your schedule between meetings or projects, or it can be part of your routine for starting or ending the workday. It even gives you some feeling of accomplishment on those particularly frustrating days when nothing seems to have gotten done. If your systems allow it, a small scheduled cleanup job, like the sketch below, can make this routine automatic.
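Here’s a minimal sketch of such a job, assuming a hypothetical SQLite circulation database with a “loans” table; adjust the schema, retention window, and database engine to match your own systems and retention policies.

```python
# A minimal sketch of a recurring cleanup job, assuming a hypothetical
# SQLite database with a "loans" table. Adapt the schema, retention
# window, and database engine to your own systems and policies.
import sqlite3

RETENTION_DAYS = 30  # keep completed-loan records no longer than this

with sqlite3.connect("circulation.db") as conn:
    deleted = conn.execute(
        """
        DELETE FROM loans
        WHERE returned_at IS NOT NULL
          AND returned_at < datetime('now', ?)
        """,
        (f"-{RETENTION_DAYS} days",),
    ).rowcount
    print(f"Disposed of {deleted} completed-loan records past retention.")
```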

If you come across any library privacy-related resources that you would like highlighted in the newsletter, let us know by emailing newsletter@ldhconsultingservices.com. In the meantime, best of luck with the workweek, and we’ll catch you next week.

The Lasting Impact of The Patriot Act on Libraries

A man wearing sunglasses holds a white sign as he walks through a street protest. The sign has two human eyes looking up and to the right. The sign's message: "The 'Patriot' Act is watching you".
Image source – https://flickr.com/photos/crazbabe21/2303197115/ (CC BY 2.0)

This weekend marked the 20th anniversary of 9/11 in the US. Life changed after the attacks, and one of the many changes was the sudden erosion of privacy for everyone living in the States. One of the earliest and most visible examples of this rapid erosion was the Patriot Act. Let’s take a moment to revisit this turning point in library privacy history and what has happened since.

A Quick Refresher

The Patriot Act was signed into law in October 2001, shortly after the attacks of September 11th. The law introduced or vastly expanded government surveillance programs and powers. US libraries are most likely familiar with Section 215. While in the past the government was limited in what information it could obtain through secret FISA orders, Section 215’s “tangible things” provision expanded the use of these secret orders to “books, records, papers, documents, and other items.” Given the examples included in the Section’s text, it wasn’t much of a stretch to assume that “tangible things” included library records.

The good news – for now – is that Section 215 is not here to mark the 20th anniversary of the Patriot Act’s passage. The Section sunsetted in 2020 after years of renewals and a second life through the USA Freedom Act. It did not die quietly, though – while support for renewal spanned both parties in the Senate and the House, competing versions of the renewal bill stalled the process. The possibility of renewing Section 215, or something like it, is still present; however, it is unclear when talks of renewal will restart.

The Act’s Impact on Libraries

Libraries acted quickly after the Act’s passage. Those of us in the library profession at the time might remember taking stacks of borrowing histories and other physical records containing patron data and sending them through the shredder. Other libraries adjusted privacy settings in their ILSes and other systems to stop collecting borrowing history by default. ALA promptly sent out guidance for libraries on updating privacy and law enforcement request policies and procedures. And it would be safe to assume that several people got into librarianship because of the profession’s efforts to protect privacy and push back against the Patriot Act.

Even with the flurry of activity in the profession early on, questions about the use of Section 215 to obtain patron data persist today. Even though the Justice Department testified in 2011 that Section 215 had not been used to obtain circulation records, the secrecy imposed on Section 215 searches makes it difficult to determine the precise extent of the Section’s library record collection activities.

While we cannot say for sure if Section 215 was used to obtain patron data, we know that other parts of the Act were used in attempts to get such data. Most notable was the government’s use of National Security Letters (NSLs) and accompanying gag orders to obtain patron data. The Connecticut Four successfully challenged the gag order on an NSL served to the Connecticut library consortium Library Connection. While the Connecticut Four took their fight to court, other libraries proactively tried to work around gag orders by posting warrant canaries in their buildings to notify patrons if they had been served an NSL.

Lessons Learned or Business as Usual?

The Patriot Act reminded libraries of the threat governments pose to patron privacy. Libraries responded to these threats with considerable energy and focus, and those responses defined library privacy work in the 21st century. Still, the lessons learned from the early days of the Act didn’t entirely transfer to other actors that pose as much of a threat to patron privacy as governments and law enforcement. While libraries could quickly dispose of risky patron data on paper after the Act’s passage, a substantial amount of today’s patron data lives in third-party databases and systems. The loss of control over patron data in third-party systems limits the ability to adjust quickly to new privacy threats. Technology has evolved to provide some possible protections, including encryption and other ways to restrict access to data, and legal regulations around privacy give both libraries and patrons some level of control over data privacy in third-party systems. Despite these advances in technology and law, data privacy in the age of surveillance capitalism brings new challenges that many libraries struggle to manage.

Some could argue that libraries sub-optimized data privacy protections in response to the Act’s threats, hyper-focusing on government and law enforcement at the expense of addressing other patron privacy risks. At the same time, the standards and practices developed to mitigate governmental threats to patron privacy can be (and to a certain extent have been) adapted to minimize these other risks, particularly with third parties. One of the first lessons of the initial days of the Act came from the massive effort of shredding and disposing of patron data in bulk at libraries throughout the country: data collected is data at risk of being seized by the government. Data can’t be seized if it doesn’t exist in the first place. As libraries continue to minimize risks around law enforcement requests, we must remember to extend those privacy protections to the third parties that make up critical library operations and services.

A Short Reflection on Uncertainty and Risk

A white woman standing with her back to the beach in front of waves coming to shore. A yellow sign in the foreground has an illustration of a shark and states "Shark sighted today - enter water at own risk".
Photo by Lubo Minar on Unsplash

We made it! We’re coming to you from our new server home. We’re still settling in, so please let us know if you come across something that isn’t quite working on the website. If you are one of our email subscribers and find this post in your spam box, you can add newsletter@ldhconsultingservices.com to your contacts list to help prevent future emails from being banished to the spam folder.

Now that the dust has settled, we regret to inform you that summer is almost over. Schools are back in session, summer reading programs are wrapping up for the season, and a new batch of LIS students are starting their first semester of library school. We also regret to inform you that the pandemic is still hanging in there, adding its own layer of stress and uncertainty on top of everything else.

Uncertainty is hard to plan for, even in non-pandemic times. Libraries with plans for phasing back in-building services find themselves changing those plans daily to keep up with changes in health ordinances, legal regulations, and parent organization mandates. We find ourselves back in the first few months of the pandemic, scrambling to figure out what to do. Then again, we haven’t stopped scrambling throughout the pandemic to find ways to provide services that won’t put both patrons and library workers at risk.

Risk assessment and management are exercises in dealing with uncertainty. We like to have neat solutions to neat problems; risk management tells us that problems are much messier and are less likely to be solved with neat solutions. Take, for example, four common responses used in determining how to manage risk:

  • Accept – Choosing to accept the risk, usually in cases where the cost of the realized risk is less than the cost of addressing the risk (see the sketch after this list)
  • Transfer – Shifting the risk to another party (another person, group, or tool) who is better situated to manage the risk
  • Mitigate – Adding checks or controls to limit risk in a particular situation
  • Eliminate – Changing something to remove or avoid the risk
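
As a toy illustration of the “accept” response above, the decision often comes down to a simple cost comparison; the numbers below are invented.

```python
# A toy illustration of the "accept" response above: accept a risk when
# the expected cost of the realized risk is less than the cost of
# addressing it; otherwise treat it. The numbers are invented.
def choose_response(expected_risk_cost: float, treatment_cost: float) -> str:
    if expected_risk_cost < treatment_cost:
        return "accept"
    return "transfer, mitigate, or eliminate"

print(choose_response(expected_risk_cost=500, treatment_cost=5_000))     # accept
print(choose_response(expected_risk_cost=50_000, treatment_cost=5_000))  # treat
```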

Some of you might be surprised that the last response, eliminate, is not the primary goal of risk management. This is partly due to the level of control we have over the situation that presents the risk. Some risks we cannot eliminate due to, well, the pandemic, while others are unavoidable due to the nature of our work – where we work, operational needs, external needs and pressures, and so on. In those instances where we cannot entirely eliminate a risk, we still have some control over our response to it, particularly by mitigating or transferring the risk.

While we cannot eliminate all risks in our libraries around the pandemic’s uncertainty, we can still work toward identifying and managing risks that we have more control over, including those risks around patron privacy. Here are a few resources to get you started on managing patron data privacy risks:

By focusing on risks that we are better situated to address through transference, mitigation, and elimination, we can avoid the inertia that comes with being overwhelmed by risks we have less control over. It might seem like rearranging the deck chairs on the Titanic, but living with so much uncertainty for so long can short-circuit our ability to identify and manage risk, particularly when we are not trained to manage risk during long periods of heightened uncertainty. If you find yourself at that point, you can take advantage of the start of the fall season to reset the privacy risk management button: make a list of privacy risks outside your control and of risks that you or your library are better able to manage. You might not be able to identify all the risks in one sitting, and that’s okay. If you are struggling to identify risks that you or your library can manage, revisit the earlier resources to help you through the process.

Managing risk requires accommodating uncertainty and variations of the same risk. Risk likelihoods and severity can change without notice. Risks also carry different severity, harms, and likelihoods for different people – what might be a low-harm risk for one person might be a risk with more significant harms for another. Risk management strategies help wrangle this uncertainty by providing some structure for responding to the uncertain nature of risk. While we can’t eliminate uncertainty, we can be better prepared to manage it in parts of our lives, such as the work that affects patron privacy.