FUD and Reality – Information Security and Open Source Software

A black cat and a grey tabby cat sit on top of a gray computer monitor. The top border of the monitor has a black and white sticker with the text "I <3 source code."
Image source: https://www.flickr.com/photos/miz_curse_10/1404420256/ (CC BY SA 2.0)

Librarians like our acronyms, but we’re not the only profession to indulge in linguistic gymnastics. The technology field is awash in acronyms: HTTP, AWS, UI, LAN, I/O, etc. etc. etc. One acronym you might know from working in libraries, though, is OSS – Open Source Software.

Library technology is no stranger to OSS. The archived FOSS4LIB site lists hundreds of free and open source library applications and systems ranging from integrated library systems and content management systems to metadata editing tools and catalogs. Many libraries use OSS not specific to libraries – a typical example is installing Firefox and LibreOffice on public computers. Linux and its multitude of distributions keep many library servers and computers running smoothly.

It’s inevitable, though, that when we talk about OSS, we run into another acronym – FUD, or Fear, Uncertainty, and Doubt. FUD is commonly used to paint a negative picture of a target, usually to the benefit of the person spreading the FUD. In the technology world, OSS is often depicted by proprietary software companies as inferior to proprietary software – the Microsoft section of the FUD Wikipedia page gives several good examples of such FUD pieces.

It should be no surprise that FUD exists in the library world as well. One example comes from a proprietary software company specializing in library management systems (LMS). We link to an archived version of the page in case it is taken down soon after this post is published; if nothing else, companies do not like being called out on their marketing FUD. The page poses as an article about the disadvantages of an LMS. In particular, the company claims that open source LMSes are not secure: they can be easily breached or infected by a computer virus, or you can even lose all your data! The only solution, of course, is to have the proprietary software company handle all of these disadvantages for you!

The article is a classic example of OSS FUD – tactics that sow fear, hesitation, or doubt without providing a reasoned and well-supported argument for the claims being made. However, this is probably not the first time you have run into the idea that OSS is insecure. One talking point about OSS insecurity is that security bugs stay unaddressed in the software for years. For example, the Heartbleed bug that caused so much havoc in 2014 was introduced into the OpenSSL code in 2012, resulting in a two-year gap during which bad actors could exploit the vulnerability. You’ve also probably run into various versions of the thinking around OSS security that Bruce Schneier describes below:

“Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll — in extreme cases — sneak back-doors into the code when no one is looking.”

OSS is open for all to use, but, if you follow the above line of thinking, it’s also open for all to exploit.

The good news is that, despite the FUD, OSS is not more insecure than its proprietary counterparts. However, we must also be wary of the unchecked optimism in statements claiming that OSS is more secure than proprietary software. The reality is that open source and proprietary software are subject to many of the same information security risks, mixed with the unique risks that come with each type of software. It’s not uncommon for a small OSS project to become dormant or abandoned, leaving the software vulnerable due to a lack of updates. Conversely, a business developing proprietary software might not prioritize security tests and fixes in its work, leaving its customers vulnerable if someone exploits a security bug. While there are differences between the two examples, both share the risk of threat actors exploiting unaddressed security bugs in the software.
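One quick signal of a dormant or abandoned project is how recently it has been updated and how busy its issue tracker is. Below is a minimal sketch of that kind of check, written in Python against the public GitHub API; it assumes the project is hosted on GitHub and that the requests library is installed, and the repository name is a placeholder rather than a real project.

```python
import datetime
import requests

def project_health(owner: str, repo: str) -> dict:
    """Pull a few project-health signals from the public GitHub API."""
    base = f"https://api.github.com/repos/{owner}/{repo}"
    repo_info = requests.get(base, timeout=10).json()

    # How long since the last push? A years-old date is a red flag.
    last_push = datetime.datetime.fromisoformat(
        repo_info["pushed_at"].replace("Z", "+00:00")
    )
    now = datetime.datetime.now(datetime.timezone.utc)

    return {
        "days_since_last_push": (now - last_push).days,
        "open_issues": repo_info.get("open_issues_count", 0),
        "archived": repo_info.get("archived", False),
    }

if __name__ == "__main__":
    # "example-org/example-ils" is a placeholder, not a real project.
    print(project_health("example-org", "example-ils"))
```

A check like this is only a starting point; it won’t tell you whether reported security issues were actually fixed, which still takes a human reading the project’s changelog and advisories.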

OSS, therefore, should be assessed and audited for security (and privacy!) practices and risks just like its proprietary counterparts. The nature of OSS requires some adjustments to the audit process to account for the differences between the two types of software. A security audit for OSS would, for example, take into account the health of the project: maintenance and update schedules, how active the community is, what security issues have been reported and fixed in the past, and so on. Looking at the dependencies of the OSS might also uncover security risks if a dependency comes from a project that is no longer maintained. Addressing any security issues an audit uncovers could take the form of working on and submitting a bug fix to the OSS project or finding a company that specializes in supporting OSS users to address the issue. As we wrap up Cybersecurity Awareness Month in the runup to Halloween, let’s get our scares from scary movies and books and not from OSS FUD.

Cybersecurity Awareness Month News Update: School Cybersecurity, Passwords, and Crying “Hack!”

A small gray tabby kitten paws at the Mac laptop screen displaying the Google search home page, with its hind paws standing on the keyboard.
Image source: https://www.flickr.com/photos/tahini/5810915356/ (CC BY 2.0)

There’s never a dull moment in Cybersecurity Awareness Month, with last week being no exception. Here are some news stories you might have missed, along with possible implications and considerations for your library.

K-12 cybersecurity bill signed into law

You might remember reading about a new federal cybersecurity bill being signed into law. You remembered correctly! On October 8th, the K-12 Cybersecurity Act of 2021 was signed into law. The Act doesn’t include a set of standards for schools looking for a compliance checklist. Instead, the Act tasks the Cybersecurity and Infrastructure Security Agency (CISA) with studying cybersecurity risks in K-12 educational institutions and determining which practices would best mitigate those risks. The recommendations will be published along with a training toolkit that schools can use as a guide to implement the recommendations at their institutions.

School libraries collect and store student data in several ways – the most common example being the patron record in the ILS. School libraries also rely heavily on third-party content providers, which in turn collect additional student data on both the library’s side and the vendor’s side. School library workers, stay tuned for updates on the study and recommendations! While it’s unclear whether the study will include school library systems in its assessment of cybersecurity risks, it’s more than likely that any recommendations that come out of the study will affect school libraries.

Sharing all the passwords

You should be using a password manager. You might already be using one for your personal accounts, but are you using a password manager for work? If you’re still sharing passwords with your co-workers through spreadsheets or pieces of paper, it’s past time for your library to use a password manager. Several password managers, such as LastPass and Bitwarden, have business or enterprise products that are well suited for managing passwords in the office. However, not all password managers can share passwords and other sensitive information outside of the app, particularly if the other person doesn’t have an account with the same manager you are using. There will be times when you want to share a password with someone outside your organization – a typical example is when a vendor needs to log into a system or app to troubleshoot an issue. But, for the most part, password managers only support secured sharing between people with accounts in the organization, so you’re stuck sharing those passwords in less secure ways.

However, if you are a 1Password user or your library uses 1Password’s business product, you no longer have this problem! 1Password users can now send account login information – including passwords – to anyone, including those who do not have a 1Password account. This new feature allows 1Password users to create a shareable link, with options to restrict sharing to specific people (by email address) and to set when the link expires (anywhere from 30 days down to after a single person views the link) – no more calling the vendor, no more having staff email passwords in plaintext. Nonetheless, if your library wants to make use of this new feature, it’s best to give staff guidance on how to create the links, including how to restrict access and set expiration, along with training and documentation.

When a “hack” isn’t a hack

This news update is more of a “cybersecurity education 101” than news, considering the level of 🤦🏻‍♀️ this story contains. A very brief overview of what happened in Missouri last week:

  1. A reporter from the St. Louis Post-Dispatch found that a Department of Elementary and Secondary Education website exposed the Social Security numbers (SSNs) of school teachers and administrators to the public through the site’s HTML source code.
  2. The newspaper notified the department about the security flaw, and the department took down the site in question.
  3. After the site was taken down, the newspaper published a story about the exposed SSNs on the now-defunct site.

Okay, so far, so good. Someone found a serious cybersecurity issue on a government website, reported it to the department, and waited until the issue was addressed before talking about it publicly. That’s pretty standard when it comes to disclosing security flaws. Let’s move on to the next item in the summary.

  4. The Governor of Missouri and other government officials responded to the disclosure, saying the reporter was a hacker and that the “hacker took the records of at least three educators, decoded the HTML source code, and viewed the social security number of those specific educators.”

🤦🏻‍♀️

There is a difference between hacking and exposing personal data on a publicly accessible website. It would have been a hack if the reporter had bypassed security measures to obtain sensitive data from an internal system, such as by using stolen account logins. If the reporter simply clicked the “View Source” menu option in their browser and found sensitive data right in the source code of a publicly accessible website, you have a security vulnerability resulting in a data leak!

The takeaways from this story:

  1. Do not hard-code sensitive data in your code. This includes passwords for scripts that need to access other systems or databases (see the sketch after this list).
  2. Review locally-developed and third-party applications that work with sensitive data for potential data leaks or other ways unauthorized people can improperly access the data.
  3. Do not punish the people who bring security issues to your attention! As we discussed in our Friendly Phishing post, punitive actions can lead to a reduction in reporting, which increases security and privacy risks. Other reporters or private citizens watching the Governor take action against the reporter might be dissuaded from reporting additional data security or privacy issues to the state government, increasing the chance that these issues will instead be exploited by bad actors.
  4. If the data was sitting on a publicly available site for someone to access via F12 or View Source in their browser, it is not a hack. Let this be a lesson learned, unless you want to end up being ratioed on Twitter like the Governor of Missouri.
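To make takeaway #1 concrete, here is a minimal sketch in Python of the difference between hard-coding a credential in a script and pulling it from the environment at run time. The variable name REPORTS_DB_PASSWORD is a hypothetical example, not a reference to any real system.

```python
import os

# Risky: anyone who can read the script (or the page it ends up embedded in)
# also gets the secret.
# DB_PASSWORD = "correct-horse-battery-staple"   # do not do this

# Safer: read the secret from an environment variable that is set outside the
# code, for example in the service configuration or a secrets manager.
DB_PASSWORD = os.environ.get("REPORTS_DB_PASSWORD")
if DB_PASSWORD is None:
    raise RuntimeError("REPORTS_DB_PASSWORD is not set; refusing to run.")
```

The same idea applies to any sensitive value: keep it out of the source code (and therefore out of View Source), and manage it somewhere access can be controlled and the value rotated.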

Information Security, Risk, and Getting in One’s Own Way

Maru demonstrating how certain information security measures can ultimately backfire and put the organization at risk if the measures add too many barriers for the user to go about their work. Source – https://twitter.com/rpteixeira/status/1176903814575796228

Let’s start this week’s Cybersecurity Awareness Month post with a phrase that will cause some of you to scream into the void and others to weep at your work desk quietly:

Admin privileges on work computers.

Rationing admin privileges on work computers is one example of an information security practice that both protects data and puts it at risk. Limiting a worker’s ability to install programs on their work computer reduces the chances of the system falling to a cyberattack via malware. It also reduces the chances of critical operations or processes failing if an app downloaded without IT support breaks after an OS update or patch. On the other hand, limiting admin privileges can motivate some workers to work around IT, particularly if IT has consistently denied requests for privileges or for installing new tools, or if the request process resembles something that only a Vogon would conceive of. These workarounds put data at risk when staff turn to third-party software with which the library has no contractual relationship or vendor security oversight. No contractual relationship + no evaluation of third-party privacy policies or practices = unprotected data.

IT is often its own worst enemy when it comes to information security. Staff don’t like barriers, particularly ones they see as arbitrary or that prevent them from doing their jobs. Each information security policy or practice comes with a benefit and a cost in terms of risk. Sometimes these practices and standards have hidden costs that wipe out any benefit they offer. In the example of computer admin privileges, restrictions might lead workers to use personal computers or third-party applications that the organization hasn’t vetted. We have to weigh that risk against the benefit of reducing the chances of malware finding its way into the system.

The benefit-cost calculation comes back to the question of barriers: what they are, how your policies and processes contribute to them, and the solutions or workarounds staff use to navigate them. Answering this question requires us to revisit the risk equation – calculating the cost or impact of a threat exploiting a vulnerability and deciding how to address that risk. By eliminating one risk through the barrier of disallowing admin privileges on staff computers, the organization accepts the multitude of risks that come with staff using personal devices or unvetted third-party applications and systems to work around the barrier.
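One rough way to picture that trade-off is the textbook risk score of likelihood times impact. The numbers in the Python sketch below are entirely made up for illustration; the point is that the hidden risk of workarounds belongs in the same calculation as the malware risk you set out to reduce.

```python
# Toy risk scoring: risk = likelihood (0 to 1) x impact (1 to 10).
# All numbers are illustrative, not measurements.

def risk(likelihood: float, impact: float) -> float:
    return likelihood * impact

# Option A: no admin rights. Malware risk drops, but shadow-IT workarounds appear.
option_a = risk(0.05, 9) + risk(0.40, 6)   # malware + unvetted third-party apps

# Option B: admin rights plus other controls (training, separate admin account).
option_b = risk(0.15, 9) + risk(0.05, 6)

print(f"Option A residual risk: {option_a:.2f}")   # 2.85
print(f"Option B residual risk: {option_b:.2f}")   # 1.65
```

With these particular made-up numbers, the stricter option carries more residual risk once the workarounds are counted, which is exactly the trap described above.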

Some barriers (for example, requiring authentication into a system that stores sensitive data) are necessary to reduce risk and secure data. The hard part comes in determining which barriers will not cost the organization more in the long run. In the case of admin privileges, we might consider the following options:

  • Creating two user accounts for each staff person: a regular account used for daily work and one local administrator account used only to install applications. The delineation of accounts mitigates the risk of malware infecting the local computer if the staff person follows the rules for when to use each account. The risk remains if the staff person uses the same password for both accounts or uses the admin account for daily work. Password managers can limit risks associated with reused passwords.
  • Creating a timely and user-friendly process for requesting and installing applications on work computers. This process has many potential barriers that might prevent staff from using the process, including:
    • long turnaround times for requests
    • lack of transparency with rejected requests (along with lack of alternatives that might work instead)
    • unclear or convoluted request forms or procedures (see earlier Vogon reference)

These barriers can be addressed through careful design and planning that involves staff. Nevertheless, some staff will interpret any request process as a significant barrier to completing their work.

Each option introduces some interruption to staff workflows; however, these barriers can be designed so that the security practices are unlikely to become a risk in themselves. We sometimes forget that decisions around information security also need to consider the impact those decisions will have on staff’s ability to perform their daily duties. It’s easy to get in our own way if we forget to center the end user (be it patrons or fellow library workers) in what we decide and what we build. Keeping the risk trade-offs in mind can help make sure we don’t end up tripping ourselves up trying to protect data one way, only to leave it unprotected in the end.

Just Published – Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook

Cover of the "Data Privacy and Cybersecurity Best Practices Train-the-Trainers Handbook".

Happy October! Depending on who you ask at LDH, October is either:

  1. Cybersecurity Awareness Month
  2. An excuse for the Executive Assistant to be extra while we try to work
  3. The time to wear flannel and drink coffee (never mind, this is every month in Seattle)

Since the Executive Assistant lacks decent typing skills (as far as we know), we declare October as Cybersecurity Awareness Month at LDH. Like last year, this month will focus on privacy’s popular sibling, security. We also want to hear from you! If there is an information security topic you would like us to cover this month (or the next), email us at newsletter@ldhconsultingservices.com.

We start the month with a publication announcement! The Data Privacy and Cybersecurity Training for Libraries, an LSTA-funded collaborative project between the Pacific Library Partnership, LDH, and Lyrasis, just published two library data privacy and cybersecurity resources for library workers wanting to create privacy and security training for their libraries:

  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook – The handbook is a guide for library trainers wanting to develop data privacy and cybersecurity training for library staff. The handbook walks through the process of planning and developing a training program at the library and provides ideas for training topics and activities. This handbook is a companion to the Data Privacy Best Practices Toolkit for Libraries published last year.
  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Workshops (under the 2021 tab) – If you’re looking for train-the-trainer workshop materials, we have you covered! You can now access the materials used in the two train-the-trainer workshops for data privacy and cybersecurity conducted earlier this year. Topics include:
    • Data privacy – data privacy fundamentals and awareness; training development basics; vendor relations; patron programming; building a library privacy program
    • Cybersecurity – cybersecurity basics; information security threats and vulnerabilities; how to protect the library against common threats such as ransomware and phishing; building cybersecurity training for libraries

Both publications include extensive resource lists for additional training materials and to keep current with the rapid changes in cybersecurity and data privacy in the library world and beyond. Feel free to share your training stories and materials with us – we would love to hear what you all come up with while using project resources! We hope that these publications, along with the rest of the project’s publications, will make privacy and cybersecurity training easier to create and to give at your library.

In Case of Emergency

A pull fire alarm with a sign next to it stating "Exit Alarm Only - this alarm does not summon the fire dept In case of Fire call 911"
Image source – https://www.flickr.com/photos/laurablume/5356203877 (CC BY ND 2.0)

Last Wednesday’s attempted insurrection at the US Capitol left many in various states of shock, despair, anger, and grief. As the fallout from the attempt continues to unfold, we are starting to learn more about the possible cybersecurity breaches that resulted from the attempt. Cybersecurity professionals, who are still trying to investigate the extent of the damage done by the SolarWinds attack weeks before, are now trying to piece together what could have been compromised when the mob entered the building. Stolen laptops and other mobile devices, unlocked desktop computers, paperwork left on desks – the immediate evacuation of congresspeople and workers meant that the mob had potential access to sensitive or confidential information as well as sensitive internal systems.            

Leaving a desk, office, or service point immediately to get to safety is a real possibility, even for libraries. Active shooter training has become standard for many US organizations, joining the common fire and severe weather drills where staff leave their workstations to head to safe areas. Other library workers have personal experience leaving their workstation to get to safety; in one instance, someone I knew barricaded themselves in a work office with other library staff after a patron started attacking them at the information desk. Physical safety comes first. Nonetheless, this leaves information security and privacy professionals planning how to mitigate the risk that comes with potential data and security breaches during these life-threatening emergencies.

Incident response planning and several cybersecurity strategies help mitigate risk during emergencies where staff immediately leave work areas. Preventative measures can include:

  • Encrypting hard drives on computers and mobile devices
  • Requiring multifactor authentication (MFA) for device and application access
  • Installing remote wipe software to wipe devices if they are reported missing or stolen
  • Not writing down passwords and posting them on computer monitors, keyboards, desks, etc.
  • Conducting an inventory of library staff computers and mobile devices (tablets, phones, etc.) – see the sketch after this list
  • Setting up auto-lock or auto-logoff on staff computers after a few minutes of inactivity
  • Storing confidential or sensitive data in designated secured network storage and not on local hard drives or USB drives
  • Limiting access to systems, applications, and data through user-based roles, providing the lowest level of access needed for the user to perform their daily work
  • Storing mobile devices and drives as well as sensitive paper documents in secured areas when not in use (such as a locked desk drawer or cabinet)
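The inventory item above can be as simple as a spreadsheet, but it helps to record which of these protections each device already has. Here is a minimal sketch in Python with hypothetical records; in practice the data would come from your asset-management system or IT department.

```python
import csv

# Hypothetical inventory records for illustration only.
devices = [
    {"asset_tag": "LIB-0012", "type": "laptop", "encrypted": True,  "mfa": True,  "remote_wipe": True},
    {"asset_tag": "LIB-0027", "type": "tablet", "encrypted": False, "mfa": True,  "remote_wipe": False},
]

# Flag anything missing a protection so it can be fixed before an emergency.
for device in devices:
    missing = [k for k in ("encrypted", "mfa", "remote_wipe") if not device[k]]
    if missing:
        print(f"{device['asset_tag']}: missing {', '.join(missing)}")

# Write the inventory out for review or for the incident response binder.
with open("device_inventory.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=devices[0].keys())
    writer.writeheader()
    writer.writerows(devices)
```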

After the emergency, an incident response plan guides the response to potential data breaches: containing the damage, stopping the attacker from doing more damage, and repairing the damage. The incident response plan also provides communication plans for users affected by the breach as well as any regulatory obligations for reporting to a government office or official.

All of this will involve a considerable amount of resources and time; however, the time spent in planning and training (think of the tabletop exercises mentioned in our post about gaming in cybersecurity training) will be far less than the time spent after the fact, when emotions and stress are running high and things get missed or fall through the cracks.

A Quick Data Privacy Check-in for The New Year

A small orange and white kitten sits on an Apple floppy drive, while a picture of a gray cat is displayed on an Apple monitor.
Image source: https://www.flickr.com/photos/50946938@N03/5957820087/ (CC BY 2.0)

Welcome to 2021! We hope that everyone had a restful holiday break. There might be some changes to your work environment for the new year that could affect the privacy and security of your patrons’ data. Let’s start this year off with a quick (and gentle) check-in.

Smart devices

Smartwatches, smart speakers, smart TVs – what new internet-enabled smart device has taken up residence in your home, office, or even on your person? You might not know that these devices can eavesdrop on your conversations and, in some instances, on what you type. If you are working with a patron or having a conversation with a colleague that includes patron information, what smart devices are in listening range that weren’t there before the new year?

Depending on the device, you might be able to prevent eavesdropping; other devices might not have this option. Disconnecting the device from the internet is also an option, but this might be more of a hassle than a help. The one sure way to stop a device from eavesdropping is to remove it from listening range or, better yet, to disconnect it from its power source.

Computers and mobile devices

A new year could mean a new computer or mobile device. If this is you, and if you are using a personal computer or mobile device for working with patrons or patron data, don’t forget to do the following while setting up your new device:

  • Install antivirus software (depending on your organization, you might have access to free or discounted software)
  • Install the VPN client provided by your organization
  • Install privacy-preserving tools and browser extensions
  • Enable auto-updates for the operating system and any applications installed on the device
  • Review the privacy and security settings for your operating system:
    • Mac and iOS devices – Apple recently published a document listing security and privacy settings on all Apple devices. The tl;dr summary by Lifehacker is a good resource if you’re not sure where to begin
    • Android – Computerworld’s guide to Android privacy is long but worthwhile if you want a list of actions to take based on the level of privacy you want on your device. Also, visit Google’s Data Privacy Settings and Controls page to change your Google account privacy settings (because now is as good a time as any to review Google settings).

Evergreen recommendations

Even if you didn’t get a new smart device or computer for the holidays, here are a few actions you can do with any device to start the new year right by protecting your and your patrons’ privacy:

Take a few moments this week to review privacy settings and risks – a moment of prevention can prevent a privacy breach down the road.

Patron Privacy Support: Holiday Edition

An orange cat looking at a laptop screen and pawing a mouse tracking pad.
Image source: https://www.flickr.com/photos/25473210@N00/421211549 (CC BY 2.0)

Black Friday and Cyber Monday have come and gone, but there are still plenty of opportunities to buy the last-minute gift to mark the end of a rough year. Patrons who might have gone to the library to ask for help setting up their new tech gadget will still find their way to the library help desk via chat, email, or phone. Other patrons might come to the help desk with questions from researching which tech gadgets to gift to others (or to themselves!). Why not use this time to do a bit of privacy instruction?

For patrons wondering what to buy – Mozilla’s *privacy not included is an excellent starting point for researching tech gifts that connect to the internet. The guide contains information about data privacy and security for each product and even warns you if a particular product doesn’t meet a minimum security standard.

For patrons who are shopping online – Even though most of our lives have shifted online thanks to the pandemic, patrons might not have online safety and privacy in mind while shopping online. Account privacy settings, passwords, credit cards, web tracking, digital fingerprinting, phishing emails – the list of vulnerabilities and threats goes on and on. Having a sense of the patron’s threat model will help you determine which guides and resources you can use to help the patron protect their privacy while online. The Virtual Privacy Lab from the San Jose Public Library gives patrons a customizable privacy toolkit they can then use to protect their online privacy and security. You can also send along this short newsletter from SANS about secure online shopping that will help patrons protect themselves while they shop.

For patrons setting up their new tech gadget – The patron is excited about their new tech gadget! That is, until they can’t figure out how to set it up. This is a great opportunity to introduce privacy-preserving practices found in the Data Detox Kit and in other resources on the Choose Privacy Every Day site, so patrons can set up their devices to protect their privacy and security right from the start.

Last, an evergreen reminder: do not buy or gift an Amazon Ring.

No matter the gadget question or help request this holiday season, there’s always an opportunity to give the gift of privacy by sharing ways patrons can protect their data. While this year might make it a challenge to provide the same level of support at the information or help desk, the above online resources make meeting that challenge a little easier for both patrons and library staff. Happy shopping and tech support-ing!

FYI – New Newsletter Privacy Policy

Today (as in an hour before publishing this post!) MailPoet announced that it has been acquired by WooCommerce. LDH uses MailPoet for our weekly newsletter mailings. We will be reviewing the app’s new privacy policy to decide if we should continue to use it. While we do not currently use any of MailPoet’s analytics features, we will need to determine if this acquisition means a change in data collection and processing by the third-party vendor. LDH will announce any changes to the newsletter app or other updates in a future post. If you have any questions in the meantime, please feel free to email us.

Security Without Privacy

Powerpoint slide listing the types of data collected by typical web app logs, including timestamps, user behavior, biometric data, and geographic location.
Slide from the SNSI October Webinar

Academic libraries have been in the information security spotlight due to the resurgence of Silent Librarian. The harvesting of academic user accounts gives attackers access to whatever those users can access on the campus network, including personal data. However, attackers gaining access to library patron data was not the reason academic library information security was in the news again this past month.

Protecting The Bottom Line

In late October, the Scholarly Networks Security Initiative (SNSI) presented a webinar [slides, transcript] that made several controversial statements and proposals. The one that caught the attention of the academic research and library worlds is the proposal of a publisher proxy tool to monitor user access and use of publisher resources. In the transcript and slides, the proposal included tracking behavioral data in addition to other personally identifiable data. For example, the publisher would actively track the subjects of the articles that the user is searching for and reading:

159

00:29:10.020 –> 00:29:17.280

Corey Roach: You can also move over to behavioral stuff. So it could be, you know, why is a pharmacy major suddenly looking up a lot of material on astrophysics or

160

00:29:18.300 –> 00:29:27.000

Corey Roach: Why is a medical professional and a hospital suddenly interested in internal combustion things that just don’t line up and we can identify fishy behavior.

While there are other points of contention in the presentation (we recommend reading the transcript and the slides, as well as the articles linked above), the publisher proxy tool brings up a perennial concern around information security practices that libraries need to be aware of when working with IT and publishers.

You Say Security, But What About Privacy?

Security and privacy are not one-to-one equivalents. We covered the differences between security and privacy in a previous post. Privacy focuses on the collection and processing of personal data, while security focuses on protecting organizational assets that may include personal data. Privacy is impossible without security: privacy relies on security to control access to and use of personal data. However, there is a misconception that security guarantees privacy. Security is “do one thing and do it well” – protect whatever it’s told to protect. Security does not deal with the “why” of data collection and processing. It does the job, no questions asked.

When security measures like the proxy tool above are touted as protecting publisher assets, the question of “why this data collection and tracking?” gets lost in the conversation. Libraries, in part, also collect behavioral data through their proxies to control access to library resources. Even though this data collection by libraries is problematic in itself, the data in the library’s proxy is collected by the library and is subject to library policy and the legal regulations around library patron data. The same information collected by a vendor tool may not be subject to the same policies and regulations – outside of California and Missouri, there are no state laws specifically regulating vendor collection, processing, and disclosure of library patron data. Therefore, any data collected by the vendor is only subject to whatever was negotiated in the contract and to the vendor’s privacy policies, both of which most likely allow for extensive collection, processing, and disclosure of patron data. Security that uses patron data doesn’t necessarily guarantee patron privacy and could even put patron privacy in jeopardy.
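To see how much patron behavior an ordinary proxy or web server log already captures, consider a single log line. The Python sketch below parses a fabricated line in an Apache-style combined format with a username field, which is one common way access logs are configured; the line, the field layout, and the account name are all made up for illustration.

```python
import re

# A fabricated log line: IP address, username, timestamp, and requested URL.
line = ('203.0.113.7 - jdoe42 [18/Nov/2020:14:03:22 -0800] '
        '"GET /journal/astrophysics/article123 HTTP/1.1" 200 48213')

pattern = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+) (?P<size>\d+)'
)

match = pattern.match(line)
if match:
    # One line ties a named account to a time, a network location, and the
    # subject of the article being read: exactly the behavioral data at issue.
    print(match.groupdict())
```

Who holds that log, for how long, and under which policy is the difference between library-controlled data and vendor-controlled data.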

Bringing Privacy into Library InfoSec

Academic libraries are part of a campus system and are one of many ways an attacker can gain access to campus assets, including personal data, as demonstrated by Silent Librarian. However, academic libraries are also targets for increased surveillance in the name of information security, as illustrated by the SNSI presentation. The narrative of “academic library as the weak link in the campus network” can force libraries into a situation where patron privacy and professional ethics are both compromised. This is particularly true if the narrative is driven by information security professionals not well acquainted with privacy and data ethics, or by vendors who stand to benefit financially from the data collected through this increased surveillance of library patrons.

Library organizations and groups are weighing in on how information security should consider library privacy and data ethics. This Tuesday, ALA will be hosting a Town Hall meeting about surveillance in academic libraries. DLF’s Privacy and Ethics in Technology Working Group and the Library Freedom Project, co-collaborators with ALA’s Town Hall event, will most likely add to the conversation in the coming weeks with resources and statements. We’ll keep you updated as the conversation continues!

In the meantime…

A small postscript to the blog post – one recurring theme that we come across when talking to libraries about privacy is the importance of relationships with others in and outside the library. These relationships are key to creating buy-in for privacy practices as well as to creating strong privacy advocates in the organization. What type of relationship do you have with your organization’s information security folks? If you are still wondering where to start, check out this short presentation about building organizational relationships to promote a strong privacy and security culture.

The Threat Within

A headshot of Chadwick Jason Seagraves with text overlay: 'Anonymous Comrades Collective - Doxer Gets Doxed: "Proud Boy" Chadwick Jason Seagraves of NCSU'

People sometimes ask what keeps privacy professionals up at night. What is the one “worst-case scenario” that we dread? Personally, one of the scenarios hanging over my head is insider threat – when a library employee, vendor, or another person who has access to patron data uses that data to harm patrons. A staff person collecting patron addresses, birthdays, and names to steal patrons’ identities is one example of insider threat. Another is a staff person accessing a fellow staff member’s patron record to obtain personal information used to harass or stalk them.

Last week, an IT employee at NCSU was doxed as a local leader of a white supremacist group. This person, who had worked in IT for the libraries in the past, doxed individuals, including students at his own university, to harass and, in some cases, incite violence toward the people being doxed. As an IT employee, this person most likely had unchecked access to the personal information of students, staff, and faculty. It wouldn’t be a stretch to say that he still had access to patron information, given his connections to the library and his IT staff position.

Libraries spend a lot of time and attention worrying about external threats to patron privacy: vendors, law enforcement, even other patrons. We forget that sometimes the greatest threat to patron privacy works at the library. Library workers who have access to patron data – staff, administration, board members, volunteers – can exploit patrons by using that data for financial gain, as in identity theft, or harm them by searching for specific library activity, checkouts of certain materials, or even names and other demographic information with the intent to harass or assault. The reality is that there might not be many barriers, if any, to stop library workers from doing so.

The good news is that there are ways to mitigate insider threat in the library, but the library must be proactive in implementing these strategies for them to be the most effective:

Practice data minimization – only collect, use, and retain data that is necessary for business operations. If you don’t collect it, it can’t be used to harm others.

Implement the Principle of Least Privilege – who has access to what data and where? Use roles and other access management tools to provide staff (and applications!) access to only the data that is absolutely needed to perform their intended duty or function.
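As a toy illustration of what least-privilege, role-based access can look like, here is a minimal sketch in Python with hypothetical roles and permissions; a real library would rely on the access controls built into its ILS, directory, or identity provider rather than code like this.

```python
# Hypothetical role-to-permission mapping: each role gets only what it needs.
ROLE_PERMISSIONS = {
    "circulation": {"checkout", "view_patron_contact"},
    "reference":   {"view_checkout_counts"},   # aggregate counts only
    "volunteer":   {"shelve"},                 # no patron data at all
    "admin":       {"checkout", "view_patron_contact", "export_reports"},
}

def allowed(role: str, action: str) -> bool:
    """Deny by default; allow only what the role explicitly includes."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(allowed("volunteer", "view_patron_contact"))    # False
print(allowed("circulation", "view_patron_contact"))  # True
```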

Regularly review internal access to patron data – set up a scheduled review of who has access to what patron data. Develop and implement policies and procedures for revoking or changing access to patron data when an employee or other library worker or affiliate changes roles in the organization or leaves the library.

Confidentiality Agreements For Library Staff, Volunteers, and Affiliates – your privacy and confidentiality policy should make it clear to staff that patrons have the right to privacy and confidentiality while using library resources and services. Some libraries go further in ensuring patron privacy by using confidentiality agreements. These confidentiality agreements state when patron data can be accessed and the acceptable uses for patron data. Violating the agreement can lead to immediate termination of employment. Here are some examples of confidentiality agreements to start your drafting process:

Regularly train and discuss privacy – ensure that everyone who is involved with the library – staff, volunteers, board members, anyone who might potentially access patron data as part of their role with the library – is up to date on current patron privacy and confidentiality policies and procedures. This is also an opportunity to include training scenarios that involve insider threat to generate discussion and awareness of this threat to patron privacy.

A note about IT staff, be it internal library IT staff or an external IT department (campus IT, city government IT, or another form of organizational IT): do not automatically assume that IT staff are following privacy and security standards and policies just because they are IT. Now is the time to discuss with your IT connections what their current access is and what minimum access they need for daily operations. Even if the IT department practices good security and privacy hygiene (such as following the Principle of Least Privilege), any IT staff member who works with the library in any capacity should, at the very minimum, sign a confidentiality agreement and be included in training sessions.

A data inventory is a good place to start if you are not sure who has access to what data in the library. The PLP Data Privacy Best Practices for Libraries project has several templates and resources to help with creating a data inventory, assessing privacy risks, and practical actions libraries can take in reducing the risk of an insider threat.

Libraries serve everyone. We serve patrons who are already at high risk for harassment and violence. Libraries must do their part in mitigating the risk that insider threat creates for our patrons who depend on the library for resources and support. Otherwise, we become one more threat to our patrons’ privacy and potentially their lives or the lives of their loved ones.

NaNoWriMo: Data Privacy Edition

A Siamese cat sitting in front of an open laptop computer.
‘Tis the season for all things writing. Your cat might have some opinions about that… Source: https://www.flickr.com/photos/cedwardmoran/4179761302/

Welcome to this week’s Tip of the Hat!

Today marks the second day of NaNoWriMo – National Novel Writing Month. Every November, many aspiring (and established) writers spend countless hours writing to reach the goal of a 50,000-word manuscript. If you do the math, that works out to about 1,700 words a day! Novels are the primary genre for NaNoWriMo, but that hasn’t stopped others from taking the idea of a writing month and applying it to other genres. For example, this month is also AcWriMo, or Academic Writing Month, for academics who need to buckle down and write that research book or article.

With November being the month of writing, why not join the fray by writing about data security and privacy? Our recent Cybersecurity Awareness Month posts discussed the importance of interactive and engaging training, so the question now is how you can build data security and privacy training that won’t put staff to sleep or, worse, demotivate them from taking proactive privacy and security measures to protect patron data. One way to create engaging training is to use stories and scenarios. Drawing from real-world examples is a start, but the challenge is turning an example into a scenario where training participants are invested in addressing the problems presented in the story. Here are a few tips to help you with the writing process!

Characters – who are the major players in the scenario? Staff person, patron, vendor, random person who comes off the street, the cat who keeps sneaking into the library building? Once you have the characters, what roles do they play? What are their motivations? Why do they do the things they do or think the way they think?

So many questions, even for a short scenario! Take a page from User Experience (UX) and create personas to help with the character-building process. Even a short list of who they are, what motivates them, what they want, and what they know can help hone the scenario narrative, as well as introduce training attendees to common types of motivations, knowledge and skill levels, and the different types of threat actors or people who might face additional privacy risks.

If you need more inspiration for characters, may I introduce you to Alice and Bob and their crypto-friends?

Story – Your real-world examples or the case studies you learn from others are two good places to start. That shouldn’t stop you from building scenarios from scratch, though! Or perhaps you would like to modify a real-world example into a scenario that better fits the training you’re developing. One concept to explore for your scenario is threat modeling: identifying potential weaknesses at the library (systems, procedures, policies, etc.), who or what might take advantage of each weakness, and what can be done to avoid or mitigate the threat. The threat modeling process can uncover a complex web of threats and vulnerabilities that interact with each other. It can also lead to valuable conversations with trainees about how one vulnerability can create a ripple effect if exploited, or how a threat actor isn’t always acting with malicious intent. Sometimes the most dangerous threat actors are not aware that they are putting data privacy at risk, such as a staff person with good intentions sharing patron data without knowledge of patron privacy procedures.
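If you want a lightweight structure for that threat-modeling exercise, here is a minimal sketch in Python of an asset/threat/vulnerability/mitigation table with a single hypothetical entry; each row can double as the seed of a training scenario.

```python
# A single hypothetical threat-model entry for scenario writing.
threat_model = [
    {
        "asset": "patron checkout history",
        "threat": "well-meaning staff member shares records with a partner organization",
        "vulnerability": "no written procedure for third-party data requests",
        "likelihood": "medium",
        "impact": "high",
        "mitigation": "data request procedure plus annual privacy training",
    },
]

for row in threat_model:
    print(f"Scenario seed: {row['threat']} (asset at risk: {row['asset']})")
```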

Visual aids – What’s a story without visual aids? You might not have the resources or acting chops to create scenario videos, but there are always pictures to give life to your characters and scenarios. Luckily, there are several Creative Commons licensed resources to choose from:

You can also search for CC-licensed photos on Flickr and Creative Commons.

There is a lot more you can do with building scenarios for your data privacy and security trainings, but these three areas will hopefully get you started down the path of becoming an accomplished author… of training scenarios 😉 Enjoy your writing journey, and good luck!