Turning Acknowledgment into Action

Several people putting up a net banner with an orange outline of Chief Seattle's face and text underneath the face - "Chief Seattle is Watching"
Image source: https://www.flickr.com/photos/backbone_campaign/21483972929/ (CC BY 2.0)

We’re going to start this post with a quick exercise. Where do you live and work? Easy enough, right? Some of you can probably name a street, neighborhood, town, city, or state off the top of your head.

Let’s take the first question and change a couple of words – whose land do you live and work on?

Some of you might already know whose land you live and work on. For those who do not, you can visit https://native-land.ca/ to learn more about the Indigenous lands you currently occupy.

As we wrap up Native American Heritage Month this week, we are taking some time to give context around the land acknowledgment included in our recent talks. You can use the resources at the end of the post to craft acknowledgments that go beyond a statement of whose land you’re on.

Acknowledgment as the First Step

LDH lives and works on the unceded, traditional land of the Duwamish People, the first people of Seattle.

The above-italicized sentence is the start of the land acknowledgment in recent LDH talks. Many of us have encountered similar statements in various events and presentations. Land (or territory) acknowledgments sometimes stop here, naming the peoples whose land we’re on. However, this approach lacks the full acknowledgment of how the land became occupied. It also doesn’t acknowledge the present-day impact this occupation has on the people.

The Duwamish Tribe was the first signatory of the Treaty of Point Elliott in 1855. The Tribe has been denied the rights established in the treaty for over 165 years. The United States federal government currently does not recognize the Duwamish Tribe, denying the Tribe the rights and protections of federal recognition.

Naming the treaty is important in giving the historical context around the occupation of the land, but equally important is the explicit statement that the treaty has yet to be honored by the federal government. The Duwamish Tribe is not federally recognized, which is important to acknowledge because of the historical impact on the Tribe and the current impact on the Tribe’s rights to funding for and access to housing, social services, and education, among other resources and services.

The Duwamish People are still here, continuing to honor and bring to light their ancient heritage.

Indigenous people are still here. It’s easy for a land acknowledgment to stop at the past and never venture into the present. But an acknowledgment of the present has to go beyond education and move into action.

Calls to Action

A portion of the speaker’s fee from the conference will be donated to Real Rent Duwamish. Real Rent serves as a way for people occupying this land to provide financial compensation to the Tribe for use of their land and resources – https://www.realrentduwamish.org/

The Tribe has started a petition to send to our state congresspeople to create and support a bill in Congress that would grant the Tribe federal recognition. The link to the petition is on the slide – https://www.standwiththeduwamish.org/

You are welcome to join me in donating to Real Rent or signing the petition.

The second half of the acknowledgment consists of two specific calls to action. Each action provides an opportunity for event attendees to support or advocate for the Duwamish People, whose land LDH occupies. Real Rent Duwamish provides financial support and resources for the Tribe through a voluntary land tax. The petition aims to gather support for a bill granting the Tribe federal recognition, giving the Tribe access to services and resources available to other treaty tribes. Attendees who cannot financially donate to Real Rent can provide non-financial support through the petition.

LDH’s acknowledgment focuses on calls to action around solidarity with the Duwamish People. Other land acknowledgments make the additional call for event attendees to research whose lands they occupy through https://native-land.ca/. Clicking on a specific territory brings up a page with resources where attendees can learn more about the Indigenous people whose land they’re on. For example, the Duwamish Tribe page on the site also links to ways to support the Tribe. Other calls to action found in land acknowledgments include supporting water protectors, such as those working to stop Line 3.

Resources

Below are some resources you can use to inform yourself and others about the land you occupy, as well as what you and others can do to act in solidarity with Indigenous people in your acknowledgments and beyond.

Libraries (and Archives) as Information Fiduciaries? Part Three

A collection of football tickets and postcard invitations in a clear archival sleeve.
Image source: https://flickr.com/photos/27892629@N04/15959524202/ (CC BY 2.0)

Welcome back to the third installment of the information fiduciaries and libraries series! It’s been a while since we explored the concept of libraries acting as a trusted party managing patron personal data. Thanks to Tessa Walsh’s recent demo of Bulk Reviewer, we got the nudge we needed to tackle part three of the series. You can catch up on Parts One and Two if you need a refresher on the subject.

Managing Personal Data in a Collection

We left off the series with the question of what happens to a library’s information fiduciary role when the personal data it is entrusted with is part of the collection. The relationship between the personal data in the collection, the person, and the library or archive is not as straightforward as the relationship between the library and the patron generating data through their use of the library. Personal papers and collections donated to archives contain different types of personal data, from financial and medical records to personal secrets. What happens when a third party donates papers containing highly personal information about another person to a library or archive? In the case of a person donating their own documents, what happens when those documents include personal data of another person who may not have consented to having that data included in the donation? Moving from the archive to the institutional repository, what happens when a researcher submits research data containing identifiable personal data, be it a spreadsheet that includes Social Security numbers or oral histories containing highly personal information about a living person?

As you probably already guessed, these complications are only the start of the fiduciary responsibilities of libraries and archives surrounding these types of personal data. We’ve covered redacting PII from digital collections in the past, but redaction of personal data to protect the privacy of the people behind that data only addresses a small part of how libraries and archives can fulfill their information fiduciary role. Managing personal data in collections means balancing the best interests of the library or archive, the donor, and the people behind the personal data included in the donated material, who may not be the same people as the donor.
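To give a concrete (and deliberately tiny) picture of what this work can look like in practice, here is a minimal sketch of scanning a donated spreadsheet for SSN-shaped values before ingest. It is a toy stand-in for purpose-built tools like Bulk Reviewer, and the file name and regular expression are illustrative assumptions:

```python
# Minimal sketch: flag SSN-shaped values in a donated CSV before ingest.
# This is a toy illustration only -- purpose-built tools such as Bulk
# Reviewer cover far more PII types. The file name is a placeholder.
import csv
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., 123-45-6789

def flag_ssn_cells(path):
    """Return (row, column) positions of cells containing SSN-shaped values."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as handle:
        for row_number, row in enumerate(csv.reader(handle), start=1):
            for column_number, cell in enumerate(row, start=1):
                if SSN_PATTERN.search(cell):
                    flagged.append((row_number, column_number))
    return flagged

# A human should review every flagged cell before deciding whether to
# redact, restrict access, or go back to the donor.
for row_number, column_number in flag_ssn_cells("donated_research_data.csv"):
    print(f"Possible SSN at row {row_number}, column {column_number}")
```

The point of the sketch is the workflow, not the regex: automated scanning only surfaces candidates, and the judgment calls about whose interests are at stake remain human work.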

Thankfully, we don’t have to navigate this complex web of relationships alone when determining how to manage the collection in the best interest of the people behind the data. The Society of American Archivists’ Privacy & Confidentiality Section can help libraries and archives manage personal data in their collections. If you are looking for documentation around privacy in archives, check out the documentation portal. Have too many types of personal data to know where to start? The section’s bibliography can lead you to the right resources for each major type of personal information in your collection. Perhaps you want to know more about current issues and concerns around personal data in collections. The RESTRICTED blog has you covered, alongside webinars such as Tessa’s demo of Bulk Reviewer mentioned at the start of this post. We highly recommend checking out the mini-blog series from Heather Briston, following up on her webinar “It’s Not as Bad as You Think – Navigating Privacy and Confidentiality Issues in Archival Collections.”

Beyond the section, you might also find the following publications helpful in determining how your library or archive should fulfill its responsibilities to the people behind the data in your collections:

  • Botnick, Julie. “Archival Consent.” InterActions: UCLA Journal of Education and Information Studies 14, no. 2 (2018). https://doi.org/10.5070/D4142038539.
  • Mhaidli, Abraham, Libby Hemphill, Florian Schaub, Jordan Cundiff, and Andrea K. Thomer. “Privacy Impact Assessments for Digital Repositories.” International Journal of Digital Curation 15, no. 1 (December 30, 2020): 5. https://doi.org/10.2218/ijdc.v15i1.692.

This is only a small selection of what’s available, but the Privacy & Confidentiality Section’s resources are an excellent place to start untangling the complex web of determining what is in the best interest of all parties involved in managing the personal data in your collections.

Before we end our post, there is one question that a few of our readers might have – can archivists guarantee the same level of confidentiality as lawyers or doctors can in protecting personal information in legal matters?

A Question of Archival Privilege

Some of our readers might remember discussions about archival privilege in the early 2010s stemming from the litigation surrounding the Belfast Project oral histories. Archival privilege is not legally recognized, despite legal arguments for such a privilege or for tying it to researcher privilege in court (such as in Wilkinson v. FBI and Burka v. HHS). These rulings mean that materials in a collection are subject to search via subpoenas and warrants, which can lead to privacy harms for those whose personal data is included in those collections. Nevertheless, it’s still worthwhile to revisit the calls for such a privilege and the discussions of what archival privilege would look like.

Even though Boston College successfully appealed the initial order to hand over all the records listed in the subpoena, we are still left with the question of whether the archives profession should push for privileged relationships between the archives and donors or other individuals represented in the collections. We will leave the discussion of whether such a privilege should exist (and in what form) to our readers.

Just Published – Licensing Privacy Vendor Contract and Policy Rubric (Plus Bonus Webinar!)

Happy National Spicy Hermit Cookie Day! Today is your day if you need an excuse to make a batch of cookies to prepare for the baking rush in a few weeks. While the term “hermit” refers to the cookie’s ability to keep for months, we at LDH are not exactly sure if we can call a cookie a literal hermit. Nevertheless, we know what can make someone into a hermit – spending countless hours reading vendor contracts.

(We would like to apologize for that transition. Here is a picture of a tray of freshly baked cookies to make up for it.)

The lucky academic library workers who deal with content platform vendor contracts know all too well the frustrations with these contracts, particularly around data privacy and security. Contracts are notorious for being obtuse and dense, but an added complication with content platform contracts is the limited and vague language around our patrons’ data – what data is collected, why the vendor is collecting it, how they’re collecting patron data and sharing it with other third parties, what data rights patrons have, and so on. The complications don’t stop there. Academic library workers not only have to negotiate data privacy with the vendor but, more often than not, find themselves negotiating for privacy internally at the institutional level, advocating for and educating institutional peers about patron privacy rights and needs. Protecting patron privacy shouldn’t be this hard, but this is the reality many academic library workers face in the contract evaluation and negotiation processes.

The Licensing Privacy Project is here to help. The Mellon Foundation-funded project just published the Vendor Contract and Policy Rubric to streamline the evaluation and negotiation processes for content vendor contracts and policies. Academic library workers can use the rubric to evaluate contracts for potential data privacy and security issues in eight key privacy domains, including data collection and user surveillance. The rubric brings together several well-known library privacy standards and practices to streamline the evaluation process, noting which vendor privacy practices could meet those standards and which to flag for further evaluation and negotiation. The supplementary glossary and example contract language resources provide definitions for common privacy terms and examples of contract language to look out for in specific privacy domains. The rubric’s interactive features allow for sharing evaluation notes, identified privacy risks, and ways to mitigate those risks with the library and institutional staff who are part of the negotiation process.

If you want to learn more about the rubric and how you can use it at your academic library, make sure to sign up for the webinar this Wednesday (11/17) at 1 pm Central Standard Time. Not only will you learn more about the rubric, but you will also get a chance to brainstorm with colleagues about all the possible ways the rubric can help you advocate for patron privacy during the contract negotiation process. If you can’t make it, don’t worry – the webinar will be recorded. We hope to see you there!

Don’t Forget About Privacy While Turning Back the Clock

Last weekend was when we finally got our one hour back (for those of us still observing Daylight Saving Time [DST] in the US). Instead of sleeping in, though, we are barraged with public service announcements and reminders to spend that hour taking care of things that otherwise get ignored. That fire alarm battery isn’t going to change itself! Like #DataSpringCleaning, the end of DST is a great opportunity to take care of privacy-related tasks we’ve been putting off since spring.

What are some things you can do with the reclaimed hour from DST?

  • Choose and sign up for a password manager – If you’re still on the fence about choosing a password manager, check out our post about the basics of selecting one. Once you get past the inertia of choosing a manager, switching becomes a smoother process. Instead of moving all your accounts to the password manager at once, you can enter each account’s information into the manager the next time you sign into that account. Using the manager’s password generator, you can also take that time to change the password to a stronger one. And while you’re logged in…
  • Set up multifactor authentication (MFA) – You should really turn on MFA for your accounts if you haven’t already done so. Use a security key (like a YubiKey) or an authenticator app for MFA if possible; nevertheless, the less secure versions of MFA – SMS and email – are better than no MFA. Read about MFA on the blog if you’re curious to learn more.
  • Review privacy and security settings for social media accounts – Social media sites are constantly adding and changing features. It’s good to get into the habit of checking your social media account settings to make sure that your privacy and security settings are where you want them to be. Another thing you might want to check is how much of your data is being shared with advertisers. Sites like Facebook and Twitter have account settings sections dedicated to how they use your data to generate targeted ads.

Your library also has a reclaimed hour from DST. What can you do at work with that reclaimed hour?

  • Review the privacy policy – It never hurts to review the privacy policy. Ideally, the privacy policy should be updated regularly, but even having a review schedule in place doesn’t guarantee that the review actually gets done. If the policy missed its regularly scheduled review, it might be worthwhile to push for the overdue review to ensure the policy’s alignment with current professional standards, codes, and legal regulations.
  • Check your department or team procedures against the privacy policy – Your department work procedures change regularly for various reasons, such as changes in technology or personnel. These changes might take these procedures out of alignment with the current privacy policy. Relatedly, an update to the privacy policy might need to be reflected in changes to the procedure. Review the two sets of documents – if they’re not in alignment, it’s time to set up a more formal document review with the rest of the department. Now is also an excellent time to set up a schedule for reviewing procedures against the privacy policy (as well as privacy-adjacent policies) on a regular basis if such a schedule doesn’t already exist.
  • Shred paper! – Take time to look around your workspace for all the pieces of paper that have sensitive or patron data. Do you need that piece of paper anymore? If not, off to the office shredder it goes. Grab a coffee or a treat on your way back from the shredder while you’re at it – you earned it ☕🍫

We won’t judge you if you ultimately decide to spend your reclaimed hour sleeping in (or changing that fire alarm battery). Nevertheless, making a habit of regularly checking in with your privacy practices can save you both time and trouble down the road.

It’s Dangerous to Go Alone

A cross stitch of a pixelated old man with a white beard flanked by two pixelated fires. A pixelated sword lies in front of the old man. Text in white above the scene: "It's dangerous to go alone. Take This."
Image source: https://www.flickr.com/photos/12508267@N00/31229743046/ (CC BY 2.0)

Juan saw his recent promotion to Director of Access Services at Nebo University Libraries as an opportunity to change his library’s approach to patron privacy. However, Juan knew that becoming a manager of one of the largest departments in the libraries would not altogether remove the roadblocks he kept running into when he advocated for more robust privacy policies and practices as a staff member. Juan now had to figure out how to use his new position to advocate for the privacy changes he had long been pushing for…

Juan was one of the four fictional library workers introduced to participants in a recent library privacy workshop. Unlike the other three, Juan was in a unique position: instead of addressing privacy concerns with other academic departments or campus members, Juan focused on the library itself. When he was still staff, Juan had some limited success in getting better privacy protections at the library. Like many others, though, he ran into organizational roadblocks when trying to change privacy practices on a larger scale. Newly promoted and with new administrative political capital in the library, Juan thinks he’s in a better position to push for privacy changes throughout the entire library system.

However, Juan is not considering one essential thing – it takes much more than one person in a library to create a sustainable culture of privacy. Many of us have been in the same situation as Juan, going out on our own to push for privacy changes in our libraries. We do this on top of everything else we are responsible for in our daily duties. Sometimes we rationalize this additional workload by bending and stretching existing job responsibilities without formally accommodating the new ones. Other times, we deem privacy work so important that we are willing to sacrifice a portion of our well-being to ensure our patrons are protected (hello, Vocational Awe). This might gain us a couple of small wins in the short term: a change in a departmental procedure, or a reduction in the amount of data collected by a patron-facing application or system. The long-term reality, however, is that these changes are not set up to be maintained because there is no sustainable system in place. Unless, of course, we as individuals decide to take on that maintenance – but even then, one person can only take on so much on top of their existing workload before everything starts to fall apart.

Creating sustainable privacy practices and programs in organizations requires at minimum two things: dedicated resources and dedicated people. Most libraries do not have these things, relying on existing staff and resources to make privacy happen. While libraries have historically been able to operate with this organizational kludge, changes to library operations and services in the last few decades have made this kludge not only ineffective but dangerous to both patrons and the library as an organization with regard to privacy risk and potential harms if those risks are realized. It is nearly impossible for patrons not to generate data in their library use, be it physical or online. Because so much of this generated data is collected by the library and third parties, even the routine act of trying to document the lifecycle of this data can be a monumental task if there is no dedicated structure in place for this work to be done sustainably.

Like many of us, Juan wants to protect patron privacy. Nevertheless, if he tries to go it alone and does not build the infrastructure to sustain privacy practices, his efforts will be short-lived at best. Privacy policies and procedures are part of that infrastructure, but they depend on the dedicated staff time and resources that are critical for sustainable practices. What are some of Juan’s options?

  • Create a centralized library data governance committee – Juan can’t do this work alone, particularly when his primary job responsibilities don’t include overseeing the library’s privacy practices. Creating a data governance committee would bring in both administration and staff from different areas of the library that work with patron data to oversee data management, including data privacy and security. This committee would not only create and review privacy policies and procedures but would also serve as an accountability mechanism for when things go wrong and for ensuring things get done. No one library worker would be solely responsible for the library’s privacy practices under this option, though Juan would need to ensure that participation in the committee does not become an undue burden for staff.
  • Advocate for a dedicated budget line for data privacy and security – There might already be data privacy and security resources available at the university, but those resources might not cover library-specific needs such as professional development for privacy training, consulting, or auditing. Some departments in the library, such as Library Systems, might already have a dedicated budget line for privacy and security. Juan might want to talk to those department managers to explore collaborating on increased funding for data privacy and security activities in the library.
  • Advocate for a dedicated privacy staff position in the library – Even with a library data governance committee, someone ultimately has to wrangle privacy at the library. Juan’s role might include oversight of some privacy practices in Access Services, but unless his job description changes, he cannot be the privacy point person for the entire library. A dedicated privacy point person would keep the data governance committee on track by serving as the data steward for the group. More importantly, it would ensure that at least one person in the library has dedicated time and resources to track, manage, and address the new and evolving data privacy risks and harms patrons face while using the library. While a full-time position dedicated to privacy is ideal, the budget might not support a new position at the time of the request. In that case, Juan might argue that he could serve as the privacy point person on the condition that he can shift some of his current responsibilities to other managers in Access Services. Nevertheless, this should only be a short-term workaround while the library works to find funding for a full-time privacy position.

All three options require some form of collaboration and negotiation with administration and staff. Juan cannot realistically create these structures alone if he wants them to survive. It comes back to creating and maintaining relationships in the organization. Without these relationships, Juan is left on his own to push for privacy, which inevitably leads to burnout. No matter how passionate we are about patron privacy, like Juan, we must realize that we cannot do our privacy work alone if we want our efforts to succeed.

FUD and Reality – Information Security and Open Source Software

A black cat and a grey tabby cat sit on top of a gray computer monitor. The top border of the monitor has a black and white sticker with the text "I <3 source code."
Image source: https://www.flickr.com/photos/miz_curse_10/1404420256/ (CC BY SA 2.0)

Librarians like our acronyms, but we’re not the only profession to indulge in linguistic gymnastics. The technology field is awash in acronyms: HTTP, AWS, UI, LAN, I/O, etc. etc. etc. One acronym you might know from working in libraries, though, is OSS – Open Source Software.

Library technology is no stranger to OSS. The archived FOSS4LIB site lists hundreds of free and open source library applications and systems, ranging from integrated library systems and content management systems to metadata editing tools and catalogs. Many libraries also use OSS not specific to libraries – a typical example is installing Firefox and LibreOffice on public computers. Linux and its multitude of distributions ensure that many library servers and computers run smoothly.

It’s inevitable, though, that when we talk about OSS, we run into another acronym – FUD: Fear, Uncertainty, and Doubt. FUD is commonly used to paint a negative picture of the target in question, usually to the benefit of the person spreading the FUD. In the technology world, proprietary software companies often depict OSS as inferior to proprietary software – the Microsoft section of the FUD Wikipedia page gives several good examples of such FUD pieces.

It should be no surprise that FUD exists in the library world as well. One example comes from a proprietary software company specializing in library management systems (LMS). We link to an archived version of the page in case it is taken down soon after this post is published; if nothing else, companies do not like being called out on their marketing FUD. The piece poses as an article about the disadvantages of an LMS. In particular, the company claims that open source LMSes are not secure: they can be easily breached or infected by a computer virus, or you can even lose all your data! The only solution, naturally, is to have the proprietary software company handle all of these disadvantages for you!

The article is a classic example of OSS FUD – the use of tactics to sow fear, hesitation, or doubt without providing a reasoned and well-supported argument for the claims made. However, this is probably not the first time you’ve run into the idea that OSS is insecure. One talking point about OSS insecurity is that OSS security bugs stay unaddressed in the software for years. For example, the Heartbleed bug that caused so much havoc in 2014 was introduced into the OpenSSL code in 2012, resulting in a two-year gap during which bad actors could exploit the vulnerability. You’ve also probably run into versions of the thinking around OSS security that Bruce Schneier describes below:

“Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll — in extreme cases — sneak back-doors into the code when no one is looking.”

OSS is open for all to use, but, if you follow the above line of thinking, it’s also open for all to exploit.

The good news is that, despite the FUD, OSS is not inherently more insecure than its proprietary counterparts. However, we must also be wary of the unchecked optimism in statements claiming that OSS is more secure than proprietary software. The reality is that OSS and proprietary software are subject to many of the same information security risks, mixed with the unique risks that come with each type of software. It’s not uncommon for a small OSS project to become dormant or abandoned, leaving the software vulnerable due to a lack of updates. Conversely, a business developing proprietary software might not prioritize security tests and fixes in its work, leaving its customers vulnerable if someone exploits a security bug. While there are differences between the two examples, both share the risk of threat actors exploiting unaddressed security bugs in the software.

OSS, therefore, should be assessed and audited like its proprietary counterparts for security (and privacy!) practices and risks. The nature of OSS requires some adjustments to the audit process to account for the differences between the two types of software. A security audit for OSS would, for example, take into account the health of the project: maintenance and update schedules, how active the community is, what security issues have been reported and fixed in the past, and so on. Examining the OSS’s dependencies might uncover additional security risks if a dependency comes from a project that is no longer maintained. Addressing any security issues that an audit surfaces could take the form of working on and submitting a bug fix to the OSS project, or finding a company specializing in OSS support that can address the issue.
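Part of that dependency check can even be automated. Below is a minimal sketch, assuming the free OSV (Open Source Vulnerabilities) API at https://api.osv.dev; the package name and version are placeholders you would swap for your own dependencies:

```python
# Minimal sketch: ask the OSV (Open Source Vulnerabilities) database whether
# a specific version of a dependency has known vulnerabilities.
# The package name and version below are placeholders for illustration.
import json
import urllib.request

def known_vulnerabilities(name, version, ecosystem="PyPI"):
    """Return the IDs of known vulnerabilities for a package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        results = json.load(response)
    return [vuln["id"] for vuln in results.get("vulns", [])]

# Example: check a deliberately old version of a popular library.
for vuln_id in known_vulnerabilities("requests", "2.19.0"):
    print(vuln_id)
```

As we wrap up Cybersecurity Awareness Month in the runup to Halloween, let’s get our scares from scary movies and books and not from OSS FUD.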

Cybersecurity Awareness Month News Update: School Cybersecurity, Passwords, and Crying “Hack!”

A small gray tabby kitten paws at the Mac laptop screen displaying the Google search home page, with its hind paws standing on the keyboard.
Image source: https://www.flickr.com/photos/tahini/5810915356/ (CC BY 2.0)

There’s never a dull moment in Cybersecurity Awareness Month, with last week being no exception. Here are some news stories you might have missed, along with possible implications and considerations for your library.

K-12 cybersecurity bill signed into law

You might remember reading about a new federal cybersecurity bill being signed into law. You remembered correctly! On October 8th, the K-12 Cybersecurity Act of 2021 was signed into law. For schools looking for a set of standards to comply with, the Act doesn’t contain one. Instead, the Act tasks the Cybersecurity and Infrastructure Security Agency (CISA) with studying cybersecurity risks in K-12 educational institutions and determining which practices would best mitigate those risks. The recommendations will be published along with a training toolkit for schools to use as a guide to implementing these recommendations at their institutions.

School libraries collect and store student data in several ways – the most common example being the patron record in the ILS. School libraries also rely heavily on third-party content providers, which in turn collect additional student data on both the library’s side and the vendor’s side. School library workers, stay tuned for updates on the study and recommendations! While it’s unclear whether the study will include school library systems in its assessment of cybersecurity risks, it’s more than likely that any recommendations that come out of the study will affect school libraries.

Sharing all the passwords

You should be using a password manager. You might already be using one for your personal accounts, but are you using a password manager for work? If you’re still sharing passwords with your co-workers through spreadsheets or pieces of paper, it’s past time for your library to adopt a password manager. Several password managers, such as LastPass and Bitwarden, have business or enterprise products that are well-suited for managing passwords in the office. However, not all password managers can share passwords and other sensitive information outside of the app, particularly if the other person doesn’t have an account with the same manager you are using. There will be times when you want to share a password with someone outside your organization – a typical example is when a vendor needs to log into a system or app to troubleshoot an issue. But, for the most part, password managers only support secure sharing between people with accounts in the organization, leaving you stuck sharing passwords in less secure ways.

However, if you are a 1Password user or your library uses 1Password’s business product, you no longer have this problem! 1Password users can now send account login information – including passwords – to anyone, including those who do not have a 1Password account. The feature lets 1Password users create a shareable link, with options to restrict sharing to specific people (by email address) and to set when the link expires (anywhere from 30 days to immediately after one person views it). No more calling the vendor, no more having staff email passwords in plaintext. Nonetheless, if your library wants to make use of this new feature, it’s best to give staff guidance on how to create the links – including how to set access restrictions and expiration – along with training and documentation.

When a “hack” isn’t a hack

This news update is more of a “cybersecurity education 101” than news, considering the level of 🤦🏻‍♀️ this story contains. A very brief overview of what happened in Missouri last week:

  1. A reporter from the St. Louis Post-Dispatch found that a Department of Elementary and Secondary Education website exposed the Social Security numbers (SSNs) of school teachers and administrators to the public through the site’s HTML source code.
  2. The newspaper notified the department about the security flaw, and the department took down the site in question.
  3. After the site was taken down, the newspaper published a story about the exposed SSNs on the now-defunct site.

Okay, so far, so good. Someone found a serious cybersecurity issue on a government website, reported it to the department, and waited until the issue was addressed before publicly talking about it. That’s pretty standard when it comes to disclosing security flaws. Let’s move on to the next item in the summary.

  4. The Governor of Missouri and other government officials responded to the disclosure, saying the reporter was a hacker and that the “hacker took the records of at least three educators, decoded the HTML source code, and viewed the social security number of those specific educators.”

🤦🏻‍♀️

There is a difference between hacking and exposing personal data on a publicly accessible website. A system is hacked if someone bypasses security measures to obtain sensitive data from an internal system, such as by using stolen account logins. If a reporter clicks the “View Source” menu option in their browser and finds sensitive data sitting in the source code of a publicly accessible website, that’s a security vulnerability resulting in a data leak!
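To make the distinction concrete, here is a minimal sketch (with a hypothetical URL) of everything the reporter’s “hack” would have required. The page source being scanned is the same HTML the server sends to every visitor’s browser:

```python
# Minimal sketch: fetch a public web page and scan its HTML source for
# SSN-shaped strings. The URL is hypothetical. Note what is absent here:
# no credentials, no decoding, no bypassed security measures.
import re
import urllib.request

URL = "https://example.gov/educator-directory"  # hypothetical public page
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., 123-45-6789

with urllib.request.urlopen(URL) as response:
    html_source = response.read().decode("utf-8", errors="replace")

matches = SSN_PATTERN.findall(html_source)
print(f"SSN-shaped strings sitting in the public page source: {len(matches)}")
```

If data shows up in that response, it was already public; the vulnerability is the leak, not the looking.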

The takeaways from this story:

  1. Do not hard-code sensitive data in your code – this includes passwords for scripts that need to access other systems or databases (see the sketch after this list).
  2. Review locally-developed and third-party applications that work with sensitive data for potential data leaks or other ways unauthorized people can improperly access the data.
  3. Do not punish the people who bring security issues to your attention! Like we discussed in our Friendly Phishing post, punitive actions can lead to a reduction in reporting, which increases security and privacy risks. Other reporters or private citizens who are watching the Governor take action against the reporter might be dissuaded from reporting additional data security or privacy issues to the state government, increasing the chance that these issues will be exploited by bad actors.
  4. If the data was sitting on a publicly available site for someone to access via F12 or View Source in their browser, it is not a hack. Let this be a lesson learned, unless you want to end up being ratioed on Twitter like the Governor of Missouri.
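On the first takeaway, here is a minimal sketch of keeping a credential out of the code itself; the environment variable name and connection details are invented for illustration:

```python
# Anti-pattern: a hard-coded credential travels everywhere the code does --
# version control, backups, and (as Missouri learned) public web pages.
# DB_PASSWORD = "hunter2"  # never do this

# Better: read the secret from the environment (or a secrets manager) at
# runtime. The variable and host names below are invented placeholders.
import os

db_password = os.environ.get("LIBRARY_DB_PASSWORD")
if db_password is None:
    raise SystemExit("Set LIBRARY_DB_PASSWORD before running this script.")

def connect_to_reports_db(host, user, password):
    """Stand-in for a real database connection using the runtime secret."""
    print(f"Connecting to {host} as {user} (password supplied at runtime)")

connect_to_reports_db("reports.library.example.edu", "reports_user", db_password)
```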

Information Security, Risk, and Getting in One’s Own Way

Maru demonstrating how certain information security measures can ultimately backfire and put the organization at risk if the measures add too many barriers for the user to go about their work. Source – https://twitter.com/rpteixeira/status/1176903814575796228

Let’s start this week’s Cybersecurity Awareness Month post with a phrase that will cause some of you to scream into the void and others to weep at your work desk quietly:

Admin privileges on work computers.

Rationing admin privileges on work computers is one example of an information security practice that both protects data and puts it at risk. Limiting a worker’s ability to install programs on their work computer reduces the chances of the system falling to a cyberattack via malware. It also reduces the chances of critical operations or processes failing if an app downloaded without IT support breaks after an OS update or patch. On the other hand, limiting admin privileges can motivate some workers to work around IT, particularly if IT has consistently denied requests for privileges or new tools, or if the request process resembles something only a Vogon would conceive of. These workarounds put data at risk when staff turn to third-party software with which the library has no contractual relationship or vendor security oversight. No contractual relationship + no evaluation of third-party privacy policies or practices = unprotected data.

IT is often its own worst enemy when it comes to information security. Staff don’t like barriers, particularly ones they see as arbitrary or that prevent them from doing their jobs. Each information security policy or practice comes with a benefit and a cost in terms of risk, and sometimes these practices and standards have hidden costs that wipe out any benefit they offer. In the example of computer admin privileges, restrictions might lead workers to use personal computers or third-party applications that the organization hasn’t vetted. We have to weigh that risk against the benefit of reducing the chances of malware finding its way into the system.

The benefit-cost calculation comes back to the question of barriers, particularly what they are, how your policies and processes contribute to them, and the solutions or workarounds staff use to navigate those barriers. Answering this question requires us to revisit the risk equation: the cost or impact of a threat exploiting a vulnerability, weighed against how one can address the risk. By eliminating one risk through the barrier of disallowing admin privileges on staff computers, the organization accepts the multitude of risks that come with staff using personal devices or unvetted third-party applications or systems to work around the barrier.
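As a rough illustration of that trade-off, here is a back-of-the-envelope comparison using annualized loss expectancy (ALE = annual rate of occurrence × single loss expectancy), a common information security risk formula. Every number below is an invented placeholder, not real incident data:

```python
# Back-of-the-envelope risk comparison using annualized loss expectancy:
#   ALE = ARO (expected incidents per year) * SLE (cost per incident).
# All figures are invented placeholders for illustration only.

def ale(aro, sle):
    """Annualized loss expectancy for a single risk."""
    return aro * sle

# Risk reduced by revoking admin rights: malware via unvetted installs.
malware_ale = ale(aro=0.5, sle=40_000)  # one incident every two years

# Risks accepted in exchange: staff routing around IT with unvetted
# third-party tools and personal devices that hold patron data.
shadow_it_ale = ale(aro=2.0, sle=15_000)

print(f"Risk eliminated by the barrier: ${malware_ale:,.0f}/year")
print(f"Risk accepted via workarounds:  ${shadow_it_ale:,.0f}/year")
# If the accepted risk exceeds the eliminated risk, the barrier costs the
# organization more than it saves -- the hidden cost described above.
```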

Some barriers (for example, requiring authentication into a system that stores sensitive data) are necessary to reduce risk and secure data. The hard part comes in determining which barriers will not cost the organization more in the long run. In the case of admin privileges, we might consider the following options:

  • Creating two user accounts for each staff person: a regular account used for daily work and one local administrator account used only to install applications. The delineation of accounts mitigates the risk of malware infecting the local computer if the staff person follows the rules for when to use each account. The risk remains if the staff person uses the same password for both accounts or uses the admin account for daily work. Password managers can limit risks associated with reused passwords.
  • Creating a timely and user-friendly process for requesting and installing applications on work computers. This process has many potential barriers that might prevent staff from using the process, including:
    • long turnaround times for requests
    • lack of transparency with rejected requests (along with lack of alternatives that might work instead)
    • unclear or convoluted request forms or procedures (see earlier Vogon reference)

These barriers can be addressed through careful design and planning involving staff. Nevertheless, some staff will interpret any request process as a significant barrier to completing their work.

Each option introduces some interruptions to staff workflow; however, these barriers can be designed so that the security practices are unlikely to become risks in themselves. We sometimes forget that decisions around information security also need to consider the impact those decisions will have on staff’s ability to perform their daily duties. It’s easy to get in our own way if we forget to center the end user (be it patrons or fellow library workers) in what we decide and what we build. Keeping the risk trade-offs in mind can help ensure we don’t trip ourselves up trying to protect data one way, only to leave it unprotected in the end.

Just Published – Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook

Cover of the "Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook".

Happy October! Depending on who you ask at LDH, October is either:

  1. Cybersecurity Awareness Month
  2. An excuse for the Executive Assistant to be extra while we try to work
  3. The time to wear flannel and drink coffee – nevermind, this is every month in Seattle

Since the Executive Assistant lacks decent typing skills (as far as we know), we declare October as Cybersecurity Awareness Month at LDH. Like last year, this month will focus on privacy’s popular sibling, security. We also want to hear from you! If there is an information security topic you would like us to cover this month (or the next), email us at newsletter@ldhconsultingservices.com.

We start the month with a publication announcement! The Data Privacy and Cybersecurity Training for Libraries project, an LSTA-funded collaboration between the Pacific Library Partnership, LDH, and Lyrasis, just published two data privacy and cybersecurity resources for library workers wanting to create privacy and security training for their libraries:

  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook – The handbook is a guide for library trainers wanting to develop data privacy and cybersecurity training for library staff. The handbook walks through the process of planning and developing a training program at the library and provides ideas for training topics and activities. This handbook is a companion to the Data Privacy Best Practices Toolkit for Libraries published last year.
  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Workshops (under the 2021 tab) – If you’re looking for train-the-trainer workshop materials, we have you covered! You can now access the materials used in the two train-the-trainer workshops for data privacy and cybersecurity conducted earlier this year. Topics include:
    • Data privacy – data privacy fundamentals and awareness; training development basics; vendor relations; patron programming; building a library privacy program
    • Cybersecurity – cybersecurity basics; information security threats and vulnerabilities; how to protect the library against common threats such as ransomware and phishing; building cybersecurity training for libraries

Both publications include extensive resource lists for finding additional training materials and keeping current with the rapid changes in cybersecurity and data privacy in the library world and beyond. Feel free to share your training stories and materials with us – we would love to hear what you all come up with while using project resources! We hope that these publications, along with the rest of the project’s publications, will make privacy and cybersecurity training easier to create and deliver at your library.

Is Library Scholarship a Privacy Information Hazard?

A white hazard sign with an image of a human stick figure being zapped by an electric blob. The image is sandwiched between red and black text: "Warning, this area is dangerous"
Image source: https://www.flickr.com/photos/andymag/9349743409/ (CC BY 2.0)

Library ethics, privacy, and technology collided again last week, this time with the publication of issue 52 of the Code4Lib Journal. In this issue, the editorial committee published an article describing an assessment process with serious data privacy and ethical issues and then explained their rationale for publishing the article in the issue editorial. The specifics of these data privacy and ethical issues will not be covered in-depth in this week’s newsletter – you can read about said issues in the comment section of the Code4Lib Journal article in question.

You might have noticed that we said “again” in the last paragraph. This isn’t the first time library technology publications and patron privacy have collided. The Code4Lib Journal published a similarly problematic article last year, but the journal is only one of many library scholarship venues that have published scholarly and practical literature that is ethically problematic with regard to patron privacy. Technology and assessment are the usual offenders, ranging from case studies of implementing privacy-invasive technologies to research extolling the benefits of surveilling students in the name of learning analytics without discussing the implications of violating student privacy. These publications are not set up as point-counterpoint explorations of these technologies and assessment methods in terms of privacy and ethics. Instead, they enter the scholarly record as is, with an occasional contextual note or a superficial sentence or two about privacy. Retraction is almost unheard of in library scholarship, and retraction is not very effective in addressing problematic research in any case.

Library scholarship is not consistently aligned with the profession’s ethical standards to uphold patron privacy and confidentiality. Whether an article is judged on its potential impact on library privacy is currently up to the individual peer reviewer (or, in the case of editor-reviewed journals such as the Code4Lib Journal, the editor). In addition, library scholarship is not set up to assess the potential privacy risks and harms of a publication to specific patron groups, particularly patrons from minoritized populations. Currently, there is no suitable assessment mechanism whose results can be included in the original publication in a way that is both meaningful and informative to the reader. We are left with publications in the library scholarship record that promote the uncritical adoption of high-risk practices that go against professional ethics and harm patrons. This becomes more perilous when these publications reach those in the field who do not have the knowledge or experience to assess them with patron privacy and ethics in mind.

What we end up with, therefore, is a scholarly record full of information hazards. An information hazard is a piece of information that can potentially cause harm to the knower or create the potential to harm others. This differs from misinformation: misinformation spreads false information, whereas the truthfulness of an information hazard is intact. Nick Bostrom’s seminal work on information hazards breaks down the specific risks and harms of different types of hazards. Library scholarship presents (at least) two information hazards in particular when it comes to library privacy and ethics:

Idea hazard – Ideas hold power. They also come with risks. Even if the dissemination of an idea stays at a high level without specific details, it can become an idea hazard. The idea that a library can use a particular system or process to assess library use can itself risk patron privacy. There are ways to mitigate an idea hazard of this nature, including evaluating the assessment idea through the Five Whys method or other methods to determine the root need for such an assessment.

Development hazard – A development hazard arises when advancement in a field of knowledge leads to technological or organizational capabilities that create negative consequences. Like other fields of technology, library technology falls into this hazard category, particularly when combined with the evolution of library assessment practices and norms. Sharing code and processes (which is a data hazard) can lead to community or commercial development of more privacy-invasive library practices if no care is taken to mitigate patron privacy risks.

How, then, can library scholarship become less of a privacy information hazard? First and foremost, the responsibility falls on the publishers, editors, peer reviewers, and conference program organizers who control what is and is not added to the library scholarly record. This includes creating a code of ethics for submission authors and guidelines for reviewers and editors to assess the privacy and ethical implications of submissions. However, these codes and guidelines are not effective if they are not acted upon. As Dorothea Salo says, “Research on library patrons that contravenes library-specific ethics is unethical; it should not be published in the LIS literature, and when published there, should be retracted.” Regardless of the novelty or other technical merits of the submission, if the submission violates or goes against library ethics or privacy standards, the editors, reviewers, and publishers have the responsibility as shapers of the scholarly record to not publish the submission, lest they add yet another information hazard to the record.

Library privacy and ethics must also be a part of every stage of the submission and publication process. This takes a page from Privacy by Design: a proactive approach to privacy instead of rushing to include privacy at the last minute, which makes any privacy effort ineffective at best. Ethical codes and guidelines are one way to embed privacy into a process; another is to include checkpoints in the process that bring in external subject matter experts to review submissions well in advance and identify or comment on specific privacy or ethical risks. If done early in the submission process, the feedback received can then be used to revise the submission to address these issues or to shift the submission’s focus to one better suited to addressing the privacy and ethical implications of the topic at hand. The submission itself doesn’t have to be abandoned, but it must be constructed so that the privacy and ethical risks are front and center, describing why this method, idea, process, or code goes against library ethics and privacy. This option doesn’t eliminate the idea/data hazard, but shifting the focus to privacy and ethical repercussions can mitigate the risks that come with such hazards.

Whether intentional (as in the case of the latest Code4Lib Journal issue) or unintentional, library scholarship places patron privacy at risk through the unrestricted flow of information hazards. Many in the profession face pressure to produce a constant stream of scholarship, but at what cost to our patrons’ privacy and professional ethics? A scholarly record full of privacy information hazards has had, and will continue to have, long-lasting implications for the profession’s ability to protect patron privacy, as well as for how well we can serve everyone in the community (and not just those who have a higher tolerance for privacy risks or won’t be as negatively impacted by poor privacy practices). As the discussion about the Code4Lib Journal’s decision to publish the latest information hazard continues, perhaps the community can use this time to push for more privacy- and ethics-aligned submission and review processes in library scholarship.