Ransomware, CS and Privacy, and #FollowMonday

Welcome to this week’s Tip of the Hat! Summer is in full swing this August, and the Executive Assistant is contemplating the coolest place in the office to park herself and work. While she roams the office – and while I make sure she doesn’t build a small blanket fort connected to the office refrigerator – here are a couple of quick links and updates in the privacy and library worlds to start your week.

[Image: A refrigerator with its door open, and a green tent set up in front of the open door.]
Ransomware strikes another library system

Last month, the Butler County Federated Library System in Pennsylvania became the latest library system to succumb to ransomware. As a result, the system has gone back to using paper to track circulation information. As in other ransomware attacks, the system may have to rebuild its online infrastructure if it is unable to retrieve the ransomed data.

If your library hasn’t been hit with ransomware yet, the best defense is prevention. Awareness programs and information security training can educate staff about the ways ransomware and other malware can infiltrate library systems, and regular reminders and updates can keep staff current on emerging threats and infosec practices.

Training can only go so far, though. Having a plan in place will not only help mitigate panic when ransomware takes over a system but also address overlooked vulnerabilities concerning patron data privacy. For example, while libraries tracked circulation information on paper for decades, automation has since taken over the process. Making sure that staff are trained and have current procedures for handling sensitive patron data in paper format – including storage and disposal – can help protect against inadvertent privacy breaches.

H/T to Jessamyn West for the link!

Is it time for Computer Science curricula to prioritize privacy?

In an op-ed in Forbes, Kalev Leetaru argues that CS curricula should follow the lead of library and information science and emphasize privacy in their programs. Near the end of the article, Leetaru illustrates the tension between privacy and analytics:

Privacy naturally conflicts with capability when it comes to data analytics. The more data and the higher resolution it is, the more insight algorithms can yield. Thus, the more companies prioritize privacy and actively delete everything they can and minimize the resolution on what they do have to collect, the less capability their analytics have to offer.

This represents a philosophical tradeoff. On the one hand, computer science students are taught to collect every datapoint they can at the highest resolution they can and to hoard it indefinitely. This extends all the way to things like diagnostic logging that often becomes an everything-or-nothing concept that has led even major companies to have serious security breaches. On the other hand, disciplines like library and information science emphasize privacy over capability, getting rid of data the moment it is safe to do so.

What do you think? Would emphasizing privacy in CS programs change current data privacy practices (or lack thereof) in technology companies?

#FollowMonday – @privacyala

Keeping up with the latest developments in the privacy field can feel like a full-time job. ALA’s Choose Privacy Every Day Twitter account helps you sift through the flood of content with a nicely packaged weekly post of the major developments and updates in the privacy world, be it in libraries or out in the wider world. You can find out about new legislation, discover tools to help protect your patrons’ privacy, and yes, keep up with the latest data breaches.

Humans, Tech, and Ethical Design: A Summit Reflection

Welcome to this week’s Tip of the Hat!

Last Saturday LDH attended the All Tech Is Human Summit, joining 150+ technologists, designers, ethics professionals, academics, and others to discuss the intersection of technology and social issues. There were many good conversations, some of which we’re passing along as you consider how your organization could approach these issues.

The summit takes inspiration from the Ethical OS Toolkit, which identifies eight risk zones in designing technology:

  1. Truth, Disinformation, Propaganda
  2. Addiction & the Dopamine Economy
  3. Economic & Asset Inequalities
  4. Machine Ethics & Algorithmic Biases
  5. Surveillance State
  6. Data Control & Monetization
  7. Implicit Trust & User Understanding
  8. Hateful & Criminal Actors

Each risk zone has the potential to create social harm, and the Toolkit helps planners, designers, and others in the development process mitigate those risks. One way to mitigate risk across many of the zones (such as Data Control & Monetization and Surveillance State) is to incorporate privacy into the design and development processes. Privacy by Design is an example of integrating privacy throughout the entire process instead of waiting until the end. Much like paying down technical debt early, incorporating privacy and other risk mitigation strategies throughout design and development lessens the need for intensive, short-notice resource investment when something goes wrong.

Another way to approach ethical design comes from George Aye, co-founder of the Greater Good Studio. In his lightning talk, George identified three qualities of good design:

  • Good design honors reality
  • Good design creates ownership
  • Good design builds power

Viewed through a privacy lens (or, in the case of LDH, with our data privacy hat on), these qualities can also help designers and planners address the realities surrounding data privacy:

  • Honoring reality – how can the product or service meet the demonstrated/declared needs of the organization while honoring the many different expectations of privacy among library patrons? Which patron privacy expectations should be elevated, and what is the process to determine that prioritization? What societal factors should be taken into account when doing privacy risk assessments?
  • Creating ownership – how can the product or service give patrons a sense that they have ownership over their data and privacy? How can organizations cultivate that sense of ownership through various means, including policies surrounding the product? For vendors, what would it take to cultivate a similar relationship between library customers and the products they buy or license?
  • Building power – building off of the ownership questions, what should the product or service do in order to provide agency to patrons surrounding data collection and sharing when using the product or service? What data rights must be present to allow patrons control over their interactions with the product or process? Libraries – how can patrons have a voice in the design process, including those more impacted by the risk of privacy harm? Vendors – how can customers have a voice in the design process? All – how will you ensure that the process will not just be a “mark the checkbox” but instead an intentional act to include and honor those voices in the design process?

There’s a lot to think about in the questions above, but they illustrate the importance of addressing these issues while still in the design process. It’s hard to build privacy into a product or service once it is already out in the world collecting and sharing high-risk data. Addressing the hard ethical and privacy questions during design not only avoids the pitfalls of technical debt and high-risk practices but also provides a valuable opportunity to build relationships between libraries, patrons, and vendors.