FUD and Reality – Information Security and Open Source Software

A black cat and a grey tabby cat sit on top of a gray computer monitor. The top border of the monitor has a black and white sticker with the text "I <3 source code."
Image source: https://www.flickr.com/photos/miz_curse_10/1404420256/ (CC BY SA 2.0)

Librarians like our acronyms, but we’re not the only profession to indulge in linguistic gymnastics. The technology field is awash in acronyms: HTTP, AWS, UI, LAN, I/O, etc. etc. etc. One acronym you might know from working in libraries, though, is OSS – Open Source Software.

Library technology is no stranger to OSS. The archived FOSS4LIB site lists hundreds of free and open source library applications and systems ranging from integrated library systems and content management systems to metadata editing tools and catalogs. Many libraries use OSS not specific to libraries – a typical example is installing Firefox and Libre Office on public computers. Linux and its multitude of distributions ensure that many library servers and computers run smoothly.

It’s inevitable, though, that when we talk about OSS, we run into another acronym – FUD, or Fear, Uncertainty, and Doubt. FUD is commonly used to paint a negative picture of a target, usually to the benefit of the party spreading it. In the technology world, proprietary software companies often depict OSS as inferior to proprietary software – the Microsoft section of the FUD Wikipedia page gives several good examples of such FUD pieces.

It should be no surprise that FUD exists in the library world as well. One example comes from a proprietary software company specializing in library management systems (LMS). We’ll link to an archived version of the page if it is taken down soon after this post is published; if nothing else, companies do not like being called out on their marketing FUD. The page poses as an article about the disadvantages of an LMS. In particular, the company claims that open source LMSes are not secure: they can be easily breached or infected by a computer virus, or you can even lose all your data! Conveniently, the only way to address all these disadvantages is to have the proprietary software company handle them for you!

The article is a classic example of OSS FUD – tactics that sow fear, hesitation, or doubt without providing a reasoned, well-supported argument for the claims being made. However, this is probably not the first time you’ve run into the idea that OSS is insecure. A common talking point about OSS insecurity is that security bugs stay unaddressed in the software for years. For example, the Heartbleed bug that caused so much havoc in 2014 was introduced into the OpenSSL code in 2012, resulting in a two-year window during which bad actors could exploit the vulnerability. You’ve also probably run into versions of the thinking around OSS security that Bruce Schneier describes below:

“Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll — in extreme cases — sneak back-doors into the code when no one is looking.”

OSS is open for all to use – but, if you follow the line of thinking above, it’s also open for all to exploit.

The good news is that, despite the FUD, OSS is not more insecure than its proprietary counterparts. However, we must also be wary of the unchecked optimism in statements claiming that OSS is more secure than proprietary software. The reality is that open source and proprietary software are subject to many of the same information security risks, mixed with the unique risks that come with each type of software. It’s not uncommon for a small OSS project to become dormant or abandoned, leaving the software vulnerable due to a lack of updates. Conversely, a business developing proprietary software might not prioritize security tests and fixes in its work, leaving its customers vulnerable if someone exploits a security bug. While there are differences between the two examples, both share the risk of threat actors exploiting unaddressed security bugs in the software.

OSS, therefore, should be assessed and audited like its proprietary counterparts for security (and privacy!) practices and risks. The nature of OSS requires some adjustment to the audit process to account for the differences between the two types of software. A security audit for OSS would, for example, take into account the health of the project: maintenance and update schedules, how active the community is, what security issues have been reported and fixed in the past, and so on. Looking at the dependencies of the OSS might uncover security risks if a dependency comes from a project that is no longer maintained. Addressing any security issues that an audit uncovers could take the form of working on and submitting a bug fix to the OSS project, or finding a company that specializes in supporting OSS users that can address the issue. As we wrap up Cybersecurity Awareness Month in the runup to Halloween, let’s get our scares from scary movies and books and not from OSS FUD.
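One piece of such an audit – flagging dependencies whose upstream projects appear dormant – can be automated. The sketch below is a minimal Python illustration; the package names and release dates are made up, and in practice you would pull them from a registry such as PyPI or npm:

```python
from datetime import date, timedelta

# Hypothetical snapshot of each dependency's most recent release date,
# as gathered from a package registry during the audit.
LAST_RELEASE = {
    "active-lib": date(2021, 9, 1),
    "dormant-lib": date(2018, 3, 15),
}

def flag_stale_dependencies(last_release, as_of, max_age_days=730):
    """Return dependencies with no release in max_age_days (default: two years)."""
    cutoff = as_of - timedelta(days=max_age_days)
    return sorted(name for name, released in last_release.items()
                  if released < cutoff)

print(flag_stale_dependencies(LAST_RELEASE, as_of=date(2021, 10, 31)))
# → ['dormant-lib']
```

A flagged dependency isn’t automatically a security hole, but it is a prompt to check whether the project is abandoned and whether unpatched issues have piled up.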

Cybersecurity Awareness Month News Update: School Cybersecurity, Passwords, and Crying “Hack!”

A small gray tabby kitten paws at the Mac laptop screen displaying the Google search home page, with its hind paws standing on the keyboard.
Image source: https://www.flickr.com/photos/tahini/5810915356/ (CC BY 2.0)

There’s never a dull moment in Cybersecurity Awareness Month, with last week being no exception. Here are some news stories you might have missed, along with possible implications and considerations for your library.

K-12 cybersecurity bill signed into law

You might remember reading about a new federal cybersecurity bill being signed into law. You remembered correctly! On October 8th, the K-12 Cybersecurity Act of 2021 was signed into law. For schools looking for a list of standards to comply with, the Act doesn’t contain one. Instead, the Act tasks the Cybersecurity and Infrastructure Security Agency (CISA) with studying cybersecurity risks in K-12 educational institutions and which practices would best mitigate those risks. The recommendations will be published along with a training toolkit that schools can use as a guide to implement them at their institution.

School libraries collect and store student data in several ways – the most common example being the patron record in the ILS. School libraries also rely heavily on third-party content providers, which in turn collect additional student data on both the library’s side and the vendor’s side. School library workers, stay tuned for updates on the study and recommendations! While it’s unclear whether the study will include school library systems in its assessment of cybersecurity risks, it’s more than likely that any recommendations that come from the study will affect school libraries.

Sharing all the passwords

You should be using a password manager. You might already be using one for your personal accounts, but are you using a password manager for work? If you’re still sharing passwords with your co-workers through spreadsheets or pieces of paper, it’s past time for your library to use a password manager. Several password managers, such as LastPass and Bitwarden, have business or enterprise products that are well suited for managing passwords in the office. However, many password managers can’t share passwords and other sensitive information outside of the app, particularly when the other person doesn’t have an account with the same manager you are using. There will be times when you want to share a password with someone outside your organization – a typical example is when a vendor needs to log into a system or app to troubleshoot an issue. But, for the most part, password managers only support secure sharing between people with accounts in the organization, leaving you stuck sharing passwords in less secure ways.

However, if you are a 1Password user or your library uses 1Password’s business product, you no longer have this problem! 1Password users can now send account login information – including passwords – to anyone, including those who do not have a 1Password account. This new feature allows 1Password users to create a sharable link, with options to restrict sharing to specific people (by email address) and to set when the link expires (anywhere from 30 days down to after a single person views the link) – no more calling the vendor, no more having staff email passwords in plaintext. Nonetheless, if your library wants to make use of this new feature, it’s best to give staff guidance on how to create the links, including how to set access restrictions and expiration, along with training and documentation.

When a “hack” isn’t a hack

This news update is more of a “cybersecurity education 101” than news, considering the level of 🤦🏻‍♀️ this story contains. A very brief overview of what happened in Missouri last week:

  1. A reporter from the St. Louis Post-Dispatch found that a Department of Elementary and Secondary Education website exposed the social security numbers (SSNs) of school teachers and administrators to the public through the site’s HTML source code.
  2. The newspaper notified the department about the security flaw, and the department took down the site in question.
  3. After the site was taken down, the newspaper published a story about the exposed SSNs on the now-defunct site.

Okay, so far, so good. Someone found a serious cybersecurity issue on a government website, reported it to the department, and waited until the issue was addressed before talking about it publicly. That’s pretty standard when it comes to disclosing security flaws. Let’s move on to the next item in the summary.

  4. The Governor of Missouri and other government officials responded to the disclosure by calling the reporter a hacker, claiming that the “hacker took the records of at least three educators, decoded the HTML source code, and viewed the social security number of those specific educators.”

🤦🏻‍♀️

There is a difference between hacking and exposing personal data on a publicly accessible website. The system would have been hacked if the reporter had bypassed security measures to obtain sensitive data from an internal system, such as by using stolen account logins. If the reporter instead clicks the “View Source” menu option in their browser and finds sensitive data right in the source code of a publicly accessible website, you have a security vulnerability resulting in a data leak!

The takeaways from this story:

  1. Do not hard-code sensitive data in your code. This includes passwords for scripts that need to access other systems or databases.
  2. Review locally-developed and third-party applications that work with sensitive data for potential data leaks or other ways unauthorized people can improperly access the data.
  3. Do not punish the people who bring security issues to your attention! As we discussed in our Friendly Phishing post, punitive actions can lead to a reduction in reporting, which increases security and privacy risks. Other reporters or private citizens who are watching the Governor take action against the reporter might be dissuaded from reporting additional data security or privacy issues to the state government, increasing the chance that these issues will be exploited by bad actors.
  4. If the data was sitting on a publicly available site for someone to access via F12 or View Source in their browser, it is not a hack. Let this be a lesson learned, lest you end up being ratio’ed like the Governor of Missouri on Twitter.
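The first takeaway – keep secrets out of source code – often comes down to reading credentials from the environment at runtime instead of committing them. A minimal Python sketch (the `DB_PASSWORD` variable name is just an example):

```python
import os

# Anti-pattern: a credential committed to source code, visible to anyone
# who can read the repository (or, in the Missouri case, the page source).
# DB_PASSWORD = "hunter2"

def get_db_password():
    """Read the database password from the environment at runtime."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "DB_PASSWORD is not set; configure it outside the codebase")
    return password
```

The environment variable itself is then set by deployment tooling or a secrets manager, and never checked into version control.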

Information Security, Risk, and Getting in One’s Own Way

Maru demonstrating how certain information security measures can ultimately backfire and put the organization at risk if the measures add too many barriers for the user to go about their work. Source – https://twitter.com/rpteixeira/status/1176903814575796228

Let’s start this week’s Cybersecurity Awareness Month post with a phrase that will cause some of you to scream into the void and others to weep at your work desk quietly:

Admin privileges on work computers.

Rationing admin privileges on work computers is one example of an information security practice that both protects data and puts it at risk. Limiting a worker’s ability to install programs on their work computer reduces the chances of the system falling to a cyberattack via malware. It also reduces the chances of critical operations or processes failing when an app downloaded without IT support breaks after an OS update or patch. On the other hand, limiting admin privileges can motivate some workers to work around IT, particularly if IT has consistently denied requests for privileges or new tools, or if the request process resembles something that only a Vogon would conceive of. These workarounds put data at risk when staff go around IT to use third-party software with which the library has no contractual relationship or vendor security oversight. No contractual relationship + no evaluation of third-party privacy policies or practices = unprotected data.

IT is often its own worst enemy when it comes to information security. Staff don’t like barriers, particularly ones they see as arbitrary or that prevent them from doing their jobs. Each information security policy or practice comes with a benefit and a cost in terms of risk. Sometimes these practices and standards have hidden costs that wipe out any benefit they offer. In the example of computer admin privileges, restrictions might lead workers to use personal computers or third-party applications that the organization hasn’t vetted. We have to weigh that risk against the benefit of reducing the chances of malware finding its way into the system.

The benefit-cost calculation comes back to the question of barriers, particularly what they are, how your policies and processes contribute to them, and the solutions or workarounds to navigate those barriers. Answering this question requires us to revisit the risk equation of calculating the cost or impact of a threat exploiting a vulnerability and how one can address the risk. By eliminating one risk through the barrier of disallowing admin privileges for staff computers, the organization accepts the multitude of risks that come with staff using personal devices or third-party applications or systems to work around the barrier.

Some barriers (for example, requiring authentication into a system that stores sensitive data) are necessary to reduce risk and secure data. The hard part comes in determining which barriers will not cost the organization more in the long run. In the case of admin privileges, we might consider the following options:

  • Creating two user accounts for each staff person: a regular account used for daily work and one local administrator account used only to install applications. The delineation of accounts mitigates the risk of malware infecting the local computer if the staff person follows the rules for when to use each account. The risk remains if the staff person uses the same password for both accounts or uses the admin account for daily work. Password managers can limit risks associated with reused passwords.
  • Creating a timely and user-friendly process for requesting and installing applications on work computers. This process has many potential barriers that might prevent staff from using the process, including:
    • long turnaround times for requests
    • lack of transparency with rejected requests (along with lack of alternatives that might work instead)
    • unclear or convoluted request forms or procedures (see earlier Vogon reference)

These barriers can be addressed through careful design and planning involving staff. Nevertheless, some staff will interpret any request process as a significant barrier to completing their work.

Each option introduces some interruption to staff workflows; however, these barriers can be designed so that the security practices are unlikely to become risks in themselves. We forget at times that decisions around information security also need to consider the impact they will have on staff’s ability to perform their daily duties. It’s easy to get in our own way if we forget to center the end-user (be it patrons or fellow library workers) in what we decide and what we build. Keeping the risk trade-offs in mind can help make sure we don’t end up tripping ourselves up trying to protect data one way, only to have it unprotected in the end.

Just Published – Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook

Cover of the "Data Privacy and Cybersecurity Best Practices Train-the-Trainers Handbook".

Happy October! Depending on who you ask at LDH, October is either:

  1. Cybersecurity Awareness Month
  2. An excuse for the Executive Assistant to be extra while we try to work
  3. The time to wear flannel and drink coffee… never mind, this is every month in Seattle

Since the Executive Assistant lacks decent typing skills (as far as we know), we declare October as Cybersecurity Awareness Month at LDH. Like last year, this month will focus on privacy’s popular sibling, security. We also want to hear from you! If there is an information security topic you would like us to cover this month (or the next), email us at newsletter@ldhconsultingservices.com.

We start the month with a publication announcement! The Data Privacy and Cybersecurity Training for Libraries, an LSTA-funded collaborative project between the Pacific Library Partnership, LDH, and Lyrasis, just published two library data privacy and cybersecurity resources for library workers wanting to create privacy and security training for their libraries:

  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Handbook – The handbook is a guide for library trainers wanting to develop data privacy and cybersecurity training for library staff. The handbook walks through the process of planning and developing a training program at the library and provides ideas for training topics and activities. This handbook is a companion to the Data Privacy Best Practices Toolkit for Libraries published last year.
  • PLP Data Privacy and Cybersecurity Best Practices Train-the-Trainer Workshops (under the 2021 tab) – If you’re looking for train-the-trainer workshop materials, we have you covered! You can now access the materials used in the two train-the-trainer workshops for data privacy and cybersecurity conducted earlier this year. Topics include:
    • Data privacy – data privacy fundamentals and awareness; training development basics; vendor relations; patron programming; building a library privacy program
    • Cybersecurity – cybersecurity basics; information security threats and vulnerabilities; how to protect the library against common threats such as ransomware and phishing; building cybersecurity training for libraries

Both publications include extensive resource lists for additional training materials and to keep current with the rapid changes in cybersecurity and data privacy in the library world and beyond. Feel free to share your training stories and materials with us – we would love to hear what you all come up with while using project resources! We hope that these publications, along with the rest of the project’s publications, will make privacy and cybersecurity training easier to create and to give at your library.

Something You Have/Know/Are: Multifactor Authentication

Welcome to this week’s Tip of the Hat!

Cybersecurity Awareness Month wouldn’t be complete if we didn’t talk about authentication! Traditionally a perennial topic for cybersecurity training, authentication was also in the news last week with the allegation of a well-known security researcher breaking into a presidential candidate’s Twitter account. The researcher, who also broke into the candidate’s account in 2016, was able to break into the account by brute force, trying out possible passwords based on what he knew of the candidate. In both cases, multifactor authentication was not turned on. If the allegation is true, the candidate did not learn from the 2016 hack, leaving his account vulnerable for all these years.

Why is multifactor authentication (MFA) important? The following is an excerpt from our April post on the LITA Blog where we explain what MFA is, why it’s important, and how to implement it alongside other cybersecurity measures!

Multifactor authentication

Our community college district has required access to our LSP, Alma, that requires multi-factor authentication when used through our single sign on provider. Can you talk a little bit about the benefits of multi-factor authentication?

Multifactor authentication, or MFA, is an authentication method that requires at least two out of the three types of items:

  • Something you know, like your password
  • Something you have, like your phone with an authentication app or like a physical key such as a YubiKey
  • Something you are, like your fingerprint, face, voice, or other biometric piece of information

(FYI – More MFA methods are adding location-based information to this list [“Somewhere you are”].)
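To make the “something you have” factor concrete: most authenticator apps implement the time-based one-time password (TOTP) algorithm from RFC 6238, which derives a short code from a shared secret and the current time. A stripped-down sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    timestamp = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(timestamp // step))  # time step as 8-byte counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the server and the phone share the secret and the clock, both can compute the same six-digit code, while an attacker who only has the password (something you know) cannot.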

MFA builds another layer of protection into the authentication process by requiring more than one item from the above list. People tend to reuse passwords or to use weak passwords for both personal and work accounts. It’s easy to crack into a system when someone reuses a password from a breached account whose password data was subsequently posted or sold online. With two-factor authentication (2FA) enabled, a compromised reused password is less likely to allow access to other systems.

While MFA is more secure than relying solely on your traditional user name and password to access a system, it is not 100% secure. An attacker can crack into a system that uses SMS-based 2FA by intercepting the access code sent by SMS. Authentication apps such as Duo help address this vulnerability in 2FA, but apps are not available to people who do not use smartphones. Nonetheless, it’s still worthwhile to enable SMS-based 2FA if it’s the only MFA option for your account.

This all goes to say that you shouldn’t slack on your passwords just because you’re relying on additional information to log into your account. Use stronger passwords or passphrases – ideally randomly generated by Diceware – and do not reuse passwords or passphrases. Check out this video by the Electronic Frontier Foundation to learn more about Diceware and how it works. It’s a good way to practice your dice rolls for your next tabletop gaming session!
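Physical dice and a printed wordlist are the canonical Diceware method, but the underlying idea – picking words uniformly at random with a cryptographically secure source – can also be sketched in a few lines of Python. The sample wordlist here is illustrative only; a real Diceware list has 7,776 words, one per five-dice roll:

```python
import secrets

# Illustrative sample only; use a full 7,776-word Diceware list in practice.
WORDLIST = ["correct", "horse", "battery", "staple", "lantern", "pickle",
            "walrus", "quartz", "maple", "orbit", "canyon", "velvet"]

def diceware_passphrase(n_words=6, wordlist=WORDLIST):
    """Join n_words chosen uniformly at random with a secure RNG."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(diceware_passphrase())  # e.g. "orbit staple canyon velvet horse pickle"
```

With the full 7,776-word list, a six-word passphrase has 7,776^6 possible combinations (roughly 77 bits of entropy), which is why passphrase length beats clever character substitutions.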

As a reminder – your security is only as strong as your weakest security practice, so once you have created your password or passphrase, store it in a password manager to better protect both your password and your online security.

Silent Fatigue

Welcome to this week’s Tip of the Hat!

Cybersecurity Awareness Month wouldn’t be complete without a post about a current cybersecurity threat. This month we learned that Silent Librarian is making the rounds right on time for the start of the academic year.

Academic libraries encountered Silent Librarian last year, when several prominent universities were targeted by this phishing attack. Silent Librarian targets students and academic staff/faculty with an email that appears to be from the library, stating that their library account is about to expire and that the recipient needs to click on a link to reactivate it. If the user clicks the link and tries to log into the spoofed site with their university account, the attacker can then use that account to gain access to the university network and other sensitive systems.
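Silent Librarian depends on lookalike links: the spoofed login page lives on a host that merely contains the university’s or library’s name. One defensive check – comparing a link’s actual hostname against the institution’s real domain – can be sketched in Python (the domain below is hypothetical):

```python
from urllib.parse import urlparse

TRUSTED_HOST = "library.example.edu"  # hypothetical library login host

def is_trusted_link(url, trusted_host=TRUSTED_HOST):
    """True only if the URL's host is the trusted host or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return host == trusted_host or host.endswith("." + trusted_host)

print(is_trusted_link("https://library.example.edu/account"))  # True
# A spoofed host that merely *contains* the library's name fails the check:
print(is_trusted_link("https://library.example.edu.account-renew.tk/login"))  # False
```

Mail filters and link-scanning proxies apply this same kind of logic at scale; the point for training is that “the link mentions the library” is never evidence of legitimacy.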

Last week, Malwarebytes reported the first round of attacks for the 20/21 academic year. The attack follows roughly the same pattern as previous years; however, this year is a bit different because of the chaotic state many universities are in due to the pandemic. The attackers can take advantage of the confusion and disorder caused by rapidly changing plans for on/off-site teaching, access to academic resources, and shifting restrictions and guidelines set by campus officials.

The fatigue caused by all of these changes can change how a person behaves and potentially lower the person’s ability to protect their digital security. This fatigue is a boon for attackers because the behavior changes lead people to be less diligent about cybersecurity – people may not be checking email messages before clicking on a link in a phishing email, for example. It’s difficult to prevent this fatigue with everything going on in the world and harder to recover from once fatigue sets in. 

This year’s Cybersecurity Awareness Month comes at a time when information security and privacy folks have to be mindful about over-relying on individual responsibility. Advice to combat this security fatigue usually centers on what the individual should do, but what happens if the individual is already overwhelmed? This fatigue is not new – research has shown that users mentally check out when they are presented with end-user agreements and privacy policies. The user can only do so much if they are distracted and overwhelmed by, well… everything that’s going on in 2020.

Users have a part to play in protecting data, but putting the burden of security solely on the end-user can create a vulnerability that is hard for an organization to fix once fatigue sets in. For libraries, this is a good time to check what cybersecurity measures are in place and where the organization can alleviate some of this fatigue in staff. In the last two weeks, we explored different types of cybersecurity training – consider creating reminders or training that use positive reinforcement and motivate staff to be proactive in securing the library’s data. It’s also worth checking firewalls, spam filters, and other email and network security settings to identify and block phishing emails, particularly from repeat attackers such as Silent Librarian. Creating checklists for staff using personal devices for work purposes, as well as checklists for staff doing remote work, can help already overwhelmed staff ensure that they are not putting library data and networks at risk. Even small actions such as a checklist can go a long way in reducing data security and privacy risks. Providing assistance to users at this time means they won’t have to spend all their energy (or, in some cases, spoons) trying to do all the things to protect data on their own – a path that quickly leads to burnout and increased risk to data security.

Roll for Initiative! Gaming in Cybersecurity Training

Welcome to this week’s Tip of the Hat!

We learned last week that cybersecurity training is not as simple as choosing a particular training and rolling it out – training methods, goals, and context all determine the effectiveness of the training. While interactive training engages trainees and helps with understanding and motivation, the type of interaction matters. Simulations such as the phishing simulation test can backfire if not planned and deployed with care, but other types of interactive training engage users in a more controlled space and minimize unintended consequences… and you might level up in the process.

Games in training are not new, but turning training into a game by incorporating game elements or using existing games to teach particular concepts has grown in popularity over the last couple of decades. You’ve encountered gamification in other areas of your life – badges, leaderboards, and point systems, to name a few. These elements play into common human desires and motivations, such as collaboration/competition and accomplishment, which in turn can boost morale and knowledge retention. When combined with story elements and a positive reinforcement approach, training with game elements has a better chance overall of being more effective than traditional lecture-based training.

Libraries are no stranger to gamification. Academic, school, and public libraries use gamification for instructional sessions as well as patron programs. ALA has a Games and Gaming Round Table, as well as several resources for libraries, including two new books published this year about gamification in academic libraries and ready-to-use gamified programs for libraries of all types. It wouldn’t be a big stretch, therefore, for libraries to incorporate game elements or entire games into a training program, including cybersecurity training.

What does gamification look like in security and privacy training? Here are a few examples that you can use for both staff and patrons:

  • Tally Saves the Internet – This browser extension turns the Internet into a turn-based RPG where you fight an invisible enemy – online trackers. Players not only gain points and badges for fighting these online tracker monsters, but the extension also actually blocks trackers 😊
  • Cybersecurity Training for Youth Using Minecraft: A Field Guide – You can use existing games to teach cybersecurity, too! This field guide provides ways in which library staff can use Minecraft to teach patrons threat modeling in a way that doesn’t require prior knowledge of cybersecurity concepts but instead uses an environment the patrons might already be familiar with in their daily lives.
  • Tabletop exercises – unlike the other two examples above, tabletop exercises (TTE) have been around for a while in the cybersecurity world. One common TTE in cybersecurity is incident response, going through how an organization would respond to a particular scenario, such as a data breach. Think of it as a one-shot TRPG, but you role play as yourself, and your abilities and inventory consist of whatever policies, procedures, and resources you have in your organization at that moment. You can include other gaming elements and methods within TTE, such as Lego Serious Play, for additional collaborative/competitive opportunities in the scenario.
  • Cybersecurity games – There are several off-the-shelf cybersecurity games that you can use in existing training or at game night at your library!

There are many paths to incorporate game elements into cybersecurity training, so the best approach to take is to, well, play around and find which ones best fit your training audience. Don’t forget to have fun in the process, and may the dice roll in your favor!

Friendly Phishing, or Should You Phish Your Own Staff?

Welcome to this week’s Tip of the Hat!

October is a very important month. Not only does October mean Halloween (candy), it also means Cybersecurity Awareness Month. This month’s TotH posts will focus on privacy’s popular sibling, security. We start this month by focusing on one common “trick” – phishing – and why not all cybersecurity training is created equal.

A hooded middle aged white man wearing sunglasses laughs as he holds a fishing pole with a USB drive at the end of the line.
This is also the month where we get to use our favorite phishing stock photo. Image source: https://www.flickr.com/photos/hivint/36953918384/.

We wrote more about phishing in a previous post if you need a refresher; the tl;dr summary is that phishing is a very common attack method for gaining access to a variety of sensitive systems and data by pretending to be an email from a trusted source (business or person). Phishing can be very costly on both a personal level (identity theft) and an organizational level (ransomware, data breach, etc.), so it’s no wonder that digital security training spends a considerable amount of time teaching people how to spot a phishing email and what to do to avoid being phished.

It turns out that this type of training, for all the time spent covering how to avoid phishes, might not be that effective – and, in some cases, can actively work against the goal of the training itself. A good portion of cybersecurity training takes the form of lectures or online web modules, where users listen to or read the information and are then quizzed to assess understanding. While lecture/quiz-style training has been the main mode of training in the past, trainers now realize that interactive training that goes beyond this model can be more effective for knowledge retention and understanding.

A growing number of organizations are using another type of security training – sending out phishing emails to their employees without warning. The phishing email, created by an external cybersecurity training company or by the local training team, is sent out spoofing either an organizational email or an email from a trusted source. This live test, in theory, more accurately assesses employees’ knowledge and awareness of phishing methods and provides on-the-spot results, which can include corrections or remedial training. A variety of vendors offer both free and paid tools and services, such as KnowBe4 and PhishingBox.

Simulated phishing tests seem like a great addition to your organization’s training approach; however, they can backfire. One way they can backfire is by turning staff against the organization. A recent example comes from a simulated phishing email sent to Tribune Publishing staff, promising them a chance at a company bonus if they clicked on the enclosed link. The email was sent out after staff went through furloughs and other drastic budget cuts, and the staff reaction to it led to further erosion of trust between employees and administration. The debate extended to the security field, questioning the ethics of using content drawn from common phishing emails in an organization whose employees had gone through considerable stress due to budget cuts.

Another way simulated phishing tests can backfire is when the tests focus on shaming or negative outcomes. Some phishing tests single out those who do not spot the phish, providing on-the-spot corrective training or assigning the employee to a future training. However, research has shown that shaming to correct behavior doesn’t work in the long term and might lessen the chance of someone reporting a possible phishing email or other cybersecurity issue to the organization. Punitive approaches create a more insecure organization by fostering an environment where staff either are not motivated to report a cybersecurity issue or fear reprimand if they do.

The use of simulated phishing tests will be the topic of debate for some time, but this debate presents two takeaway points to consider for any type of cybersecurity training:

  1. Context and methods matter – simulated tests can be effective, but the test’s logistics – including timing and content – can work against the desired outcomes of the trainers. Trainers should also consider the current state of the organization, such as staff morale and major crises/events in the organization, in choosing and developing cybersecurity training for staff. Another thing to consider is the effectiveness of training methods, including how often training has to be repeated to keep staff current on cybersecurity threats and procedures.
  2. Positive reinforcement – positive reinforcement, such as awarding staff members who do not click on the test phish email, can help with creating a more security-conscious organization. 

Next week we will dive into another type of cybersecurity training that is a simulation of another kind – stay tuned!