Cyber defense is an art of human deception
Daniel Tkacik
Oct 8, 2020
For CyLab’s Coty Gonzalez, protecting networks against cyberattacks is a game of deception.
“Make valuable targets look worthless, make worthless targets look valuable,” Gonzalez says. “Basically, you want to do anything you can to confuse and slow down the attacker.”
Gonzalez, a research professor in Social and Decision Sciences (SDS), and her group presented two studies on cyber deception at this week’s Human Factors and Ergonomics Society annual meeting (HFES2020). CyLab director Lorrie Cranor gave the meeting’s keynote talk.
Human attackers aren’t as rational as we thought
In their first paper, Gonzalez and her team of researchers, including lead author and SDS post-doctoral researcher Palvi Aggarwal, put one type of deception defense strategy to the test.
The strategy, referred to as “masking,” changes the apparent identity of computers on a network as well as their apparent value to the attacker. Depending on their arsenal of tools and exploits, attackers plan an attack based on which vulnerable machines are on the network, what value a successful attack would yield, and the probability that the attack will succeed.
But in a “masked” network, what the attacker sees may not be real. A Windows machine may be disguised as a Linux machine. What looks like valuable information on an Ubuntu machine may actually be worthless data on an Xbox.
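To make the setup concrete, here is a minimal sketch in Python of the gap masking creates between what a machine is and what an attacker sees. The machine attributes are invented for illustration; this is not the researchers’ actual implementation:

```python
# Hypothetical example: the attacker plans from what the network *shows*,
# which under masking may differ from the truth.
true_machine = {"os": "Windows 10", "value": 100, "p_success": 0.2}
masked_view  = {"os": "Ubuntu 18.04", "value": 5,  "p_success": 0.7}

# The attacker's planning signal is the expected payoff of the machine
# as observed, not as it really is:
observed_payoff = masked_view["p_success"] * masked_view["value"]
print(observed_payoff)  # 3.5 -- a valuable target made to look near-worthless
```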
“In designing a masking technique, we assumed that the attacker behaves rationally—that they would target machines that would earn them the most value,” says Gonzalez. “But interestingly, that assumption turned out to be wrong. Human attackers did not behave rationally.”
The researchers developed a defense strategy using machine learning and game theory, optimized to benefit the defender by minimizing losses. They then built an experimental network in which every computer was assigned (1) a description of its operating system, (2) a point value an attacker would gain if successful, and (3) a probability that an attack on it would succeed.
The researchers then “masked” the computers in their network using their optimal masking strategy. The machines’ descriptions, point values, and probabilities thus appeared different from what they actually were—in such a way that attackers would gain less than they would have without the masking.
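A rough sketch of the underlying optimization, assuming (as the researchers initially did) a rational attacker who targets the highest observed expected value. The three machines and the brute-force search are invented for illustration; the actual study used a game-theoretic model and the CyberVAN testbed:

```python
from itertools import permutations

# (true_value, true_p_success) per machine -- invented numbers.
machines = [(100, 0.2), (50, 0.5), (5, 0.9)]

def attacker_gain(mask):
    # mask[i] = index of the machine that machine i is disguised as.
    # A rational attacker targets the highest *observed* expected value...
    target = max(range(len(machines)),
                 key=lambda i: machines[mask[i]][0] * machines[mask[i]][1])
    # ...but the payoff is governed by the target's *true* attributes.
    value, p = machines[target]
    return p * value

# The defender keeps whichever masking minimizes the attacker's real gain.
best_mask = min(permutations(range(len(machines))), key=attacker_gain)
print(best_mask, attacker_gain(best_mask))  # e.g. (0, 2, 1) with gain 4.5
```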
Thirty human participants were then asked to play the role of cyberattackers in one of two networks: one in which the optimal masking strategy was deployed, and one in which a random, non-optimized masking strategy was deployed, as a point of comparison.
It turns out that human attackers aren’t as rational as the researchers thought they’d be: more often than not, they went after the low-hanging fruit. In other words, they overwhelmingly attacked the machines with the highest chance of success, even when those machines’ value was very low. In fact, Gonzalez says this pattern persisted in both the optimally and randomly masked networks.
“The human attackers were biased towards attacking safer targets,” says Gonzalez. “A rational attacker would more frequently go after targets with higher expected values, but that didn’t happen. The assumption that attackers will act rationally is incorrect.”
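The contrast is easy to state in code. Given the same hypothetical set of observed targets (the numbers here are invented), a rational attacker and a probability-biased one pick different machines:

```python
# (p_success, value) pairs as the attacker sees them -- invented numbers.
targets = [(0.9, 5), (0.5, 50), (0.2, 100)]

rational_pick = max(targets, key=lambda t: t[0] * t[1])  # (0.5, 50): EV = 25
biased_pick   = max(targets, key=lambda t: t[0])         # (0.9, 5):  EV = 4.5
print(rational_pick, biased_pick)
```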
Gonzalez says these findings will help inform the future development of more effective defensive masking strategies that account for humans’ bounded rationality.
Phishing emails: how they deceive us into clicking
In the group’s second paper, with SDS post-doctoral researcher Kuldeep Singh as lead author, the researchers analyzed a database of phishing emails and so-called “ham” emails—legitimate, non-phishing emails—to study their similarities and differences, in an attempt to help end-users become better at spotting phishing emails.
“We found that a major problem is regarding similarity—when humans confront a phishing email that is similar to a non-phishing one,” Gonzalez says. “Similarity has to do with the email content, but it also has to do with how the links look, what the email address looks like, and the personalization of the email.”
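One simple way to quantify that kind of textual similarity is TF-IDF cosine similarity over the message text. The emails below and the scikit-learn approach are illustrative assumptions on our part, not the paper’s actual feature set, which also covers links, sender addresses, and personalization:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented examples: a legitimate "ham" email and a phishing lookalike.
ham   = "Your October invoice is attached. Reply here with any questions."
phish = "Your invoice is overdue. Click http://pay-now.example to settle it."

vectors = TfidfVectorizer().fit_transform([ham, phish])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"similarity: {score:.2f}")  # the higher the score, the easier to confuse
```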
Gonzalez says that one major point of their study is to inform the development of end-user models that can predict how people respond to phishing emails. Traditionally, people are given training programs that teach tips for detecting phishing emails, but trainees are usually in a different mindset: they’re aware that they’re doing a training program, and that awareness is problematic.
“You might have the training, but then you go back to your company and you click on a phishing email,” Gonzalez says. “Why? Because the context is now different. We could be more effective in training individuals if we design programs that are based on experience and surrounded by real-world context, and cognitive models can help us predict when people would click on a phishing email.”
How to deceive humans safely and ethically in user studies
How do you create security and privacy user studies that simulate realistically risky environments without actually harming the users? This question was at the heart of Lorrie Cranor’s keynote talk at HFES2020.
User studies are absolutely critical in gauging how easy or difficult it is for users to interact with privacy and security tools and configurations in software. But getting users to behave as if they would in a real-life risky situation can be challenging.
“In our research, we have used a variety of strategies to overcome these challenges and place participants in situations where they will believe their security or privacy is at risk, without subjecting them to increases in actual harm,” Cranor says.
In her talk, Cranor discussed some of the ways she and her research group, the CyLab Usable Privacy and Security Laboratory, have conducted user studies that account for risk in realistic ways. For example, Cranor has sent fake phishing emails to participants who were told they were participating in a study about online shopping, and displayed simulated web browser pop-up warnings to participants who had been asked to evaluate video games.
Paper references
An Exploratory Study of a Masking Strategy of Cyberdeception Using CyberVAN
- Palvi Aggarwal, Carnegie Mellon University
- Omkar Thakoor, University of Southern California
- Aditya Mate, Harvard University
- Milind Tambe, Harvard University
- Edward Cranford, Carnegie Mellon University
- Christian Lebiere, Carnegie Mellon University
- Cleotilde “Coty” Gonzalez, Carnegie Mellon University
What makes phishing emails hard for humans to detect?
- Kuldeep Singh, Carnegie Mellon University
- Palvi Aggarwal, Carnegie Mellon University
- Prashanth Rajivan, University of Washington
- Cleotilde “Coty” Gonzalez, Carnegie Mellon University