About SOUPS

Lorrie Cranor is the founder of the Symposium on Usable Privacy and Security (SOUPS). Dr. Cranor also works on privacy-enhancing technologies and Internet policy issues; among other things, she works on applications of the Platform for Privacy Preferences (P3P), anti-phishing, and privacy and security policy management. Lorrie chaired the P3P Specification Working Group and designed the Privacy Bird P3P user agent.


CyLab Chronicles

Glimpses into the Fourth Annual Symposium on Usable Privacy and Security (SOUPS)

posted by Richard Power

Amidst thunderstorms and summer sunshine, researchers and security professionals from academia, industry, and government around the world descended on Carnegie Mellon University for the fourth annual Symposium on Usable Privacy and Security (SOUPS 2008).

In a recent interview with CyLab Chronicles, Dr. Lorrie Cranor, the founder of both SOUPS and CyLab’s Center for Usable Security and Privacy, articulated the conference’s purpose and importance: “I organized a workshop on usable privacy and security in 2004, and the people who attended were eager for more. I realized there was a growing community interested in this research area and a need for a high-quality peer-reviewed conference. So, I decided to start the Symposium on Usable Privacy and Security. The conference program includes technical paper presentations, workshops, a keynote speaker, and a poster session. … We also have parallel ‘discussion sessions’ that allow for interactive discussions on topics of interest to attendees. People always tell me how much they enjoy exchanging ideas with other attendees at the discussion sessions and poster session, as well as at the SOUPS meals and social events.” (Read the full interview with Dr. Cranor.)

The keynote address for this year’s SOUPS, “Towards a Science of Security and Human Behavior,” was delivered by Cambridge University’s Ross Anderson. Anderson is one of the founders of a vigorously growing new discipline: the economics of information security. But his work has now moved beyond economics into the social sciences in search of clarity about what is and is not happening in the security space: “The interaction between security and psychology is not limited to the usability of protection mechanisms: it ranges from the misperceptions of risk that make our societies vulnerable to terrorism, to quite basic questions such as the extent to which we evolved intelligence in order to deceive, and to detect deception in others.”

Here are a few excerpts from Anderson’s remarks:

“Back in the 1990s, we all believed that the Internet was insecure because it didn’t have enough features. There wasn’t enough crypto or protocols or filtering or PKI. Round about the end of the last century, some of us realized that this wasn’t enough, things weren’t getting any better. … We began to realize that security often fails because the incentives were wrong; and this emboldened us to start thinking about whether we could answer other, deeper, perhaps more important problems, such as ‘Why is Microsoft’s software so insecure despite their market dominance?’”

"… So the new view of security that has emerged is that stuff often breaks because the incentives are wrong. The people who guard the systems or who could fix them don’t have the motivation to do so. If banks can dump the cost of fraud on their customers, then they no longer have the incentive to go and fix those systems. If it is casino web sites that suffer when infected PCs run Distributed Denial of Service (DDOS) attacks against them, then the owners of those PCs will be less likely to go and buy Anti-Virus (AV) software."

“When you are building your network monopoly, you have to appeal to the vendors of your complementary products. If you’re Bill [Gates] back in 1985, you have got to persuade every software vendor in America to write for the IBM PC first, and for the Mac second; and that means that you have got to make your system very easy to code for, which means the last thing you want in it is complicated security. … And we have seen this pattern again and again in one platform after another after another …”

View Ross Anderson’s presentation. Download the presentation video.

Technical papers chairs Jason Hong (Carnegie Mellon CyLab) and Simson Garfinkel (US Naval Postgraduate School) presented the “Best Paper” award to Philip Inglesant and M. Angela Sasse of University College London, and David Chadwick and Lei Lei Shi of the University of Kent, for “Expressions of Expertness: The Virtuous Circle of Natural Language for Access Control Policy Specification.”

Read “Expressions of Expertness: The Virtuous Circle of Natural Language for Access Control Policy Specification.”

Other interesting papers included “Can eye gaze reveal graphical passwords?” by Daniel LeBlanc, Sonia Chiasson, Alain Forget, and Robert Biddle of Carleton University in Ottawa, Canada, and “Enhancing Natural Language Policy Authoring” by Kami Vaniea of Carnegie Mellon University; Clare-Marie Karat, John Karat, and Carolyn Brodie of the IBM T.J. Watson Research Center; and Joshua B. Gross of the University of Pennsylvania.

The Carleton University paper explores the relationship between click-points and gaze-points in click-based graphical passwords:

“Eye trackers, which detect eye movements on a screen, are becoming readily available. We hypothesize that gaze points gathered from any user could potentially be used to form an attack dictionary to guess other users’ graphical passwords. This may be possible because people tend to look at visual scenes in similar patterns. We conducted a lab study examining eye gaze patterns as users selected graphical passwords and then used this gaze data to form an attack dictionary. Surprisingly, we found that eye gaze is not a good predictor of passwords.”
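
The hypothesized attack is easy to picture in code. Below is a minimal sketch in Python (illustrative only, not the authors’ implementation; the tolerance radius, hot-spot list, and click coordinates are invented for the example) that builds a guessing dictionary from gaze hot spots and replays it against a click-based password.

# A minimal sketch of the hypothesized attack: build a guessing dictionary
# from other users' gaze fixations and replay it against a click-based
# graphical password. All values below are assumptions for illustration.
import math
from itertools import product

TOLERANCE = 15  # assumed acceptance radius around each click, in pixels

def matches(candidate, password, tol=TOLERANCE):
    """True if every candidate point lands within tolerance of the
    corresponding click-point, in order."""
    return all(math.dist(c, p) <= tol for c, p in zip(candidate, password))

def build_dictionary(hot_spots, length, max_entries=100_000):
    """Enumerate ordered sequences of gaze hot spots as candidate passwords.
    A real attack would first cluster raw fixations into hot spots and rank
    them by popularity across users; here the list is used as-is."""
    dictionary = []
    for seq in product(hot_spots, repeat=length):
        dictionary.append(seq)
        if len(dictionary) >= max_entries:
            break
    return dictionary

# Toy data: hot spots distilled from other users' gaze traces, and one
# target password of five click-points on the same background image.
hot_spots = [(100, 340), (415, 88), (220, 190), (330, 260), (60, 400)]
password = [(98, 335), (412, 92), (222, 188), (328, 265), (63, 397)]

dictionary = build_dictionary(hot_spots, length=len(password))
hits = sum(matches(cand, password) for cand in dictionary)
print(f"{hits} of {len(dictionary)} dictionary entries unlock the password")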

Read “Can eye gaze reveal graphical passwords?”

In “Enhancing Natural Language Policy Authoring,” Vaniea and her colleagues document their investigation into “policy authors’ ability to take descriptions of changes to policy situations and author high-quality, complete policy rules that would parse with high accuracy.” In her presentation, Vaniea elaborated: “Users have no way of verifying that what is stated in those policies is actually happening, and what is scarier is that organizations have no automated way to take a natural language policy and make sure that it is actually happening on their systems.

“One of the goals of the group that I worked with at IBM is to try to create an end-to-end solution that will help people take natural language policies and translate them into policies that are controllable on the machine. … To do this, they created the SPARCLE policy workbench, which is a usable policy management tool.

“SPARCLE looks at three different phases of the policy authoring process: First, the ability to author and create policies, and make sure those policies are correct and internally sound, and that they don’t have conflicts. Next, taking that existing policy, human and machine readable, transferring it on to the back end, and making sure that it is actually enforced by the system. And then moving it into the auditing phase: How do I make sure these policies are actually being followed? When an auditor comes in, they can look through my policies and my system and see how they match.

“… We created an interface, an alteration only to the authoring interface. … We conducted a two-group study: one group looked at the existing SPARCLE interface, as it was; the other group used the new interface for authoring policy. … We put every single user through a training session … gave them a set of two scenarios to complete. The scenarios asked them to add and modify policy rules…”
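
The pipeline Vaniea describes, natural language in and machine-checkable rules out, can be sketched with a toy parser. The following Python is illustrative only (the sentence template, field names, and example rule are assumptions, not SPARCLE’s actual grammar); it maps one restricted sentence form onto a structured rule that could feed the enforcement and auditing phases.

# A minimal sketch of the kind of translation SPARCLE performs: one
# restricted natural-language sentence form is parsed into a structured rule
# that a back end could enforce and an auditor could check. The template and
# field names are assumptions; the real workbench uses shallow NL parsing
# over much richer grammars.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyRule:
    user_category: str   # who the rule applies to
    action: str          # what they may do
    data_category: str   # what data is covered
    purpose: str         # why access is permitted

# Accepts sentences of the form:
#   "<user category> can <action> <data category> for <purpose>."
RULE_TEMPLATE = re.compile(
    r"(?P<user>.+?)\s+can\s+(?P<action>\w+)\s+(?P<data>.+?)\s+for\s+(?P<purpose>.+?)\.?\s*$",
    re.IGNORECASE,
)

def parse_rule(sentence: str) -> Optional[PolicyRule]:
    """Parse one sentence into a rule, or return None so the author
    knows the sentence must be rephrased to fit the grammar."""
    m = RULE_TEMPLATE.match(sentence.strip())
    if m is None:
        return None
    return PolicyRule(
        user_category=m["user"].strip(),
        action=m["action"].lower(),
        data_category=m["data"].strip(),
        purpose=m["purpose"].strip(),
    )

print(parse_rule("Billing staff can access account records for payment processing."))
# PolicyRule(user_category='Billing staff', action='access',
#            data_category='account records', purpose='payment processing')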

Read “Enhancing Natural Language Policy Authoring.”

Next year’s SOUPS will be hosted by Google and held July 15-17, 2009 in Mountain View, CA.


See all CyLab Chronicles articles