Supporting Trust Decisions

Researchers: Lorrie Cranor, Norman Sadeh

Abstract

When Internet users are asked to make trust decisions, they often make the wrong decision. Implicit trust decisions include decisions about whether to open an email attachment or provide information in response to an email that claims to have been sent by a trusted entity. Explicit trust decisions are made in response to specific trust- or security-related prompts, such as pop-up boxes that ask the user whether to trust an expired certificate, execute downloaded software, or allow macros to run. The consequences of a wrong decision include infecting the user’s own computer with malware or spyware, unwittingly propagating malware, and revealing information to con artists and identity thieves. Attackers take advantage of most users’ poor trust decision-making skills through a class of attacks known as “semantic attacks.” It is not always possible for systems to make accurate trust decisions on a user’s behalf, especially when those decisions require knowledge of contextual information (Zurko et al., 2002). Thus, there remain many trust decisions that users must make on their own, usually with little or no assistance or protection from their computers. The goal of this research is not to make trust decisions for users, but rather to develop approaches that support users as they make trust decisions. Our proposed research builds on recent efforts to understand and combat semantic attacks, as well as on the literature on modeling trust. It will begin with a mental models study aimed at understanding and modeling how people make trust decisions in the online context, and will ultimately result in the development and evaluation of new software.

Semantic Attacks and Approaches to Combating Them

Rather than taking advantage of system vulnerabilities, semantic attacks take advantage of the way humans interact with computers or interpret messages (Schneier 2000). Many semantic attacks, such as phishing attacks, exploit the difficulty people have making good trust-related decisions when using computing systems that provide insufficient support for those decisions. Our research will focus on finding ways to better support trust decisions related to email and web browsing, although we expect our findings to be more generally applicable to other types of trust decisions and semantic attacks. While a great deal of research has gone into mitigating syntactic attacks, far less has been done to systematically understand and address semantic attacks. As Bruce Schneier (2000) put it, solutions in this area need “to target the people problem, not the math problem.” The “people problem” has been getting increasing attention in the media as phishing attacks continue to proliferate, and research that addresses it is likely to gain significant visibility.

Our interdisciplinary team of researchers affiliated with the CMU Usable Privacy and Security Laboratory is well positioned to work on this problem. There has been little empirical testing of the effectiveness of anti-phishing toolbars (Chou et al., 2004) or trust-related user interface features. In one study, approximately half the participants were unable to use browser cues to recognize a secure web connection (Friedman et al., 2002). Other studies have raised concerns about users’ ability to understand certificates (Whitten and Tygar 1999; Herzberg and Gbara 2004; Zurko et al., 2002). Studies have shown that at least one in four consumers is unable to identify phishing emails presented as part of a study; it is not clear whether people do better interpreting messages in their own inboxes, where they would have additional contextual information (Howell 2004). These studies all suggest that users have trouble interpreting the currently available cues that could help them make better trust decisions. On the other hand, recent work on providing security- and privacy-related cues that users will understand shows some progress. Herzberg and Gbara (2004) report very preliminary but promising results for TrustBar. Ye and Smith (2002) conducted a small user study of a visual system for detecting browser spoofing attacks; they tested several variations on the system and found the dynamic variation, in which the visual cues changed periodically, to be the most usable. In addition, through an iterative design process we have developed a tool that helps users interpret privacy policies at P3P-enabled web sites; we will apply knowledge from this process to supporting trust decisions (Cranor et al., 2002; Cranor et al., 2005).

Modeling Trust

Understanding how individuals make online trust decisions is a necessary step in the development of methodologies and applications to support cybertrust. Studies in psychology, sociology, and economics (e.g., Luhmann 1979; Misztal 1996; Avery et al., 1999; Friedman and Resnick 2003; Guerra et al., 2003) have highlighted the impact of trust and (a sense of) security on social relations and economic transactions. Trust decisions are affected by several factors, including the contextual information available to the individual and her subjective valuation of the consequences of affording trust (or not) to a certain source. Proposed models of trust and trust-based decision making take into consideration factors such as the perceived risk of a situation, the involvement and motivation of the subject who needs to trust another entity, and the credibility of the entity that needs to be trusted (Marsh 1994; Mayer et al., 1995; Gefen 2002; Riegelsberger et al., in press). In particular, previous work on modeling trust in the online environment has focused on identifying factors that tend to motivate trust, and has highlighted the role of credibility (Stanford et al., 2002; Fogg et al., 2001). While these studies will inform our work, they do not directly address our central question: we are specifically interested in what causes people to misplace trust. Hence we begin our analysis with a preliminary model of trust decision making based on decision tree analysis (Raiffa 1968). The underlying assumption of this approach is that trust-based decisions are influenced by what the user knows about the possible risks, whether she cares about those risks, whether she believes she can avoid them, whether she believes her efforts to avoid them are worthwhile, and whether a mental (and most likely informal) cost-benefit analysis shows that it is in her interest to proceed with a transaction. Decision tree analysis tries to determine how these factors account for an individual’s decision process and the paths that lead to her possible mistakes.
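
To make the structure of this analysis concrete, the sketch below renders the decision path just described as a small Python program. It is a minimal illustration under our own assumptions: the TrustContext fields, the branch ordering, and the final cost-benefit rule are hypothetical simplifications for exposition, not a validated model and not drawn from Raiffa (1968).

    # Minimal, illustrative sketch of the trust decision path described
    # above; factor names and the cost-benefit rule are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class TrustContext:
        knows_risk: bool             # is the user aware that a risk exists?
        cares_about_risk: bool       # does she consider the risk significant?
        believes_avoidable: bool     # does she believe she can avoid it?
        avoidance_worthwhile: bool   # is avoidance worth the effort to her?
        perceived_benefit: float     # subjective benefit of proceeding
        perceived_harm: float        # subjective cost if the risk materializes
        perceived_likelihood: float  # subjective probability of harm (0 to 1)

    def proceeds(ctx: TrustContext) -> bool:
        """Walk the decision tree; each early return marks a branch where
        misplaced trust can arise (e.g., the user never learns of the risk)."""
        if not ctx.knows_risk:
            return True   # unaware of any risk, so she proceeds by default
        if not ctx.cares_about_risk:
            return True   # aware of the risk but indifferent to it
        if ctx.believes_avoidable and ctx.avoidance_worthwhile:
            return False  # she declines, or takes a safer path
        # Final branch: an informal mental cost-benefit comparison.
        expected_harm = ctx.perceived_likelihood * ctx.perceived_harm
        return ctx.perceived_benefit > expected_harm

    # Example: a user who never learns of the risk proceeds regardless
    # of how dangerous the transaction objectively is.
    victim = TrustContext(knows_risk=False, cares_about_risk=False,
                          believes_avoidable=False, avoidance_worthwhile=False,
                          perceived_benefit=1.0, perceived_harm=10.0,
                          perceived_likelihood=0.5)
    assert proceeds(victim)

Tracing which branch a given user exits at suggests where support should be injected: a user who exits at the first branch needs awareness cues, while one who reaches the final branch needs help calibrating her perceived likelihood and cost of harm.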

Proposed Research

We anticipate that this project will be a multi-year effort. In the first year we expect to complete:

  • An expert model of how Internet users make trust decisions.
  • A set of mental models interviews that will inform the development of a composite mental model of how Internet users make trust decisions (we expect to continue to develop this model in subsequent years by conducting surveys and laboratory experiments).
  • A prototype embedded training system for educating end users about Internet scams.
  • A prototype system for supporting collaborative trust decisions in detecting and responding to outbreaks.