Cross Cutting Thrusts: Usable Privacy and Security
Privacy and Usability in Pervasive Computing Environments
As pervasive computing technologies become increasingly widespread and as public awareness of associated information collection practices grows, acceptance of these environments and practices will have to be accompanied by a significantly richer, yet understandable, set of privacy options. What is needed is a framework within which rich, context-sensitive policies can be captured and enforced to control both notification and revelation processes, while minimizing the burden on users. We propose to develop and validate: (1) an open, decentralized trust management infrastructure for enforcing rich, context-sensitive privacy notification and disclosure/revelation policies, and (2) case-based conversational interfaces to semi-automatically capture user privacy preferences. This project builds on ongoing research by members of the team. Solutions will be validated in the context of pervasive computing infrastructures already deployed or in the process of being deployed on CMU’s campus (Aura, MyCampus and Grey projects).
Everyday environments are increasingly being equipped with a dizzying array of sensors and applications that can be used to track and collect information about different aspects of our daily activities. The motivations for deploying elements of this pervasive computing infrastructure include security, convenience, marketing, productivity, and entertainment. Pervasive computing environments give rise to complex privacy challenges that intertwine technical, usability, and policy issues. From an information disclosure perspective, privacy protection can be viewed as the combination of two complementary processes and associated policies: notification, which informs users about what information is collected about them and how it is used, and disclosure (or revelation), which controls what information is released, to whom, and at what level of granularity.
Most pervasive computing environments developed so far have skirted privacy issues. Those that have attempted to address them have primarily focused on “black-and-white,” context-insensitive options (e.g. users have to decide between making their location available to any number of applications and forfeiting the potential benefits of services based on location tracking functionality; or they can make this information available to a list of “buddies” but cannot selectively adjust the accuracy at which this information is disclosed). Yet empirical evidence suggests that people’s privacy preferences are complex and nuanced. A user may be willing to have his location in a building, or perhaps even his vital signs, monitored, but only if this information is used for emergencies. A user might want to selectively control the granularity at which her location is disclosed, allowing her colleagues to see the exact room she is in while on company premises and during office hours, while only allowing them to see the city she is in otherwise. Advanced scene recognition software might be used to enforce selective revelation policies, whereby facial recognition applications are only used following the identification of a suspicious scene (e.g. a crime). Accordingly, we believe that, as pervasive computing technologies become increasingly widespread and public awareness of associated information collection practices grows, acceptance of these environments and practices will have to be accompanied by a significantly richer yet understandable set of privacy options. What is needed is a framework within which rich, context-sensitive policies can be captured and enforced to control both notification and revelation processes, while minimizing the burden on users.
Specifically, our project will produce the following two innovations: (1) an open, decentralized trust management infrastructure for enforcing rich, context-sensitive privacy notification and disclosure/revelation policies (Task 1), and (2) conversational case-based reasoning interfaces that semi-automatically capture user privacy preferences (Task 2).
A significant aspect of our project will be to integrate and validate these solutions in the context of pervasive computing environments we have been developing. This includes Aura, MyCampus, and Grey. These environments already feature a number of prototype applications that range from simple “nearest X” applications (e.g. nearest available conference room) to more sophisticated ones such as a meeting scheduler or a context-aware messaging service that dynamically selects among multiple possible delivery channels (e.g. e-mail, instant messaging, SMS) based on the recipient’s context. The environments have already been partially instrumented to support careful evaluation of a number of key performance and usability issues. Physical infrastructure elements needed for the Grey Project (for example, Bluetooth-equipped electronic door locks) are being integrated into CMU’s new Collaborative Innovation Center (CIC) building. Many of the faculty and students with offices in CIC will be given smart phones, allowing them to use Grey applications as they are developed. As we conduct our work, a central objective will be to evaluate the interplay between the privacy-preserving technologies we propose to develop and how different configurations of these technologies can help mitigate complex tradeoffs between privacy, security, usability, performance, and functionality.
Overall Architecture and Relevant Privacy Policies
In our architecture end-users can interact with the infrastructure either directly (e.g. walking into a room, entering the subway system) or indirectly via agents to which they delegate tasks (e.g. a general-purpose user-agent such as a micro-browser on a smart phone, policy evaluation and notification agents, or task-specific agents such as a context-aware message filtering agent). The infrastructure provides a set of resources generally tied to different geographical areas, such as printers, surveillance cameras, campus-based location tracking functionality, and so on. These resources are all modeled as services that can be automatically discovered based on rich ontology-based service profiles advertised in service directories and accessed via open APIs. Each service and agent has an owner, whether an individual or an organization, who is responsible for setting policies for the service or agent.
One new component that will be added to our architecture is the notion of automatic disclosures. Services that can collect information about users will typically broadcast disclosure messages that inform target users (or more specifically, their agents) about the operation of the service (e.g. users who enter a smart room). Some disclosures are one-way announcements: they simply inform the user that information is collected about them and possibly how that information is used. Other disclosure messages may give the user some options. For example, a location-tracking service may give the user the choice of opting out. Alternatively, the user may be able to allow tracking, while limiting the use of his or her location information (e.g. only for emergency use). Ideally, a policy disclosure evaluation agent will be able to respond to disclosures automatically, based on the user’s policies (e.g. opting out). The same agent should also be able to occasionally notify its user of policies that might lead her to modify her behavior, as well as prompt its user to manually select among possible options when needed.
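The disclosure-handling behavior described above can be sketched in a few lines. This is an illustrative outline only, not the project's actual implementation; the `Disclosure`, `UserPolicy`, and `respond` names and fields are assumptions chosen for the example.

```python
# Sketch of a policy disclosure evaluation agent responding to broadcast
# disclosures. All class and field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Disclosure:
    service: str                 # e.g. "campus-location-tracker"
    data_collected: str          # e.g. "location"
    options: list = field(default_factory=list)  # e.g. ["opt-out", "emergency-only"]

@dataclass
class UserPolicy:
    data_type: str
    preferred_option: str        # option to select automatically, if offered
    notify_user: bool = False    # surface a notification even when auto-handled

def respond(disclosure, policies):
    """Return the option to send back automatically, or None to escalate
    the decision to the user."""
    for p in policies:
        if p.data_type == disclosure.data_collected:
            if p.preferred_option in disclosure.options:
                return p.preferred_option
            return None          # no matching option: prompt the user
    return None                  # no policy covers this data type

policies = [UserPolicy("location", "emergency-only")]
d = Disclosure("campus-location-tracker", "location",
               ["opt-out", "emergency-only"])
print(respond(d, policies))      # -> emergency-only
```

A one-way announcement would simply carry an empty `options` list, in which case the agent's only decision is whether to notify the user.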
Policies of particular interest include: access control policies, which specify what information a user or organization is willing to disclose and to whom; obfuscation policies, which specify the level of granularity or accuracy at which that information is disclosed; and notification (disclosure) policies, which specify how users or their agents are informed about what information a service collects and what happens to that information.
Collectively, these policies enable users and organizations to manage their privacy practices, specifying what information they are willing to disclose (access control) and at what level of granularity (obfuscation), and notifying users or their agents about the information they collect and about what happens to that information.
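The combination of access control and obfuscation can be illustrated with the location-granularity scenario from earlier (exact room for colleagues during office hours on premises, city otherwise). The function and group names below are assumptions for the sketch, not part of any deployed policy language.

```python
# Illustrative sketch: an obfuscation policy that discloses location at
# different granularities depending on requester group and context.
GRANULARITY = ["room", "building", "city"]  # fine -> coarse

def obfuscate(location, level):
    """Truncate a (city, building, room) tuple to the allowed level."""
    keep = {"city": 1, "building": 2, "room": 3}[level]
    return location[:keep]

def allowed_level(requester_group, is_office_hours, on_premises):
    """Access control decision combined with a granularity choice."""
    if requester_group == "family":
        return "room"
    if requester_group == "colleague" and is_office_hours and on_premises:
        return "room"
    if requester_group == "colleague":
        return "city"
    return None  # access denied: no disclosure at any granularity

loc = ("Pittsburgh", "CIC", "Room 2201")
level = allowed_level("colleague", is_office_hours=False, on_premises=False)
print(obfuscate(loc, level) if level else "denied")  # -> ('Pittsburgh',)
```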
Enforcing Rich, Context-Sensitive Trust Policies in Open Pervasive Computing Environments (Task 1)
The deployment of pervasive computing environments featuring an ever broader range of sensors, devices and applications is expected to result in a demand for increasingly rich sets of context-sensitive trust policies by government, enterprises and the public at large. Checking whether these policies are satisfied will require moving away from ad hoc verification algorithms to more powerful logic-based frameworks that can capture a rich set of considerations (e.g. selecting what sources to use for contextual information, determining whom to trust for the purpose of verifying a particular policy, etc.). This will also require proof building techniques that can operate according to significantly less scripted scenarios than is the case today. For example, the system should be able to opportunistically interleave reasoning with the search for additional sources of information required to determine the values of contextual attributes. The benefit of such an approach is that it will lead to new levels of openness, allowing individuals and organizations to enforce policies in new and changing environments. For example, users entering a new environment should feel confident that the policies they “carry with them” can still be enforced. Another goal is to be able to reason about the provenance of facts not just in terms of whether they have been signed by entities we generically trust but also in terms of whether the signing entity may possibly have a “conflict of interest” in contributing to a particular proof. For example, if Bob has control over the location tracking functionality Mary’s agent relies on to determine whether the two of them are in the same building, her agent may need to look for another, more independent source of information.
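The conflict-of-interest check on fact provenance can be sketched as follows. This is a toy illustration of the idea, not Grey's actual proof-carrying authorization logic; all names and the conflict table are assumptions for the example.

```python
# Minimal sketch (illustrative only): signed facts carry provenance, and a
# proof step is rejected when the signer has a stake in the conclusion.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    statement: tuple   # e.g. ("in_building", "Bob", "CIC")
    signer: str

def trustworthy(fact, trusted, interested_parties):
    """Accept a fact only if it was signed by a generically trusted entity
    that has no conflict of interest in this particular statement."""
    return (fact.signer in trusted
            and fact.signer not in interested_parties.get(fact.statement, set()))

trusted = {"campus-tracker", "bob"}
# Bob controls the tracker Mary's agent relies on, so his own signature on
# facts about their co-location is treated as a conflict of interest.
conflicts = {("in_building", "Bob", "CIC"): {"bob"}}

f1 = Fact(("in_building", "Bob", "CIC"), "bob")
f2 = Fact(("in_building", "Bob", "CIC"), "campus-tracker")
print(trustworthy(f1, trusted, conflicts))  # -> False: self-interested signer
print(trustworthy(f2, trusted, conflicts))  # -> True: independent source
```

When the only available fact fails the check, as in `f1`, the agent would fall back to discovering another, more independent source, as described above.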
Our work will build on prior research by members of the project team. This includes work by Norman Sadeh on semantic web technologies capable of interleaving reasoning and dynamic service invocation functionality to enforce ontology-based context-sensitive policies. Our project will directly leverage Grey’s logic-based decentralized trust management infrastructure developed by Mike Reiter and Lujo Bauer and work by Peter Steenkiste in Aura. Initial work will involve using semantic web technologies to express and reason about context-sensitive privacy policies and service invocation rules. We will combine these technologies with extensions of protocols and proof-carrying authorization functionality developed as part of the Grey decentralized trust management infrastructure. The product of this work will be a semantic extension of Grey’s decentralized trust management infrastructure, where reasoning, semantic web service discovery and remote invocation can be opportunistically interleaved to enforce rich (ontology-based) context-sensitive policies. As our work matures, we expect to experiment with different meta-control strategies to orchestrate these processes. This will include prioritizing the order in which different services are accessed as well as exploring opportunities for concurrency (e.g. querying multiple candidate services at the same time or building speculative proofs that assume that some facts are true while they are concurrently being verified). In the process, we will evaluate tradeoffs between openness (e.g. using simple service invocation rules versus more open-ended discovery processes), expressiveness (e.g. web service profiles used to describe different sources of contextual information) and efficiency.
This will include exploring architectural tradeoffs in exploiting the complementarity of Grey’s small-footprint, proof-carrying authorization framework and MyCampus’s more expressive semantic web technologies for interleaving reasoning and dynamic service discovery. Empirical evaluation will be done through a combination of actual scenarios that leverage the pervasive computing infrastructure of the Aura, MyCampus and Grey projects as well as through simulations involving randomly generated variations of these scenarios (e.g. different distributions of actors, services, policies, different mappings between services and policies such as different levels of decentralization, and different request distributions).
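One of the meta-control strategies mentioned above, querying multiple candidate context services concurrently and taking the first answer, can be sketched briefly. The two locator functions are stand-ins for real context services, not actual APIs.

```python
# Sketch of one concurrency strategy: query several candidate context
# services at the same time and use whichever responds first.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def wifi_locator(user):
    time.sleep(0.05)               # stand-in for a slower service call
    return ("building", "CIC")

def badge_locator(user):
    time.sleep(0.01)               # stand-in for a faster service call
    return ("building", "CIC")

def first_answer(services, user):
    """Submit all candidate queries concurrently; return the first result."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(s, user) for s in services]
        for f in as_completed(futures):
            return f.result()      # take the fastest responder

print(first_answer([wifi_locator, badge_locator], "mary"))
```

A speculative-proof variant would instead assume the fact and continue reasoning while these queries run, discarding the proof branch if verification later fails.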
Conversational Case-Based Reasoning Interface to Capture Privacy Preferences (Task 2)
Our goal is to design useful and usable user interfaces (UIs) that give people control over and feedback about what personal information is being collected and shared. We want to build interfaces that give users the control they desire without requiring them to attend to frequent prompts or spend a lot of time configuring the interface. The interface should also allow users to determine easily when their personal information is exposed. However, designing a simple and effective UI that gives people control over their privacy is difficult for several reasons: user privacy preferences are often highly context-sensitive, users tend to have difficulty articulating their privacy preferences, and users often do not understand the privacy-related consequences of their behavior. In addition, users cannot always anticipate situations in which they may want to specify privacy preferences in advance, and they do not want to devote a lot of time to configuring privacy preferences. To address these challenges, we plan to develop several new interaction techniques for managing one’s privacy preferences, as well as for maintaining awareness of what is being collected. This research will build on prior work by Lorrie Cranor, Jason Hong, Bruce McLaren, and Norman Sadeh.
In particular, we propose to develop and evaluate case-based reasoning (CBR) functionality aimed at capturing privacy preferences with minimum added burden on the user. In CBR, past “cases” are used to guide decision-making in new situations. Cases are representations of past scenarios and are continually added to the case base as the problem solver confronts new situations. CBR is a symbolic approach to problem solving, providing rich representations of scenarios and a means for retrieving those scenarios. The symbolic nature of the cases allows the retrieved cases to be altered and combined to better match new situations. Our idea is to provide advice or automated support regarding preferences in situations where a requested service does not match a user’s privacy preferences. For example, a user might have a default preference not to allow anyone outside her immediate family to receive her location information on weekends. However, she is attending a conference on a Saturday and is trying to coordinate informal meetings with other conference attendees. If she attended a previous weekend conference and allowed conference attendees to receive her location information, the CBR system might be able to generalize from the past case to determine what to do in the present case. In other words, our specific use of CBR will be to try to find exceptions to the user's default privacy preferences by accessing the user's past history. Exceptions are past cases with similar contextual attributes (e.g., a similar or the same requestor, the same general time of day, etc.) in which the user made privacy decisions counter to those prescribed by the default preferences. If an exception exists, then the CBR system will attempt to acquire or infer additional information based on the exceptional cases to determine the preferences to apply.
After handling the current situation, the new exceptional case will be added to the case base for future reference and, potentially, the default preferences will be adjusted.
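The exception-finding step can be sketched as a similarity-based retrieval over contextual attributes. The attribute names, the flat similarity measure, and the threshold are assumptions for this illustration; a real case representation would be richer and symbolic, as described above.

```python
# Hedged sketch of exception retrieval: find past cases whose contextual
# attributes resemble the current request and whose decision contradicted
# the default preference. Attribute names are illustrative assumptions.
def similarity(case, query):
    """Fraction of matching contextual attributes (a simple stand-in)."""
    attrs = ("requester_type", "day_type", "event_type")
    return sum(case[a] == query[a] for a in attrs) / len(attrs)

def find_exceptions(case_base, query, default_decision, threshold=0.6):
    return [c for c in case_base
            if similarity(c, query) >= threshold
            and c["decision"] != default_decision]

case_base = [
    {"requester_type": "conference-attendee", "day_type": "weekend",
     "event_type": "conference", "decision": "allow"},
    {"requester_type": "colleague", "day_type": "weekday",
     "event_type": "office", "decision": "allow"},
]
query = {"requester_type": "conference-attendee", "day_type": "weekend",
         "event_type": "conference"}
# The default preference denies weekend location requests; the first past
# case is a matching exception in which the user allowed disclosure.
print(find_exceptions(case_base, query, "deny"))
```

After the current situation is handled, the new case would be appended to `case_base`, mirroring the case-base growth described above.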
We will use a conversational case-based reasoning (CCBR) approach to conduct dialogues with the user. Questions from the system to the user and the user’s answers to those questions will be used to incrementally refine and retrieve past cases. Additional information acquired through conversation will provide evidence for eliminating or lowering the probability of some cases and/or including or increasing the probability of others. Our research goal is to make conversations efficient and short. In particular, we plan to experiment with dialogue inferencing and dialogue termination techniques.
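A core step in keeping CCBR conversations short is choosing which question to ask next. The heuristic below, picking the unanswered attribute whose values best split the remaining candidate cases, is a simple stand-in for the dialogue inferencing techniques we plan to investigate; the attribute names are illustrative.

```python
# Sketch of conversational refinement: ask the unanswered question that
# best discriminates among the remaining candidate cases.
from collections import Counter

def next_question(cases, answered):
    """Pick the attribute whose values are most evenly spread across the
    remaining cases (and thus prune the most, whatever the answer)."""
    best, best_score = None, 0.0
    for attr in cases[0]:
        if attr in answered or attr == "decision":
            continue
        counts = Counter(c[attr] for c in cases)
        score = 1 - max(counts.values()) / len(cases)
        if score > best_score:
            best, best_score = attr, score
    return best

cases = [
    {"requester_type": "colleague", "day_type": "weekend", "decision": "deny"},
    {"requester_type": "colleague", "day_type": "weekday", "decision": "allow"},
]
print(next_question(cases, answered={"requester_type"}))  # -> day_type
```

Each answer would then be used to eliminate or down-weight cases, and the dialogue terminates once one case (or decision) dominates.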
Deliverables, Including Transition Plans
As this technology matures, we expect to transition it into prototype pervasive computing environments deployed on CMU’s campus, including Aura, Grey and MyCampus. In addition, we will continue to interact with members of the CyLab consortium to pursue additional opportunities to transfer our technology.