
Privacy and Usability in Pervasive Computing Environments

Researchers: Norman Sadeh, Lujo Bauer

Cross Cutting Thrusts: Usable Privacy and Security

Abstract


As pervasive computing technologies become increasingly widespread and as public awareness of associated information collection practices grows, acceptance of these environments and practices will have to be accompanied by a significantly richer, yet understandable, set of privacy options. What is needed is a framework within which rich, context-sensitive policies can be captured and enforced to control both notification and revelation processes, while minimizing the burden on users. We propose to develop and validate (1) an open, decentralized trust management infrastructure for enforcing rich, context-sensitive privacy notification and disclosure/revelation policies, and (2) case-based conversational interfaces to semi-automatically capture user privacy preferences. This project builds on ongoing research by members of the team. Solutions will be validated in the context of pervasive computing infrastructures already deployed or in the process of being deployed on CMU’s campus (Aura, MyCampus and Grey projects).

Project Overview

Everyday environments are increasingly being equipped with a dizzying array of sensors and applications that can be used to track and collect information about different aspects of our daily activities. The motivations for deploying elements of this pervasive computing infrastructure include security, convenience, marketing, productivity, and entertainment. Pervasive computing environments give rise to complex privacy challenges that intertwine technical, usability, and policy issues. From an information disclosure perspective, privacy protection can be viewed as the combination of two complementary processes and associated policies:

  • A notification process aimed at informing users about the types of information that may be collected about them and the terms according to which this information will be treated (e.g. for what purpose the information is collected and with what entities the information might be shared). On the Web, privacy notifications are becoming commonplace (even though the clarity of these notifications is often dubious). Public areas have also started to display signs notifying people that they are being watched by cameras. It is easy to imagine that, in the future, some of these notifications could be replaced by “beacons” through which sensors and applications would advertise machine-readable policies “à la P3P” [Cra05, Tyg03]; a minimal sketch of such a beacon message appears after this list. In some situations, the notification process might be accompanied by an option to accept, reject, or even negotiate some terms of the advertised policy.
  • A controlled revelation process whereby information about a user is disclosed to entities requesting access to it under the terms advertised during the notification process.
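
As an illustration only, the sketch below shows what a machine-readable disclosure beacon of this kind might look like. The DisclosureBeacon structure and its field names (service_id, purposes, retention_days, etc.) are assumptions made for the example and do not correspond to actual P3P syntax.

```python
# Hypothetical sketch of a machine-readable disclosure "beacon" message,
# loosely inspired by P3P-style policies; field names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DisclosureBeacon:
    service_id: str                      # identifier of the collecting service
    data_collected: List[str]            # e.g. ["location", "video"]
    purposes: List[str]                  # e.g. ["security", "emergency-response"]
    shared_with: List[str]               # entities the data may be shared with
    retention_days: int                  # how long the data is kept
    options: List[str] = field(default_factory=list)  # e.g. ["opt-out", "restrict-purpose"]

# A camera in a smart room might broadcast something like:
beacon = DisclosureBeacon(
    service_id="cic-room-2203/camera-1",
    data_collected=["video"],
    purposes=["security"],
    shared_with=["campus-police"],
    retention_days=30,
    options=["opt-out"],
)
print(beacon)
```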

Most pervasive computing environments developed so far have skirted privacy issues. Those that have attempted to address them have primarily focused on “black-and-white” context-insensitive options (e.g. users have to decide between making their location available to any number of applications or forfeiting the potential benefits of services based on location tracking functionality; or they can make this information available to a list of “buddies” but cannot selectively adjust the accuracy at which this information is disclosed). Yet, empirical evidence suggests that people’s privacy preferences are complex and nuanced. A user may be willing to have his location in a building, or perhaps even his vital signs, monitored, but only if this information is used for emergencies. A user might want to selectively control the granularity at which her location is disclosed, allowing her colleagues to see the exact room she is in while on company premises and during office hours, while only allowing them to see the city she is in otherwise. Advanced scene recognition software might be used to enforce selective revelation policies, whereby facial recognition applications are used only following the identification of a suspicious scene (e.g. a crime). Accordingly, we believe that as pervasive computing technologies become increasingly widespread and as public awareness of associated information collection practices grows, acceptance of these environments and practices will have to be accompanied by a significantly richer yet understandable set of privacy options. What is needed is a framework within which rich, context-sensitive policies can be captured and enforced to control both notification and revelation processes, while minimizing the burden on users.
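
As a concrete illustration of this kind of context-sensitive preference, the following sketch encodes the room-versus-city example as a simple rule; the attribute names, roles, and office-hours definition are assumptions made for the example.

```python
# Minimal sketch of one context-sensitive preference: colleagues see the exact
# room during office hours on company premises, and only the city otherwise.
from datetime import datetime

def location_granularity(requester_role: str, on_premises: bool, now: datetime) -> str:
    """Return how precisely the user's location may be disclosed."""
    office_hours = now.weekday() < 5 and 9 <= now.hour < 17
    if requester_role == "colleague" and on_premises and office_hours:
        return "room"        # exact room number
    if requester_role in ("colleague", "buddy"):
        return "city"        # coarse location only
    return "none"            # no disclosure

print(location_granularity("colleague", True, datetime(2025, 3, 4, 10)))   # room
print(location_granularity("colleague", False, datetime(2025, 3, 8, 20)))  # city
```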

Specifically, our project will produce the following two innovations:

  • An Open, Decentralized Trust Management Infrastructure for Enforcing Rich, Context-Sensitive Privacy Notification and Revelation Policies: The environment will support the specification and enforcement of rich, context-sensitive notification and revelation policies. It will allow for contextual restrictions that refer to any number of concepts defined in ontologies of contextual attributes and will leverage ontologies of contextual sources of information (modeled as semantic web services). Our infrastructure will build on security elements and protocols for distributed proof-carrying authorization developed by Bauer and Reiter in the Grey project, on semantic web reasoning and dynamic service identification technologies for context-sensitive policies developed by Sadeh in the MyCampus project, and on Steenkiste’s work on a decentralized SPKI-based access control infrastructure for pervasive computing in Aura. These technologies will be integrated to develop an infrastructure where proof building, service discovery, remote invocation and proof verification can be opportunistically interleaved subject to different meta-control strategies. In particular, our framework will support the dynamic discovery of resources (e.g. sensors or applications) that can be used to check the value of contextual attributes and verify whether a given policy is satisfied, as users move across different pervasive computing environments (e.g. from their office to their car to the airport); a toy sketch of this interleaving appears after this list. The framework will not be limited to access control policies but will also include support for evaluation of context-sensitive obfuscation policies, information collection policies and notification policies.
  • Usable Interfaces and Supporting Technologies to Capture Context-Sensitive Policies and Selectively Notify Users: We will design and evaluate novel user interfaces that empower people with effective control over and feedback about what personal information is being collected about them and how that information is being used. Our approach will combine iterative user-centered design concepts with conversational case-based reasoning, which we believe can significantly alleviate the burden of specifying one’s privacy preferences. This work will build on the expertise and prior work of Hong, Cranor, McLaren, and Sadeh.
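
To make the interleaving described in the first bullet more concrete, here is a toy sketch (not the Grey or MyCampus implementation) in which a proof check queries a directory of contextual services whenever a needed attribute is not yet known. The directory contents, attribute names, and functions are hypothetical.

```python
# Illustrative sketch of interleaving reasoning with dynamic discovery of
# contextual services; real sources would be remote service invocations.

# A directory mapping contextual attributes to sources that can produce them.
service_directory = {
    "location(alice)": lambda: "CIC-2203",
    "time_of_day": lambda: "office-hours",
}

def prove(goals, facts):
    """Try to establish every goal, querying contextual services for
    attributes that are not already known facts."""
    for goal in goals:
        if goal in facts:
            continue
        service = service_directory.get(goal)   # dynamic discovery step
        if service is None:
            return False, facts                 # no source for this attribute
        facts[goal] = service()                 # remote invocation step
    return True, facts

# Policy check: disclose Alice's room only if her location and the time of day
# can both be established.
ok, facts = prove(["location(alice)", "time_of_day"], {})
print(ok, facts)
```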

A significant aspect of our project will be to integrate and validate these solutions in the context of pervasive computing environments we have been developing. This includes Aura, MyCampus, and Grey. These environments already feature a number of prototype applications that range from simple “nearest X” applications (e.g. nearest available conference room) to more sophisticated ones such as a meeting scheduler or a context-aware messaging service that dynamically selects among multiple possible delivery channels (e.g. e-mail, instant messaging, SMS) based on the recipient’s context. The environments have already been partially instrumented to support careful evaluation of a number of key performance and usability issues. Physical infrastructure elements needed for the Grey Project (for example, Bluetooth-equipped electronic door locks) are being integrated into CMU’s new Collaborative Innovation Center (CIC) building. Many of the faculty and students with offices in CIC will be given smart phones, allowing them to use Grey applications as they are developed. As we conduct our work, a central objective will be to evaluate the interplay between the privacy-preserving technologies we propose to develop and how different configurations of these technologies can help mitigate complex tradeoffs between privacy, security, usability, performance, and functionality.

Overall Architecture and Relevant Privacy Policies

In our architecture end-users can interact with the infrastructure either directly (e.g. walking into a room, entering the subway system) or indirectly via agents to which they delegate tasks (e.g. a general-purpose user-agent such as a micro-browser on a smart phone, policy evaluation and notification agents, or task-specific agents such as a context-aware message filtering agent). The infrastructure provides a set of resources generally tied to different geographical areas, such as printers, surveillance cameras, campus-based location tracking functionality, and so on. These resources are all modeled as services that can be automatically discovered based on rich ontology-based service profiles advertised in service directories and accessed via open APIs. Each service and agent has an owner, whether an individual or an organization, who is responsible for setting policies for the service or agent.

One new component that will be added to our architecture is the notion of automatic disclosures. Services that can collect information about users (e.g. users who enter a smart room) will typically broadcast disclosure messages that inform those users, or more specifically their agents, about the operation of the service. Some disclosures are one-way announcements: they simply inform users that information is collected about them and possibly how that information is used. Other disclosure messages may give the user some options. For example, a location-tracking service may give the user the choice of opting out. Alternatively, the user may be able to allow tracking, while limiting the use of his or her location information (e.g. only for emergency use). Ideally, a policy disclosure evaluation agent will be able to respond to disclosures automatically, based on the user’s policies (e.g. opting out). The same agent should also be able to occasionally notify its user of policies that might lead her to modify her behavior, as well as prompt its user to manually select among possible options when needed.
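The sketch below illustrates how such a disclosure evaluation agent might react to a broadcast disclosure under a user’s policies; the decision rules, policy keys, and message fields are invented for the example and are not part of any deployed system.

```python
# Hedged sketch of an agent responding automatically to a disclosure message.

def respond_to_disclosure(beacon: dict, user_policy: dict) -> str:
    """Return the agent's response to a disclosure message."""
    data = set(beacon["data_collected"])
    purposes = set(beacon["purposes"])
    # Opt out of location tracking unless it is restricted to emergency use.
    if "location" in data and "opt-out" in beacon["options"]:
        if user_policy.get("location_only_for_emergencies") and purposes != {"emergency-response"}:
            return "opt-out"
    # Otherwise accept, but notify the user if video is being collected.
    if "video" in data and user_policy.get("notify_on_video"):
        return "accept-and-notify-user"
    return "accept"

policy = {"location_only_for_emergencies": True, "notify_on_video": True}
beacon = {"data_collected": ["location"], "purposes": ["marketing"], "options": ["opt-out"]}
print(respond_to_disclosure(beacon, policy))   # opt-out
```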

Policies of particular interest include:

  • Access control policies that limit access only to entities that can be proved to satisfy certain conditions.
  • Obfuscation policies that associate different levels of quality or resolution to different sets of credentials.
  • Information collection policies (in P3P, these policies are simply referred to as Privacy Policies) that specify what type of information is collected by a service, for what purpose, how that information will be stored, etc.
  • Notification Preference Policies specifying under which conditions a user may want to be alerted about the presence of sensors or other information collection applications.

Collectively, these policies enable users and organizations to manage their privacy practices, specifying what information they are willing to disclose (access control) and at what level of granularity (obfuscation), and notifying users or their agents about the information they collect and about what happens to that information.
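
For concreteness, the sketch below gives one possible, purely illustrative encoding of the four policy types listed above; the field names are assumptions made for the example, not a proposed policy language.

```python
# Illustrative encodings of the four policy types.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AccessControlPolicy:
    resource: str
    required_conditions: List[str]     # conditions the requester must be proved to satisfy

@dataclass
class ObfuscationPolicy:
    resource: str
    resolution_by_credential: Dict[str, str]   # e.g. {"family": "room", "colleague": "city"}

@dataclass
class InformationCollectionPolicy:     # "privacy policy" in P3P terms
    data_collected: List[str]
    purpose: str
    retention_days: int

@dataclass
class NotificationPreferencePolicy:
    alert_when: List[str]              # e.g. ["video collected", "data shared with third parties"]

print(ObfuscationPolicy("location(alice)", {"family": "room", "colleague": "city"}))
```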

Enforcing Rich, Context-Sensitive Trust Policies in Open Pervasive Computing Environments (Task 1)

The deployment of pervasive computing environments featuring an ever broader range of sensors, devices and applications is expected to result in a demand for ever richer sets of context-sensitive trust policies by government, enterprises and the public at large. Checking whether these policies are satisfied will require moving away from ad hoc verification algorithms to more powerful logic-based frameworks that can capture a rich set of considerations (e.g. selecting what sources to use for contextual information, determining whom to trust for the purpose of verifying a particular policy, etc.). This will also require proof building techniques that can operate according to significantly less scripted scenarios than is the case today. For example, the system should be able to opportunistically interleave reasoning with the search for additional sources of information required to determine the values of contextual attributes. The benefit of such an approach is that it will lead to new levels of openness, allowing individuals and organizations to enforce policies in new and changing environments. For example, users entering a new environment should feel confident that the policies they “carry with them” can still be enforced. Another goal is to be able to reason about the provenance of facts not just in terms of whether they have been signed by entities we generically trust but also in terms of whether the signing entity may possibly have a “conflict of interest” in contributing to a particular proof. For example, if Bob has control over the location tracking functionality Mary’s agent relies on to determine whether the two of them are in the same building, her agent may need to look for another, more independent source of information.
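
The conflict-of-interest idea can be illustrated with a small sketch: a signed fact is admitted into a proof only if its source is both trusted and not controlled by a party with an interest in the outcome. The trust and control tables below are hypothetical.

```python
# Sketch of a provenance / conflict-of-interest check during proof building.

trusted_signers = {"campus-location-service", "building-sensor-net", "bob-home-tracker"}
controlled_by = {"bob-home-tracker": "bob"}   # who operates each source

def admissible(fact_signer: str, interested_parties: set) -> bool:
    """Accept a signed fact only from trusted, disinterested sources."""
    if fact_signer not in trusted_signers:
        return False
    return controlled_by.get(fact_signer) not in interested_parties

# Mary's agent checks whether she and Bob are in the same building; Bob is an
# interested party, so a fact signed by a source he controls is rejected.
print(admissible("bob-home-tracker", {"bob"}))          # False -> look elsewhere
print(admissible("campus-location-service", {"bob"}))   # True
```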

Our work will build on prior research by members of the project team. This includes work by Norman Sadeh on semantic web technologies capable of interleaving reasoning and dynamic service invocation functionality to enforce ontology-based context-sensitive policies. Our project will directly leverage Grey’s logic-based decentralized trust management infrastructure developed by Mike Reiter and Lujo Bauer and work by Peter Steenkiste in Aura. Initial work will involve using semantic web technologies to express and reason about context-sensitive privacy policies and service invocation rules. We will combine these technologies with extensions of protocols and proof-carrying authorization functionality developed as part of the Grey decentralized trust management infrastructure. The product of this work will be a semantic extension of Grey’s decentralized trust management infrastructure, where reasoning, semantic web service discovery and remote invocation can be opportunistically interleaved to enforce rich (ontology-based) context-sensitive policies. As our work matures, we expect to experiment with different meta-control strategies to orchestrate these processes. This will include prioritizing the order in which different services are accessed as well as exploring opportunities for concurrency (e.g. querying multiple candidate services at the same time or building speculative proofs that assume that some facts are true while they are concurrently being verified). In the process, we will evaluate tradeoffs between openness (e.g. using simple service invocation rules versus more open-ended discovery processes), expressiveness (e.g. web service profiles used to describe different sources of contextual information) and efficiency. This will include exploring architectural tradeoffs in exploiting the complementarity of Grey’s small-footprint, proof-carrying authorization framework and MyCampus’s more expressive semantic web technologies for interleaving reasoning and dynamic service discovery. Empirical evaluation will be done through a combination of actual scenarios that leverage the pervasive computing infrastructure of the Aura, MyCampus and Grey projects as well as through simulations involving randomly generated variations of these scenarios (e.g. different distributions of actors, services, policies, different mappings between services and policies such as different levels of decentralization, and different request distributions).
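
As one illustration of the concurrency strategies mentioned above, the toy sketch below queries several candidate contextual services in parallel and uses the first answer it receives; the service names and simulated latencies are stand-ins for real remote invocations.

```python
# Rough sketch of one meta-control strategy: concurrent queries to candidate
# contextual services, taking the first result that arrives.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time, random

def query(service_name: str) -> str:
    time.sleep(random.uniform(0.1, 0.5))       # simulate network latency
    return f"{service_name}: CIC-2203"

candidates = ["wifi-locator", "badge-reader", "calendar-inference"]

with ThreadPoolExecutor() as pool:
    futures = {pool.submit(query, s): s for s in candidates}
    for done in as_completed(futures):
        print("first answer from", futures[done], "->", done.result())
        break   # remaining queries can be cancelled or ignored
```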

Conversational Case-Based Reasoning Interface to Capture Privacy Preferences (Task 2)

Our goal is to design useful and usable user interfaces (UI) that give people control over and feedback about what personal information is being collected and shared. We want to build interfaces that give users the control they desire without requiring them to attend to frequent prompts or spend a lot of time configuring the interface. The interface should also allow users to determine easily when their personal information is exposed. However, designing a simple and effective UI that gives people control over their privacy is difficult for several reasons: user privacy preferences are often highly context-sensitive, users tend to have difficulty articulating their privacy preferences, and users often do not understand the privacy-related consequences of their behavior. In addition, users cannot always anticipate situations in which they may want to specify privacy preferences in advance, and they do not want to devote a lot of time to configuring privacy preferences. To address these challenges, we plan on developing several new interaction techniques for managing one’s privacy preferences, as well as for maintaining awareness of what is being collected. This research will build on prior work by Lorrie Cranor, Jason Hong, Bruce McLaren, and Norman Sadeh.

In particular, we propose to develop and evaluate case-based reasoning (CBR) functionality aimed at capturing privacy preferences with minimum added burden on the user. In CBR, past “cases” are used to guide decision-making in new situations. Cases are representations of past scenarios and are continually added to the case base as the problem solver confronts new situations. CBR is a symbolic approach to problem solving, providing rich representations of scenarios and a means for retrieving those scenarios. The symbolic nature of the cases allows the retrieved cases to be altered and combined to better match new situations. Our idea is to provide advice or automated support regarding preferences in situations where a requested service does not match a user’s privacy preferences. For example, a user might have a default preference not to allow anyone outside her immediate family to receive her location information on weekends. However, she is attending a conference on a Saturday and is trying to coordinate informal meetings with other conference attendees. If she attended a previous weekend conference and allowed conference attendees to receive her location information, the CBR might be able to generalize from the past case to determine what to do in the present case. In other words, our specific use of CBR will be to try to find exceptions to the user's default privacy preferences by accessing the user's past history. Exceptions are past cases with similar contextual attributes (e.g., a similar or the same requestor, the same general time of day, etc.) in which the user made privacy decisions counter to those prescribed by the default preferences. If an exception exists, then the CBR system will attempt to acquire or infer additional information based on the exceptional cases to determine the preferences to apply. After handling the current situation, the new exceptional case will be added to the case base for future reference and, potentially, the default preferences will be adjusted.
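
A minimal sketch of this exception-finding step might look as follows; the similarity measure, contextual attributes, and cases are invented for illustration and are not part of the actual system.

```python
# Sketch: retrieve past cases whose context is similar to the current request
# and whose decision contradicted the default preference.

default_decision = "deny"   # e.g. no location sharing with non-family on weekends

past_cases = [
    {"requester": "conference-attendee", "day": "saturday", "event": "conference", "decision": "allow"},
    {"requester": "coworker", "day": "sunday", "event": "none", "decision": "deny"},
]

def similarity(case: dict, situation: dict) -> float:
    shared = [k for k in situation if case.get(k) == situation[k]]
    return len(shared) / len(situation)

def find_exceptions(situation: dict, threshold: float = 0.6):
    return [c for c in past_cases
            if similarity(c, situation) >= threshold and c["decision"] != default_decision]

current = {"requester": "conference-attendee", "day": "saturday", "event": "conference"}
print(find_exceptions(current))   # the past weekend-conference case suggests "allow"
```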

We will use a conversational case-based reasoning (CCBR) approach to conduct dialogues with the user. Questions from the system to the user and the user’s answers to those questions will be used to incrementally refine and retrieve past cases. Additional information acquired through conversation will provide evidence for eliminating or lowering the probability of some cases and/or including or increasing the probability of others. Our research goal is to make conversations efficient and short. In particular, we plan to experiment with dialogue inferencing and dialogue termination techniques.
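
The sketch below illustrates this conversational refinement at a toy level: each answered question prunes the set of candidate cases, and the dialogue terminates once at most one case remains. The questions, attributes, and stand-in ask() function are assumptions made for the example.

```python
# Toy sketch of conversational case refinement and dialogue termination.

candidates = [
    {"id": 1, "event": "conference", "requester": "attendee", "decision": "allow"},
    {"id": 2, "event": "conference", "requester": "stranger", "decision": "deny"},
    {"id": 3, "event": "family-trip", "requester": "attendee", "decision": "deny"},
]

questions = [("Is this a work-related event?", "event", "conference"),
             ("Is the requester attending the same event?", "requester", "attendee")]

def ask(question: str) -> bool:
    # Stand-in for a real UI prompt; here we assume the user answers "yes".
    print("Q:", question, "-> yes")
    return True

for question, attribute, value_if_yes in questions:
    if len(candidates) <= 1:
        break                                    # dialogue termination
    answer = ask(question)
    candidates = [c for c in candidates
                  if (c[attribute] == value_if_yes) == answer]

print("retrieved case(s):", candidates)
```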

Deliverables, Including Transition Plans

Task 1:

  1. Pilot decentralized trust management infrastructure for enforcing rich, context-sensitive privacy revelation/disclosure and notification policies along with baseline meta-control strategies for opportunistically interleaving service discovery, remote service access and reasoning.
  2. A report on the above detailing the evaluation of different design tradeoffs. Content of the report will be submitted for publication to one or more top-tier conferences/journals.

Task 2:

  1. Pilot conversational case-based reasoning functionality for capturing user privacy preferences, along with a preliminary evaluation of the interface and embedded functionality.
  2. A report on the above detailing the evaluation of different design tradeoffs. Content of the report will be submitted for publication to one or more top-tier conferences/journals.

As this technology matures, we expect to transition it into prototype pervasive computing environments deployed on CMU’s campus, including Aura, Grey and MyCampus. In addition, we will continue to interact with members of the CyLab consortium to pursue additional opportunities to transfer our technology.