About Anupam Datta

Anupam Datta is a Research Scientist with CyLab, the Computer Science Department, and the Department of Electrical and Computer Engineering. The overarching goal of his research program is to provide a theoretically well-founded basis for security practice. His research group provides rigorous definitions of what it means for a system to be "secure", along with methods and mechanized tools for verifying whether a given system actually is "secure" in this precise sense.


CyLab Chronicles

Q&A with Anupam Datta

posted by Richard Power

CyLab Chronicles: What are the goals of your Theory of Secure Systems project?

Anupam Datta: Contemporary secure systems are complex and designed to provide subtle security properties in the face of attack. Examples of such systems include hypervisors, virtual machine monitors, security kernels, operating systems, web browsers, and secure co-processor-based systems such as those utilizing the Trusted Computing Group’s Trusted Platform Module (TPM). But are such systems really secure? What does it even mean for a system to be "secure"? What security property does a given system provide and against what class of adversaries?

The goal of the Theory of Secure Systems (ToSS) project is to develop principled approaches for modeling, analysis, and design of secure systems --- their architectures and implementations. We will provide rigorous definitions of security and adversary models for systems, a relatively unexplored and yet very important aspect of system security. A specific goal is to develop a unified framework for modeling secure systems and adversaries in terms of their capabilities and methods for proving the absence of all attacks that an adversary with these capabilities can launch. This is in contrast to the common practice of enumerating known attacks and arguing that the system is not susceptible to them. Finding the right definitions of end-to-end security of systems is also a wide open area.

One significant challenge here is finding the right abstractions: whenever we work with formal models, there is always the question of whether the model faithfully reflects the essential features of the actual system. We plan to validate our models by applying them to real systems: if, for example, an analysis in the model identifies previously known and unknown attacks on systems, then that provides evidence of its usefulness. Another significant challenge with analysis methods for secure systems is scaling these methods to complex, large-scale, real systems. We are exploring several approaches to address this problem, in particular, secure composition and security-preserving translations. Let me elaborate.

Reasoning about the security of a complex system in a modular manner (i.e., by separately proving security of the components it is built from and then combining these proofs) is a major open problem in computer security. We are currently working on this problem, building on our prior successful work on modular reasoning about secure network protocols.

We are also exploring methods for security-preserving translations that will enable us to prove security properties of an abstract model of a complex system, while guaranteeing that those properties carry over to the actual software implementation of the system. One example of how this approach may work in practice can be found in programming language methods, where code written in a high-level language is translated down to low-level executable code. Programs in the high-level language are easier to analyze than programs in the low-level language. But do the guarantees carry over? There has been some progress in carrying over such guarantees for correctness properties, but less so for security properties, in particular because the adversary at the low level may have additional capabilities that the high-level adversary does not.

Another example of such an approach can be found in software model checking methods, where abstract models are automatically extracted from source code (e.g., programs written in C) in a way that if no bugs of a certain form are found in the abstract model, then it is guaranteed (under some additional assumptions) that no bugs exist in the source code. Such methods have been applied to establish correctness properties of some systems, such as device drivers at Microsoft. But again, there is very little work along these lines for security properties of systems --- a gap that our work aims to fill.
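To make the model-checking idea concrete, here is a minimal sketch (in Python, and not drawn from our own tools) of explicit-state model checking: exhaustively explore the reachable states of a small abstract transition system and check that no reachable state violates a safety property. The states and transitions below are invented purely for illustration.

    from collections import deque

    # Toy abstract model (invented for illustration): a session may be
    # opened only after authentication. Abstract state: (session, auth).
    INITIAL = ("closed", "no")

    def successors(state):
        """Enumerate the abstract transitions out of a state."""
        session, auth = state
        nxt = []
        if auth == "no":
            nxt.append((session, "yes"))        # client authenticates
        if auth == "yes" and session == "closed":
            nxt.append(("open", auth))          # open session after auth
        if session == "open":
            nxt.append(("closed", auth))        # close session
        return nxt

    def violates(state):
        # Safety property: a session is never open without authentication.
        return state == ("open", "no")

    def check():
        """Breadth-first exploration of the reachable abstract states."""
        seen, queue = {INITIAL}, deque([INITIAL])
        while queue:
            state = queue.popleft()
            if violates(state):
                return False, state             # counterexample found
            for s in successors(state):
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
        return True, None                       # no reachable bad state

    ok, witness = check()
    print("safe" if ok else f"violation reachable: {witness}")

Because the abstract state space is finite and small, the search is exhaustive: a "safe" verdict covers every behavior of the abstract model, which is exactly the property that tools like the ones described above must then relate back to the concrete source code.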

These are some of the most important problems in computer security, and any progress that we and others make on addressing them will go a long way towards producing a body of generalized, teachable knowledge on the science of security. There has been substantial progress in computer security foundations over the last two decades or so (the IEEE Computer Security Foundations Symposium has provided a forum for a steady stream of good ideas) and an increased interest in the broader security community. A recent round of discussions on this topic took place at the Science of Security Workshop organized by NSF, NSA, and IARPA at Berkeley in November 2008.

CyLab Chronicles: What results do you already have and what kind of impact can this project have on the computer security industry?

Datta: We have made some concrete progress on these research questions. Our target secure systems include widely deployed industry standards in hardware-based security such as trusted computing technologies, secure network protocols such as SSL, hypervisors and virtual machine monitors, and web browsers.

In a recent paper published at the 2009 IEEE Symposium on Security and Privacy, we present a logic for reasoning about properties of secure system designs and use it to carry out a detailed analysis of trusted computing, an industry standard developed by the Trusted Computing Group, a consortium of leading companies including Intel, AMD, IBM, and Microsoft. We model and analyze two trusted computing protocols that rely on Trusted Platform Modules (TPMs) to provide integrity properties: load-time attestation using a Static Root of Trust for Measurement (SRTM) and late launch-based attestation using a Dynamic Root of Trust for Measurement (DRTM).

We formally define and prove code integrity and execution integrity properties of the attestation protocols. To the best of our knowledge, these are the first logical security proofs of these protocols. The formal proofs yield insights about the security of these protocols: the invariants used in the proofs make precise the properties that the Trusted Computing Base (TCB) must satisfy. We describe these invariants and manually check that an invariant holds on a security kernel implementation used in an attestation protocol.

We also demonstrate that newly introduced hardware support for late launch actually adversely affects the security of previous-generation attestation protocols in the absence of additional protection: we describe an attack that uses hardware support for late launch to exploit load-time attestation protocols that measure software starting at system boot. We are now working on extending the framework to support adversaries with different capabilities, modular reasoning, and security-preserving translations.
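For readers unfamiliar with the mechanism, the sketch below (a simplification of mine, not code from the paper) shows the measurement chain underlying load-time attestation: a TPM Platform Configuration Register (PCR) starts at zero, and each loaded component is folded in with the extend operation, so the final PCR value commits to the entire load sequence. A real attestation protocol also involves a verifier-supplied nonce and a quote signed by the TPM, both omitted here.

    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        # TPM 1.2-style extend: PCR := SHA1(PCR || SHA1(component)).
        measurement = hashlib.sha1(component).digest()
        return hashlib.sha1(pcr + measurement).digest()

    def measure_chain(components):
        pcr = bytes(20)                  # PCR starts as 20 zero bytes
        for c in components:
            pcr = extend(pcr, c)
        return pcr

    # Known-good load sequence (component contents are placeholders).
    known_good = [b"bootloader-v1", b"kernel-v1", b"security-kernel-v1"]
    expected = measure_chain(known_good)

    # Verifier's check: the reported PCR must match the expected chain.
    # Substituting a modified kernel changes every later PCR value.
    reported = measure_chain([b"bootloader-v1", b"kernel-evil",
                              b"security-kernel-v1"])
    print("attestation ok" if reported == expected else "attestation failed")

The chained hash is what makes load-time attestation an integrity mechanism: a verifier who trusts the TPM and knows the good measurement list can detect any substituted or inserted component, because the extend operation is not invertible.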

An example of security analysis of system implementations can be found in our work on the ASPIER method and tool, reported in a paper that will be presented at the 2009 IEEE Computer Security Foundations Symposium. Industrial network protocols such as SSL, TLS, Kerberos, IPSec, and IEEE 802.11i are designed to enable secure communication over untrusted networks. However, they are notoriously difficult to get right; the literature is replete with serious security flaws uncovered in protocols many years after they were first published. Over the last three decades, a variety of highly successful methods and tools have been developed for analyzing the security guarantees provided by network protocol designs. Independently, in recent years, there has been significant progress in automatically verifying non-trivial properties of software implementations; one successful technique in this space is software model checking.

We have developed ASPIER, the first framework that weds software model checking with a standard protocol security model to analyze authentication and secrecy properties of protocol implementations in an automated manner. We implemented the method in the ASPIER tool and applied it successfully to establish authentication and secrecy properties of the OpenSSL implementation of the SSL handshake protocol. ASPIER detected the "version-rollback" vulnerability in the OpenSSL source code and successfully verified the implementation when clients and servers are only willing to run SSL 3.0. While these are interesting initial results, significant challenges remain, in particular scaling this method and tool to large-scale software systems and being more faithful to the semantics of C features such as function pointers. But that's good: it keeps us employed!
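As background on this vulnerability class, the sketch below illustrates the rollback protection built into the SSL 3.0 handshake (it is an illustration of mine, not OpenSSL's code): the client embeds its highest supported protocol version in the premaster secret, and a server that fails to compare that embedded version against the version negotiated in the hello messages can be silently downgraded by a man-in-the-middle to an older, weaker protocol.

    SSL_3_0 = (3, 0)

    def server_check_premaster(hello_version, premaster):
        """Reject the handshake if the version embedded in the premaster
        secret disagrees with the version negotiated in the hello."""
        pms_version = (premaster[0], premaster[1])   # first two bytes
        if pms_version != hello_version:
            raise ValueError("version rollback detected: "
                             f"hello={hello_version}, premaster={pms_version}")
        return True

    # Normal handshake: the versions agree (48-byte premaster secret:
    # 2 version bytes followed by 46 random bytes, zeroed here).
    premaster = bytes(SSL_3_0) + bytes(46)
    print(server_check_premaster(SSL_3_0, premaster))

    # Downgraded handshake: the attacker rewrote the hello to claim
    # SSL 2.0, but the client still encoded 3.0 in the premaster secret.
    try:
        server_check_premaster((2, 0), premaster)
    except ValueError as err:
        print(err)

The point of a tool like ASPIER is that a missing or incorrect check of this kind is found directly in the implementation's source code, rather than in a hand-written abstract model of the protocol.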

