Secure Software Development and Operation

Researchers: David Brumley, Maverick Woo, Manuel Egele

Cross Cutting Thrusts: Software Security | Formal Methods


In a broad sense, the notion of secure software development encompasses several disciplines of computer science and engineering. These include programming languages and techniques for secure coding, formal specification and verification of software designs and implementations, detection and removal of software vulnerabilities, and methods and tools for security-property analysis of software programs in all their forms (e.g., design specifications, source code, binaries). Here we illustrate a single research area that belongs to this large and fruitful research discipline.

The specific goal of this research is to determine whether a given commercially available software system or application runs approved code that was vetted not to contain known malicious functionality. To achieve this goal, one would have to i) identify the attack surface of the system, including programs that may contain latent vulnerabilities or hidden malicious code, ii) vet that each program on the attack surface is malice-free with respect to the deployed system configuration, and iii) allow an analyst to reliably and efficiently determine that a fielded system is running the vetted and approved software. Achieving these goals requires a whole-system (i.e., holistic) approach.
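Step iii) above amounts to checking a fielded system against an approved manifest. As a minimal sketch of that idea (the function names and manifest format are illustrative assumptions, not part of any particular tool), one could hash each deployed program and compare the digests against an allowlist produced during vetting:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_deployment(deployed_dir: Path, approved: dict) -> list:
    """Compare every program in deployed_dir against an approved
    {name: digest} manifest; return the names that fail the check.

    A failure means the program was either modified after vetting
    or was never vetted at all."""
    failures = []
    for prog in sorted(deployed_dir.iterdir()):
        if approved.get(prog.name) != sha256_of(prog):
            failures.append(prog.name)
    return failures
```

In practice the manifest itself must be protected (e.g., signed), and hashing alone cannot detect malice introduced before vetting; it only pins the deployment to the vetted versions.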

How is this done today, and what are the limitations? Currently, there are very few systematic and coherent approaches for vetting entire software systems and applications. Instead, research has focused on more specific tasks such as checking a single program, modeling the attack surface of a single application, and software-based attestation. These techniques are important, but insufficient in isolation. For example, static program analysis is insufficient for deployed commercial software for at least three reasons. First, static analysis is typically agnostic to the system the program runs on and the type of network connection the system establishes. Second, most static analysis methods and tools take advantage of source code abstractions like buffers, functions, and user-defined types to scale to large configurations, yet with commercial software one typically has access only to program binaries. While there is a wealth of research employing dynamic analysis on binary code (e.g., symbolic execution and dynamic taint analysis), scalable static security analysis has not received as much attention and lags behind. Third, previous attack surface measurement work was blind to the actual security of the programs on the attack surface. For example, suppose system A runs a service whose implementation is verified correct and safe, but system B runs the same service implemented in an unsafe language by an inexperienced developer. Both would have the same attack surface measurements according to previous work, even though in reality the verified version carries far less risk.
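To make the dynamic-analysis techniques named above concrete, the core idea of dynamic taint analysis is simply to mark attacker-controlled inputs and propagate that mark through every computation that depends on them. The toy wrapper below is an illustrative sketch of that propagation rule, not how any real binary-analysis tool represents taint:

```python
class Tainted:
    """Toy taint label: a value derived from attacker input keeps
    its mark, so a check at a security-sensitive sink can flag it."""

    def __init__(self, value, tainted=True):
        self.value = value
        self.tainted = tainted

    def __add__(self, other):
        # Propagation rule: the result is tainted if either operand is.
        ov = other.value if isinstance(other, Tainted) else other
        ot = other.tainted if isinstance(other, Tainted) else False
        return Tainted(self.value + ov, self.tainted or ot)


def reaches_sink_tainted(arg) -> bool:
    """Model a sensitive sink (e.g., a length field used in a copy):
    True means attacker-influenced data reached it."""
    return isinstance(arg, Tainted) and arg.tainted


user_input = Tainted(40)   # e.g., a length byte read from the network
derived = user_input + 2   # arithmetic propagates the taint mark
assert reaches_sink_tainted(derived)
```

Real systems apply this rule at the instruction level over binary code, which is what makes the approach applicable when only program binaries are available.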

Finally, once a software system is vetted and approved, there remains an open window for an adversary to add backdoors and other malicious functionality to future instantiations within an enterprise. Software-based attestation (a notion invented at CMU) attempts to remedy this by allowing an analyst to attest that the software running on a system is the approved version. Unfortunately, the vision of software-based attestation is far from fully realized, and current techniques do not support remote verification in practical settings.
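The challenge-response structure underlying attestation can be sketched as follows. This is a deliberate simplification: real software-based attestation additionally relies on precise timing of a self-checksumming routine on the prover, which a plain hash cannot provide, and the function names and byte strings here are illustrative assumptions:

```python
import hashlib
import secrets


def attest(code: bytes, nonce: bytes) -> str:
    """Prover side: hash a fresh verifier-chosen nonce together with
    the (claimed) code image. The nonce forces the digest to be
    recomputed each time, so a stale answer cannot be replayed."""
    return hashlib.sha256(nonce + code).hexdigest()


def verify(reported: str, approved_code: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute the digest over the approved image
    and compare it with the prover's report in constant time."""
    return secrets.compare_digest(reported, attest(approved_code, nonce))


approved = b"approved program image"  # placeholder for the vetted binary
nonce = secrets.token_bytes(16)       # fresh challenge per attestation
assert verify(attest(approved, nonce), approved, nonce)
assert not verify(attest(b"backdoored image", nonce), approved, nonce)
```

The gap this sketch glosses over is exactly the hard part: a compromised prover can hash a pristine copy of the image while running modified code, which is why practical schemes must also bound the prover's computation time.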