Award-winning research paves the way for provably-safe sandboxing using WebAssembly

Ryan Noone

Oct 11, 2022


Jay Bosamiya, a Ph.D. student in Carnegie Mellon's Computer Science Department (CSD), works on his laptop at the Gates Hillman Complex.

"This is code downloaded from the internet. Are you sure you want to run it?"

In today's computer programming landscape, developers often face the challenge of safely using untrusted code. Libraries and frameworks, for example, help coders skip large amounts of tedious and duplicative work, but using code from unverified sources can become hazardous without the right safeguards in place.

In the worst cases, untrusted code can deplete system resources, lead to data breaches, compromise system integrity, and even create vulnerabilities that allow outsiders to use machines for illegal activity.

Over the past several years, Jay Bosamiya, a Ph.D. student in Carnegie Mellon's Computer Science Department (CSD), and his advisor, Bryan Parno, a professor in CSD and the Department of Electrical and Computer Engineering (ECE), have been working to find ways to eliminate the threats associated with untrusted code.

In their recent award-winning paper, “Provably-Safe Multilingual Software Sandboxing using WebAssembly,” Bosamiya, Parno, and Wen Shih Lim, a master's student in the School of Computer Science, observed that WebAssembly (Wasm) is ideally positioned for safely executing untrusted code: it promises both safety and performance, and it serves as a compiler target for many high-level languages. However, the group notes that these promises are only as strong as the implementation that enforces them.


"Security-critical bugs are found regularly in various implementations”, says Bosamiya, “Writing a high-performance compiler is already hard, and compilers from Wasm need to protect even against adversarial inputs, which makes it even harder.” 

To address this issue, the group has developed two distinct approaches for safely executing Wasm code via provably-safe software sandboxing—a technique that confines the impact of any bugs or malice in the untrusted code, preventing it from harming code or data in its environment.

The first, vWasm, achieves provably-safe sandboxing by drawing on traditional formal methods to produce mathematical, machine-checked safety proofs. In contrast to traditional formally verified compilers, which focus on proving that correct input code is faithfully compiled to equivalent output code, vWasm guarantees that all input code, whether correct, buggy, or outright malicious, is compiled to safely sandboxed code.
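Roughly speaking, the two proof goals have different shapes. The following Lean-style sketch illustrates the contrast; the names and types are illustrative placeholders of ours, not definitions from the paper:

```lean
-- Illustrative placeholders, not the paper's actual definitions.
variable (Program Binary : Type)
variable (compile : Program → Binary)
variable (correct : Program → Prop)               -- the input behaves as intended
variable (behaves_like : Binary → Program → Prop) -- the output faithfully matches the input
variable (safely_sandboxed : Binary → Prop)       -- the output cannot escape its sandbox

/-- A traditional verified compiler's goal: faithful compilation of correct inputs. -/
def traditionalGoal : Prop :=
  ∀ p : Program, correct p → behaves_like (compile p) p

/-- vWasm's goal: every compiled output is safely sandboxed, whatever the input. -/
def vWasmGoal : Prop :=
  ∀ p : Program, safely_sandboxed (compile p)
```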

After a few years of working on vWasm, Bosamiya came up with an idea that changed the way he and Parno had been looking at the problem. Over the course of just a few weeks, he prototyped the first provably-safe sandboxing compiler with competitive run-time performance, rWasm.

Leveraging Rust, a systems programming language with a strong focus on performance, reliability, and safety, rWasm turns low-level Wasm code into higher-level, safe Rust code, enabling safe sandboxing without the tedium of writing formal proofs.
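As a rough illustration of the idea (a hand-written sketch, not rWasm's actual output), the untrusted module's linear memory can be represented as an ordinary Rust vector, so that every memory access the untrusted code makes is bounds-checked by safe Rust rather than simply trusted:

```rust
// A minimal sketch of Wasm-to-safe-Rust lowering, using a toy `Sandbox`
// type of our own; rWasm's generated code is more elaborate than this.

/// Holds the untrusted module's linear memory.
struct Sandbox {
    memory: Vec<u8>,
}

impl Sandbox {
    /// Wasm linear memory grows in 64 KiB pages.
    fn new(pages: usize) -> Self {
        Sandbox { memory: vec![0u8; pages * 65536] }
    }

    /// Safe-Rust rendering of a Wasm `i32.load`: an out-of-bounds address
    /// becomes an explicit trap instead of a read outside the sandbox.
    fn i32_load(&self, addr: u32) -> Result<i32, &'static str> {
        let start = addr as usize;
        let end = start.checked_add(4).ok_or("trap: address overflow")?;
        let bytes = self.memory.get(start..end).ok_or("trap: out-of-bounds load")?;
        Ok(i32::from_le_bytes(bytes.try_into().unwrap())) // slice length is exactly 4
    }

    /// Safe-Rust rendering of a Wasm `i32.store`.
    fn i32_store(&mut self, addr: u32, value: i32) -> Result<(), &'static str> {
        let start = addr as usize;
        let end = start.checked_add(4).ok_or("trap: address overflow")?;
        let slot = self.memory.get_mut(start..end).ok_or("trap: out-of-bounds store")?;
        slot.copy_from_slice(&value.to_le_bytes());
        Ok(())
    }
}

fn main() {
    let mut sandbox = Sandbox::new(1);
    sandbox.i32_store(16, 42).unwrap();
    assert_eq!(sandbox.i32_load(16).unwrap(), 42);
    // An adversarial address is caught by the bounds check, not by luck.
    assert!(sandbox.i32_load(u32::MAX).is_err());
}
```

Because the generated code is safe Rust, the Rust compiler's own guarantees do the sandboxing work that vWasm establishes with hand-written proofs.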

"In the past, there has often seemed to be a tension between safety and performance," says Bosamiya. "With rWasm, you no longer have to pick between the two."

Both vWasm and rWasm are open source, allowing others to use, inspect, and build upon them. Bosamiya hopes computer programmers and researchers worldwide will use them to improve the security of software systems over time.

"Provably-Safe Multilingual Software Sandboxing using WebAssembly" earned both a USENIX Distinguished Paper Award and the 2nd place prize in the 2022 Internet Defense Prize Competition at the 31st USENIX Security Symposium.