CyLab is launching a multi-year Secure and Private IoT Initiative with a singular, ambitious mission:

To create the knowledge and capabilities to build secure and privacy-respecting IoT systems. 


We will develop a suite of novel foundations and technologies that address the following IoT challenges: scalability, speed and cost, safety and security, uptime and reliability, and privacy and compliance. Our approach: build a network that can autonomously protect devices, create novel ways to bootstrap trust in devices (even when compromised), and build in primitives to hold the network and devices accountable for data collection and dissemination.

Through this multi-year Initiative, CyLab intends to re-imagine an IoT world along the following three core concepts and underlying research themes:

1. Autonomous Healing

  • Observation: The network is the only security touchpoint for both new and existing devices.
  • Consequence: A new, resilient, and secure network-centric approach is needed to autonomously close the loop and dynamically customize the network’s security posture.
  • Initiative Objective: An IoT stack that leverages new advances in Software Defined Networking (SDN) to autonomously detect and react to security incidents at machine timescales that humans-in-the-loop fundamentally cannot match (a minimal sketch of such a closed loop follows this list).
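
To make the autonomous-healing idea concrete, below is a minimal Python sketch, not the Initiative's actual design: per-device traffic telemetry is compared against a baseline, and a deviating device is quarantined with no human in the loop. The SdnController class, its install_drop_rule method, and the threshold test are hypothetical stand-ins for a real SDN controller API and a real anomaly detector.

```python
"""Minimal sketch of a network-centric autonomous-healing loop.
The SdnController facade and the anomaly test are hypothetical
placeholders, not the Initiative's actual components."""

from dataclasses import dataclass, field


@dataclass
class SdnController:
    """Hypothetical controller facade; a real deployment would push
    flow rules (e.g., OpenFlow) to switches instead of recording them."""
    rules: list = field(default_factory=list)

    def install_drop_rule(self, device_ip: str) -> None:
        # Quarantine: drop all traffic to and from the device.
        self.rules.append(("drop", device_ip))
        print(f"[controller] quarantined {device_ip}")


def is_anomalous(observed_pps: float, baseline_pps: float,
                 threshold: float = 5.0) -> bool:
    """Crude anomaly test: flag devices sending far more than their baseline."""
    return observed_pps > threshold * baseline_pps


def autonomous_healing_step(controller: SdnController,
                            telemetry: dict,
                            baselines: dict) -> None:
    """One closed-loop iteration: inspect telemetry, react without a human."""
    for device_ip, pps in telemetry.items():
        if is_anomalous(pps, baselines.get(device_ip, 1.0)):
            controller.install_drop_rule(device_ip)


if __name__ == "__main__":
    controller = SdnController()
    baselines = {"10.0.0.5": 20.0, "10.0.0.7": 15.0}
    telemetry = {"10.0.0.5": 22.0, "10.0.0.7": 400.0}  # 10.0.0.7 looks compromised
    autonomous_healing_step(controller, telemetry, baselines)
```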

2. Trusted

  • Observation: IoT devices are deployed with limited or no user or management interfaces, so it is difficult to ensure they remain secure over their lifecycle: from initial configuration, to resetting to a known state in the presence of malware, to ongoing updates.
  • Consequence: Secure enforcement mechanisms are needed for establishing a root of trust across a heterogeneous set of devices without requiring user interaction.
  • Initiative Objective: Trusted devices built on new advances in remote attestation pioneered at CyLab, as well as provably secure primitives for bootstrapping trust, such as verified end-to-end system stacks (a simplified attestation sketch follows this list).
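
Below is a simplified challenge-response sketch of attestation, for illustration only: real remote attestation of the kind pioneered at CyLab rests on a hardware or software root of trust rather than the pre-shared key assumed here, and all names, keys, and images in the sketch are hypothetical.

```python
"""Simplified challenge-response attestation sketch. A pre-shared key
stands in for a genuine root of trust, purely for illustration."""

import hashlib
import hmac
import secrets

SHARED_KEY = b"provisioned-at-manufacture"      # assumption: per-device key
KNOWN_GOOD_FIRMWARE = b"firmware-image-v1.2"    # golden reference image


def device_attest(nonce: bytes, firmware_image: bytes) -> bytes:
    """Device side: measure the running firmware and bind it to the nonce."""
    measurement = hashlib.sha256(firmware_image).digest()
    return hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()


def verifier_check(nonce: bytes, report: bytes) -> bool:
    """Verifier side: recompute the expected report and compare in constant time."""
    expected_measurement = hashlib.sha256(KNOWN_GOOD_FIRMWARE).digest()
    expected_report = hmac.new(SHARED_KEY, nonce + expected_measurement,
                               hashlib.sha256).digest()
    return hmac.compare_digest(report, expected_report)


if __name__ == "__main__":
    nonce = secrets.token_bytes(32)              # fresh challenge prevents replay
    good = device_attest(nonce, KNOWN_GOOD_FIRMWARE)
    bad = device_attest(nonce, b"firmware-with-malware")
    print("healthy device passes:", verifier_check(nonce, good))    # True
    print("compromised device fails:", verifier_check(nonce, bad))  # False
```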

3. Accountable

  • Observation: Massive data collection and machine learning are possible today, but there is no way to ensure that the learning obeys company policies and is not inadvertently biased or privacy-violating.
  • Consequence: Explainable artificial intelligence and enforcement mechanisms are needed so that human security and privacy auditors can verify that the learning respects both policy and the people it affects.
  • Initiative Objective: New accountable artificial intelligence primitives that can be built in to ensure that new devices, when added to the network, obey data collection and privacy policies (a minimal policy-gate sketch follows this list).
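
Below is a minimal sketch, with hypothetical names and an assumed example policy, of a policy gate that checks a new device's declared data-collection manifest against a deployment's privacy policy before the device is admitted to the network; the Initiative's accountable-AI primitives would go well beyond this kind of static check.

```python
"""Minimal policy-gate sketch (illustration only): a device is admitted
to the network only if every (data type, purpose) pair it declares is
permitted by the deployment's privacy policy."""

PRIVACY_POLICY = {
    # data type -> purposes the policy permits (assumed example policy)
    "temperature": {"hvac_control"},
    "occupancy":   {"hvac_control", "lighting"},
    "audio":       set(),   # never allowed in this deployment
}


def violations(manifest: dict) -> list:
    """Return human-readable policy violations for a device's declared collection."""
    problems = []
    for data_type, purposes in manifest.items():
        allowed = PRIVACY_POLICY.get(data_type, set())
        for purpose in purposes - allowed:
            problems.append(f"{data_type} may not be collected for {purpose!r}")
    return problems


def admit_device(device_id: str, manifest: dict) -> bool:
    """Admit the device only if its entire declared collection is policy-compliant."""
    problems = violations(manifest)
    if problems:
        print(f"[policy] rejecting {device_id}:", "; ".join(problems))
        return False
    print(f"[policy] admitting {device_id}")
    return True


if __name__ == "__main__":
    admit_device("thermostat-01", {"temperature": {"hvac_control"}})           # admitted
    admit_device("camera-02", {"audio": {"analytics"}, "occupancy": {"ads"}})  # rejected
```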

Join us!

Is your company or organization interested in working with our researchers on IoT? Let us know by contacting Michael Lisanti.