Carnegie Mellon’s Privacy Engineering Certificate program adds new focus on AI governance

Michael Cunningham

Nov 24, 2025


As artificial intelligence reshapes industries from finance to healthcare, organizations are increasingly seeking professionals who can protect personal data while ensuring that powerful AI systems are developed responsibly.

Carnegie Mellon University is addressing that demand by expanding its Certificate Program in Privacy Engineering and AI Governance to five weeks, adding a dedicated module on AI governance beginning in February 2026.

The program, taught by the same faculty who lead Carnegie Mellon’s master’s program in Privacy Engineering, offers a condensed yet rigorous training experience for working professionals.

“We increasingly cover AI in nearly every module,” explained Norman Sadeh, co-director of the Privacy Engineering program. “But we felt there was a need to fill in some gaps and tie different issues together. The new module will do just that.”

Training in AI governance has become a meaningful differentiator for privacy students and professionals seeking career advancement. Sadeh notes that CMU master’s students who take his Responsible AI and AI Governance elective course frequently report that it has helped them stand out in the job market.

“There’s clearly a very big demand for that type of knowledge,” he said. “Even being able to say on your résumé that your training includes AI makes a difference.”

The new module will explore practical challenges that are increasingly common in real-world AI development: how personal data is collected and used to train models, whether and how information can be removed from trained systems, fairness and bias concerns, AI/ML security, explainability, red teaming, regulatory requirements in the US and abroad, and the privacy impact of technologies such as deepfakes. Students will learn to identify, analyze, and mitigate these risks through a combination of organizational policies and processes, risk modeling, and specific technical measures.

“Organizations are turning to privacy engineers for AI governance roles because they already understand how to analyze systems, policies, workflows, and risks,” said Sadeh. “The overlap between privacy and AI governance is significant, and it’s only growing.”


The certificate is structured for full-time professionals, with coursework delivered live online over weekends—three hours on Saturday and three hours on Sunday—supplemented by weekly assignments. Students consistently report that the interactive format, including group exercises and discussions, is one of the program’s biggest benefits.

“People appreciate interacting in real time with leaders in the field,” said Lorrie Cranor, the program’s co-director. “They also learn a great deal from one another, and they value the chance to build a professional network.”

In addition to the new AI governance module, the program features modules focused on topics ranging from usable privacy design, privacy threat modeling and the regulatory landscape to information security and algorithmic privacy.

Participants come from a wide range of backgrounds, including privacy officers, engineers, lawyers, consultants, and government specialists. Some are already established experts in privacy, while others are pivoting into the field and looking to strengthen their technical and legal understanding.

Prerequisites for the program are intentionally modest: the ability to read small amounts of code, understand simple statistics, and communicate effectively in English. According to the program’s associate director, Hana Habib, what matters most is motivation.

“No one has a full understanding of all the issues,” she said. “Even very senior professionals report that a significant percentage of what we cover is new and interesting.”

With the new module and updated program name, CMU is preparing professionals to lead in a world where privacy protection and AI risk management are inseparable.

“As AI technologies continue to evolve, it’s important for people working in privacy and AI governance to be able to identify, analyze, and mitigate the risks these systems produce,” said Sadeh. “Our goal is to train professionals who can do that thoughtfully, rigorously, and responsibly.”

For more information about the CMU Certificate Program in Privacy Engineering and AI Governance, or to inquire about individual or organizational enrollment, please contact the program.