CyLab students bring responsible AI lessons to high school and middle school classrooms

Michael Cunningham

Apr 1, 2026

CMU student Dani Wicklund recently joined the Leadership Development Institute & Counselors for Computing Program Workshop at the Pittsburgh Technology Council to lead an AI Security in Education session for 21 national trainer-educators.

As artificial intelligence becomes an everyday tool for students, a new challenge is emerging in classrooms: how to use these powerful systems responsibly.

At Carnegie Mellon University, students are stepping forward to help meet that challenge, bringing lessons on AI literacy and ethics directly into middle and high school environments.

Through CyLab’s outreach efforts, student ambassadors from CMU are working with learners and educators to build a deeper understanding of how AI systems function and where their limits lie. The initiative builds on the university’s long-running cybersecurity education program, picoCTF, expanding its scope to include responsible development and use of emerging technologies like artificial intelligence and blockchain.

Over the past academic year, picoCTF student ambassadors have led regular classroom sessions at Gateway High School, collaborated with students at Community Forge in Wilkinsburg, and supported teacher professional development workshops hosted on CMU’s campus. Additional outreach events throughout spring 2026 have extended the program’s reach even further, connecting with both students and educators navigating the rapid rise of AI tools.

The program’s approach is grounded in peer-to-peer learning. CMU students currently studying cybersecurity, artificial intelligence, and related fields bring both technical expertise and real-world perspective into the classroom. Their goal is not just to demonstrate what AI can do, but to help learners understand how it works and how to question it.

That shift in mindset is central to the program’s mission. Rather than treating AI systems as opaque or authoritative, students are encouraged to interrogate them: to ask how outputs are generated, why responses can sound convincing but still be wrong, and what risks come with over-reliance on automated tools.

CMU student Dante Cannestra gives a presentation to AI security educators at the Pittsburgh Technology Council's Leadership Development Institute & Counselors for Computing Program Workshop.

“Students are interacting with AI systems every day, often without a clear understanding of what’s happening behind the scenes,” said Hanan Hibshi, an Information Networking Institute faculty member, CyLab core faculty member, and principal investigator for CyLab’s K-12 cybersecurity teacher outreach initiatives. “Our goal is to demystify these technologies, so students can recognize their strengths, question their outputs, and make informed decisions about how to use them.”

In classroom sessions, ambassadors guide learners through hands-on exercises that reveal how AI systems can fail or be misused. Students might test how models respond to ambiguous prompts, observe how bias can appear in generated content, or explore how easily outputs can be manipulated.

“We’ve designed these experiences to remind users that AI is a partner, not a primary source. It is an incredible productivity tool, but the rule remains: never trust, always verify,” said Logan O’Brien, a CMU Master of Science in Information Security (MSIS) student.

The initiative also highlights connections between AI and more traditional cybersecurity concepts. By drawing parallels between vulnerabilities in software systems and risks in AI models, ambassadors help students see that principles like verification, threat modeling, and responsible design apply across technologies.

For many participants, the experience marks a noticeable shift. Students who once accepted AI-generated answers at face value begin to question them, compare sources, and seek validation.

From left: CMU students Dani Wicklund and Logan O'Brien recently visited the AP Computer Science class at Gateway High School to guide students through the “Inspector” web exploitation challenge, where they inspected web code to find clues and got a glimpse of where a passion for cybersecurity can take them.
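Challenges like “Inspector” ask students to read a page’s source rather than its rendered view, since clues are often tucked into parts of the markup a browser never displays. A minimal sketch of that idea in Python is below; the sample page and flag value are invented for illustration and are not taken from the actual challenge.

```python
import re

# A toy web page in the spirit of "Inspector"-style challenges: the clue is
# hidden in an HTML comment, which a browser parses but never renders.
# The flag string below is made up for this example.
sample_html = """
<html>
  <head><title>Welcome</title></head>
  <body>
    <h1>Nothing to see here</h1>
    <!-- developer note: picoCTF{example_flag} -->
  </body>
</html>
"""

def find_hidden_comments(html: str) -> list[str]:
    """Return the text of every HTML comment in the page source."""
    return [c.strip() for c in re.findall(r"<!--(.*?)-->", html, flags=re.DOTALL)]

for comment in find_hidden_comments(sample_html):
    print(comment)
```

In a real session, students would use the browser’s built-in view-source or developer tools to do the same inspection by hand; the point of the exercise is that what a page shows and what it contains are not the same thing.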

“The focus moves from simply completing tasks to understanding the reliability of the information they are using,” said Dani Wicklund, a Carnegie Mellon M.S. in Artificial Intelligence Engineering - Information Security (MSIAE-IS) student.

Educators, too, are finding value in the program. As AI tools become increasingly present in classrooms, teachers are looking for ways to integrate them thoughtfully while addressing concerns about accuracy, misuse, and academic integrity. Through workshops and in-class collaboration, CMU ambassadors provide accessible explanations and practical strategies for guiding students.

“Teachers are being asked to adapt quickly to technologies that are evolving even faster,” said Hibshi. “By working directly with educators, we can help bridge that gap and provide tools that make responsible AI education both practical and sustainable.”

These efforts are made possible by funding from the GenCyber outreach initiatives and the NCAE-C Outreach and Capacity Building program, which encourage higher education faculty to share new pedagogical insights with K-12 teachers so they can bring updated material into their classrooms. The U.S. National Science Foundation’s Secure and Trustworthy Cyberspace (SaTC) program also provided financial support.

By empowering its own students to teach and mentor, CMU is extending its impact far beyond campus. The ambassador model creates a multiplier effect, bringing expertise in cybersecurity and emerging technologies into classrooms where it can shape how the next generation understands and interacts with the digital world.

As AI continues to evolve, efforts like these are helping ensure that students are not just users of technology, but informed participants in shaping its future.

“We’re not just introducing students to new technologies,” said Megan Kearns, picoCTF program director. “We’re helping them build the mindset to engage with those technologies thoughtfully, securely, and responsibly as they continue to evolve.”