2024
October
Carley discusses how misinformation can spread
Associated Press
CyLab’s Kathleen Carley was quoted by the Associated Press about the dangers of misinformation, particularly pertaining to false claims of election fraud and how that may impact overseas voters. “Laying the groundwork for a conspiracy theory means that you need to weave many claims together,” Carley explained. “In that sense, this story about UOCAVA lays the groundwork for, and would help substantiate, a conspiracy theory around Democrats stealing the election.”
September
Brumley discusses social security number breach and cybersecurity
Time
ECE’s David Brumley spoke with Time about the hack in April that resulted in SSN data from the NPD being leaked. “We are not talking about a startup here,” Brumley said. “Looking forward, we have to have higher standards for the custodians of our data.”
August
Fanti and Sowon suggest improving the way governments regulate technology
Atlantic Council
ECE/CyLab’s Giulia Fanti and CyLab’s Karen Sowon were among the co-authors of an article in Atlantic Council suggesting ways in which governments could more effectively regulate new technologies while avoiding setbacks. They detail efforts of governments around the world that have had unintended consequences affecting millions of people, and argue that these consequences might have been avoided had policymakers approached the problems the way product designers do, balancing security, privacy, and usability.
July
Carley discusses new tools designed to curb the spread of disinformation online
Pittsburgh Post-Gazette
CyLab/EPP’s Kathleen Carley spoke with the Pittsburgh Post-Gazette about the need to improve detection of inaccurate or misleading content online. In recent years, digital spaces have seen a significant increase in the spread of disinformation and the use of deepfake technology. New detection tools such as Trustnet and MUSE aim to help users spot inaccurate information, but the process still depends on the trustworthiness of intermediate messengers. “There has been an increased use of deepfakes and disinformation in every election this year around the world,” Carley said. “It’s not only a U.S. problem.”
May
Lisanti listed as an innovative connector for tech in the Pittsburgh region
Technical.ly
CyLab’s Michael Lisanti has been recognized as one of Pittsburgh’s 20 RealList Connectors—people who bring together innovators, educators, and locals to help them make personal and professional connections. Lisanti was recognized for connecting tech companies with the university in his role as director of partnerships for CyLab.
April
Cranor quoted on “broadband nutrition labels” for internet service
Marketplace
CyLab Director Lorrie Cranor was quoted on the introduction of “broadband nutrition labels” for internet service. The FCC now requires large internet service providers to post a snapshot indicating what users are paying for and what they're getting, mimicking nutrition labels on food products. Cranor notes that users may still struggle to understand these labels. “Don’t just give me numbers, tell me how good is this? If I want to play video games, if I want to stream movies,” Cranor said.
Carley discusses the rise of “pink slime” websites
Financial Times
CyLab/EPP’s Kathleen Carley discusses the findings of her research on the rise of “pink slime” websites. These sites look similar to legitimate local news outlets, but are heavily tied to a dark network of lobbying groups and political operatives, pushing highly partisan stories on as many platforms as they can reach. Carley recently found that pink slime sites have received increasing funding since the 2022 midterms. “A lot of these sites have had makeovers and look more realistic,” Carley explained to Financial Times. “I think we’ll be seeing a lot more of that moving forward.”
March
Carley warns about online disinformation, its role in public health crises, and its sources
Forbes
CyLab/EPP’s Kathleen Carley recently led a study investigating the origins of conspiratorial tweets, and she spoke with Forbes about its findings: 82% of the more than 200 million tweets analyzed were driven by bot activity resembling state-sponsored disinformation campaigns. She says: “We do know that it looks like it’s a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that.” This disinformation sharpens political polarization and erodes trust in public health institutions, correlating with a drop in vaccination rates and a rise in cases of diseases such as measles.
Cranor speaks on internet fraud in TribLive
TribLive
CyLab Director Lorrie Cranor speaks on internet fraud in TribLive. With financial scams on the rise, affecting people of all ages and education levels, Cranor warns readers about the numerous types of scams and tricks used to steal money or personal data. “If you get any message through any channel that says you should transfer money or buy gift cards, it’s a scam,” she says.
Cranor discusses cybersecurity labels for smart devices
News 5 Cleveland
CyLab Director Lorrie Cranor discusses the idea of creating cybersecurity labels for smart devices. The Federal Communications Commission (FCC) recently approved a new labeling system in which smart devices proven safe by accredited labs would be labeled with a Cyber Trust Mark, similar to the way an Energy Star logo indicates energy efficiency. “By having these labels, the hope is that it will kind of raise the bar because companies are going to be upfront about this,” Cranor tells News 5 Cleveland. “And you know they’re not going to want to look bad. So, they’re going to have some incentive to actually improve their security and privacy.”
February
Pasareanu quoted on safety of driverless cars
Quanta Magazine
CyLab/ECE’s Corina Pasareanu was quoted in Quanta Magazine regarding recent efforts to guarantee the reliability of perception systems in autonomous vehicles. A team of researchers at the University of Illinois Urbana-Champaign aims to mathematically prove the reliability of these systems by quantifying the margin of error and ensuring that margin remains within a safe limit. “Their method of providing end-to-end safety guarantees is very important,” Pasareanu said.
Carley speaks about AI robocalls following New Hampshire primary election
AP News
CyLab/EPP’s Kathleen Carley was quoted in AP News on the use of AI-generated voices in robocalls. In particular, she discussed the recent robocalls that used an AI-generated likeness of President Joe Biden’s voice to discourage people from voting in the New Hampshire primary election. The technology to mimic human voices is “well understood and it makes standard mistakes,” she said. “But that technology will get better.” Her comments on the matter were also quoted in Mashable and The Hill.
Carley speaks about AI technology’s role in the upcoming election
Computer World
CyLab/EPP’s Kathleen Carley speaks to Computer World about AI technology’s role in the upcoming election. Given the vast capabilities of AI, she advises social media companies in particular to take precautions against AI-generated content while preserving the discourse surrounding the election. “AI technologies are constantly evolving, and new safeguards are needed,” Carley said. “Also, AI could be used to help by identification of those spreading hate, identification of hate-speech, and by creating content that aids with voter education and critical thinking.”
Cranor speaks about Roblox privacy concerns
USA Today
CyLab Director Lorrie Cranor spoke with USA Today about the spotlight cast on Roblox’s privacy settings. While it was established that the online game doesn’t collect precise location data, Cranor noted that sharing personal information in chats could leave users vulnerable. She recommended that parents check the privacy settings on the platform for younger users.
2023
December
Cranor comments on House speaker’s porn-monitoring software
The Washington Post
CyLab Director Lorrie Cranor comments on House speaker Mike Johnson’s porn-monitoring software in The Washington Post. The software scans all activity on any device and sends a report to an “accountability partner”; Johnson’s is his 17-year-old son. Cranor is “concerned about a government official using it knowingly on his own devices, as it may expose potentially sensitive information to a third-party service provider or even his 17-year-old son.”
Cranor shares thoughts on smartphones and their listening capabilities
WTAE
CyLab Director Lorrie Cranor shares her thoughts on smartphones and their listening capabilities on WTAE. “Your phones are mostly not listening to you, but they could be listening to you,” she says. She recommends checking which apps on your phone have access to the microphone, since microphone access is how a phone could listen in.
Hibshi gives input on Municipal Water Authority of Aliquippa cyberattack
WTAE-TV
CyLab/INI’s Hanan Hibshi gives input on the Municipal Water Authority of Aliquippa cyberattack on WTAE-TV. “We’re going to see more attacks, and I think that, unfortunately, lots of parties that are not involved in the conflict can get affected,” Hibshi says with regard to the Israel-Hamas war.
Carley comments on Amazon Lex
InfoWorld
CyLab/EPP’s Kathleen Carley comments on Amazon’s chatbot, Lex, in InfoWorld. “The key is that putting a large language model into Lex means that if you build or interact with an Amazon Lex bot, it will be able to provide more helpful, more natural human-sounding, and possibly more accurate responses to standard questions,” Carley says. “Unlike the old style analytic system, these bots are not task focused and so can do things other than follow a few preprogrammed steps.”
November
Brumley talks about Biden’s plan for ethical hacking
Axios
ECE’s David Brumley spoke to Axios about Biden’s plan for ethical hacking for AI safety. The president signed an executive order that will allow ethical hackers to find flaws in artificial intelligence tools, providing a safe way to test new AI algorithms. However, Brumley told Axios that “companies and policymakers need to shift their attention to the algorithms and data sources at the heart of the models, rather than the outputs.”
Brumley gives input on recent executive order addressing AI security risks
CyberScoop
ECE’s David Brumley gives his input on the recent executive order from the White House that addresses AI security risks in CyberScoop. He expresses concern that the EO is directing safety measures to agencies that may not have the experience or capacity to carry them out. “They’re relying on very traditional government agencies like NIST that have no expertise in this,” he says.
Fredrikson comments on the manipulability of new AI chatbots
The Washington Post
CyLab’s Matt Fredrikson comments on the manipulability of new AI chatbots. These programs use large language models that can easily be taken advantage of by attackers. “One approach is to limit the instructions these models can accept, as well as the data they can access,” Fredrikson says. “Another is to try to teach the models to recognize malicious prompts or avoid certain tasks. Either way, the onus is on AI companies to keep users safe, or at least clearly disclose the risks.”
October
Agarwal featured in Nature article about campus surveillance
Nature
CyLab’s Yuvraj Agarwal was featured in a Nature article about campus surveillance. While many protest the use of sensors in classrooms and work environments, Agarwal contends that the goal is to develop smart buildings without violating occupants’ privacy. “For years, buildings have had sensors that are used to control the lights and the heating,” he says. “It’s the bread and butter of building operations. With these types of sensor, there is no visibility on where they are and what they do—but our system has been designed with privacy and security attributes from the start. We give users complete control, so they can decide from the start what they want the sensors to do.”