Study further explores perceptions of video analytics

Daniel Tkacik

Aug 19, 2021

Smile—you’re on camera! A report by IHS Markit estimates that there are one billion surveillance cameras around the world. Some are analyzing your facial expressions, while others are identifying you and tracking where you go.

In general, people are somewhat aware of these technologies and tend to have mixed feelings about them, according to new research out of Carnegie Mellon University’s CyLab. The new study was presented at last week’s Symposium on Usable Privacy and Security.

“It’s invasive and Big Brother-esque,” one participant said during an interview. “It can provide good information for law enforcement but is easily abusable.”

This general finding may not seem terribly surprising; after all, movies like Minority Report portray facial recognition as both a useful law enforcement tool and a technology that intrudes on our private lives. But CyLab’s Shikun “Aerin” Zhang, a Ph.D. student in the Language Technologies Institute who led the study, aimed to provide an in-depth qualitative analysis of these perceptions.

“The first step to address data privacy challenges of video analytics is to establish a baseline of privacy norms by understanding people's opinions and attitudes towards the technology,” Zhang says.

The authors believe their study is the first to present real-life facial recognition scenarios to participants in situ, that is, while they went about their regular everyday activities.

The researchers conducted a 10-day study that captured participants’ attitudes toward a range of facial recognition scenarios. Participants were instructed to download an app designed specifically for the study and then to go about their normal daily lives.

When the app’s GPS feature detected that a participant was in a place for which the researchers had plausible scenarios (e.g., a gym, a government building, or a coffee shop), the app would send a push notification prompting them to fill out a short survey about a facial recognition scenario pertaining to that location. At the end of each day, participants answered a summary survey about all the scenarios they had been presented with that day.
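The triggering mechanism can be pictured as a simple geofence check: on each GPS fix, the app tests whether the participant has entered a place with an associated scenario. The sketch below is a hypothetical illustration of that flow; the place names, coordinates, radii, and function names are assumptions for illustration, not the study app’s actual implementation.

```python
import math

# Hypothetical scenario locations; coordinates and radii are made up.
SCENARIO_PLACES = [
    {"name": "gym", "lat": 40.4433, "lon": -79.9436, "radius_m": 75},
    {"name": "coffee shop", "lat": 40.4442, "lon": -79.9530, "radius_m": 50},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_location(lat, lon, notified_today):
    """On each GPS fix, trigger a survey notification for any scenario
    place the participant has entered and not yet been surveyed about."""
    for place in SCENARIO_PLACES:
        if place["name"] in notified_today:
            continue
        if distance_m(lat, lon, place["lat"], place["lon"]) <= place["radius_m"]:
            notified_today.add(place["name"])
            print(f"Push notification: survey about facial recognition at the {place['name']}")
```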

All scenarios used were informed by an extensive survey of news articles about real-world deployments of facial recognition. For example, if a participant attended a church, they might have been presented with a scenario in which facial recognition was used to track their attendance. (Yes, that really happened.)

“Facial data are rather sensitive if they are collected without permission—without consent,” says Zhang. “People feel really strongly about that.”

Most participants reported that facial recognition could benefit security (identifying criminals or locating missing children and adults) as well as authentication (replacing IDs and confirming identity).

But most participants also raised concerns about other purposes for which facial recognition is being used, such as targeted advertising. Many were concerned about its use for profiling: “… using it to profile someone based on race or gender,” one participant wrote.

“And gender, there is such a spectrum, just because you’re female, that doesn’t mean you are going to wanna wear makeup or buy pretty dresses,” one participant said. “Same thing for guys. I just think lumping every person into a classification is over-generalized; you are going to miss people.”

Others voiced concern about not knowing for what purposes their facial data are being used.

“Businesses should be more transparent about the purpose for which they are measuring video analytics,” says Zhang.

Ultimately, the study finds that participants want transparency and control over the collection of their data, but that existing mechanisms for notifying people about data collection are inadequate.

“People feel strongly about video analytics, and right now there’s no way for people to know what’s being done with that data, and there’s no way for people to opt out of that data collection,” says Zhang. “One possible solution to that is the IoT Assistant.”

The IoT Assistant is an app, supported by a broader infrastructure, that lets people share the locations of IoT devices so that others are aware of where those devices are, what data they collect, and how to opt out of data collection when that option exists. The app is installed on users’ smartphones, and users can set up alerts to inform them about data collection practices occurring around them.
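As a rough mental model of how such a registry might work (the record fields, example entries, and URL below are hypothetical, not the IoT Assistant’s actual data format), one can imagine crowd-sourced device descriptions that the app matches against a user’s location:

```python
from dataclasses import dataclass

@dataclass
class DeviceRecord:
    """One crowd-sourced entry in a hypothetical device registry."""
    name: str
    location: str                    # e.g., "Main St. coffee shop"
    data_collected: list[str]        # e.g., ["video", "facial recognition"]
    opt_out_url: str | None = None   # None if no opt-out is offered

# Made-up registry entries for illustration only.
REGISTRY = [
    DeviceRecord("entrance camera", "Main St. coffee shop",
                 ["video", "facial recognition"],
                 opt_out_url="https://example.com/opt-out"),
    DeviceRecord("smart speaker", "hotel lobby", ["audio"]),
]

def alert_user(current_location: str) -> None:
    """Alert the user about every registered device at their location."""
    for device in REGISTRY:
        if device.location == current_location:
            print(f"Nearby: {device.name} collecting {', '.join(device.data_collected)}")
            if device.opt_out_url:
                print(f"  Opt out at {device.opt_out_url}")
            else:
                print("  No opt-out available")

alert_user("Main St. coffee shop")
```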

The IoT Assistant is part of the Personalized Privacy Assistant Project, an effort led by CyLab’s Norman Sadeh, a professor in the Institute for Software Research at CMU and a co-author of this study.

Paper reference

Facial Recognition: Understanding Privacy Concerns and Attitudes Across Increasingly Diverse Deployment Scenarios

  • Shikun “Aerin” Zhang, Carnegie Mellon University
  • Yuanyuan Feng, Carnegie Mellon University
  • Norman Sadeh, Carnegie Mellon University