Are the recent Facebook revelations a “privacy Chernobyl”?
Scottie Barsotti
Nov 9, 2021
In dramatic recent testimony and interviews, data scientist Frances Haugen—who has come to be known in the media as the “Facebook Whistleblower”—has brought to light internal research showing that Facebook was aware of potential societal harms related to misinformation, political extremism, hate speech, and mental health (especially among teenagers), harms driven by how content is created, distributed, and monetized on its platforms.
CyLab’s Alessandro Acquisti is too modest to say “We told you so.” But in a paper Acquisti published in 2005 with CMU colleague Ralph Gross—the very first peer-reviewed article studying what, at the time, was a brand new but rapidly growing online social network—he outlined the privacy concerns raised by Facebook. In lectures and presentations Gross and Acquisti started giving at the time, they described Facebook as a “privacy disaster waiting to happen.”
“Even then, there were significant concerns specific to privacy—in terms of how Facebook could collect data, and what they could then do with that data,” said Acquisti, a professor of information systems and public policy in Heinz College who pioneered the field of privacy economics. “Turns out those concerns were justified. If anything, Facebook privacy issues are even worse, in fact, than what we could have predicted 16 years ago.”
The question is: Will any of this matter? Will this be a “Big Tobacco” moment for Big Tech, with meaningful responses from lawmakers and/or consumers, or will we see a continuation of the status quo?
We caught up with Professor Acquisti to get his take.
This interview has been edited for length and clarity.
Heinz College: What do the recent revelations by Frances Haugen mean to consumers on Facebook’s family of apps?
Alessandro Acquisti: As important as the recent revelations are, they simply confirm what research on Facebook and social media has been telling us for quite a while: that, together with the undeniable value social media can create in terms of allowing communication exchanges and interactions between people, social media has also caused much harm. The recent revelations focus on specific harms to teenagers, though the harm is not just to teenagers but to entire vulnerable populations. What is happening now is akin to a slowly growing turmoil. This bubbling of the water is reaching a point where we can no longer afford not to pay attention to it. We can no longer pretend it’s not important.
HC: Do you think this is a “Big Tobacco” moment for Big Tech that will change public attitudes and spark tighter regulation?
AA: It is that kind of moment—but we have seen quite a few of these kinds of moments over the past few years. Early in my academic career in privacy, I would go to conferences, and people would talk about the upcoming “Privacy Chernobyl,” that is, a catastrophic moment of privacy self-realization that would finally sensitize consumers and change their mood and stance toward the amount of data that internet companies are able to collect.
Well, you know what? Since I started hearing that expression, I’d estimate there have now been a dozen or more Privacy Chernobyls—from the Edward Snowden revelations, to the alleged use of Facebook users’ personal data in the 2016 U.S. presidential elections, to larger and larger data breaches over time, and now the discovery that Facebook knew from internal research about the potential harms from use of its platform, and did not do anything about it.
Are we at a tipping point, now? Honestly, after so many tipping points in the past not changing anything, I’d hesitate to say that things will change now. But we cannot afford to ignore what is happening, as social scientists and policymakers. Hopefully, this gives regulators the impetus to do something.
Privacy is, in theory, a bipartisan issue (unlike climate change or immigration and other issues that can drive more ideological responses). In theory, this is an area where bipartisan agreement could finally bring about change. The problem is that the power of data companies in Washington and Brussels is significant. It is not clear to what extent well-intended regulatory initiatives can survive the lobbying process, or whether privacy laws will end up being passed in a form that truly reflects the original intent and desire of offering people a better way to manage their privacy online.
HC: What are some of the privacy concerns that these recent revelations have brought to light?
AA: In economic terms, we call these effects “privacy externalities”—that is, the collective societal ramifications of individual disclosures. There are risks to individuals when their data is collected, like teenagers who may be bombarded with information that is not beneficial or may even be harmful to them. However, there are also these larger societal harms arising from data-driven personalization algorithms: filter bubbles, amplification of disinformation, and political polarization. These all have downstream negative effects on democratic elections, and in the worst possible scenarios may lead to violence.
If you look at the genocide in Myanmar, a report commissioned by Facebook connected the collection of data on Facebook to data personalization algorithms creating echo chambers which eventually led to physical violence—in the real world! In the space of 15 years we’ve gone from privacy being a problem for individual consumers navigating their own comfort with companies knowing their information, to this thin red line connecting the collection of personal data to genocide. Now we know that the harm from privacy violations can be huge.
HC: If regulators demand Facebook turn over algorithms, data, or some other proprietary material, does that create any additional privacy concerns?
AA: That is not the case. I believe this concern is too often used as an excuse not to allow regulators to cast light on tech companies and their algorithmic black boxes. We have tools and methodologies that allow us to conduct auditing in a privacy-preserving manner. I doubt, in fact, that regulators would ask Facebook to pass fully identified data of their users to an agency with no supervision.
What would more likely happen is regulators applying various processes and tools for de-identification of the data, such as differential privacy, which allow manipulation of datasets to protect individual data while still allowing analytics. Algorithms, too, can be audited in a privacy-preserving manner.
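To make the idea concrete, here is a minimal sketch of the kind of technique Acquisti describes, using the Laplace mechanism from differential privacy on a hypothetical aggregate query. The dataset, the query, and the epsilon value are all illustrative assumptions, not a description of Facebook’s data or any regulator’s actual tooling.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Release a count over `records` with epsilon-differential privacy.

    A count has sensitivity 1: adding or removing one person changes it by
    at most 1. Laplace noise with scale 1/epsilon therefore masks any single
    individual's contribution while keeping the aggregate roughly accurate.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many users in a sample saw a given type of content?
users = [{"id": i, "saw_flagged_content": i % 3 == 0} for i in range(10_000)]
noisy = private_count(users, lambda u: u["saw_flagged_content"], epsilon=0.5)
print(round(noisy))  # close to the true count (~3,334), but no single user is exposed
```

The analyst (or auditor) learns the aggregate pattern; the noise makes it statistically difficult to infer whether any particular individual is in the data.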
HC: Do you believe large technology and data companies should be regulated?
AA: Yes. Economists who work in this field, by and large, tend to look at the direct intervention of policymakers into economic transactions and market interactions involving the so-called “data economy” with some degree of suspicion. But I believe that the enormous concentration of data we observe in the market, and the equally enormous concentration of power within the biggest players, almost begs for some degree of oversight. That could take different forms. We could have stricter auditing to make algorithmic black boxes less opaque. We could pursue antitrust initiatives to decrease the monopoly or oligopoly power that these entities have and how they use that power to squelch competition.
It could also take the form of stricter privacy regulations, giving more power to individuals over how their data is collected, used, and transferred to others. The General Data Protection Regulation (GDPR) in the EU is an example of this.
HC: What recommendations do you have for policymakers?
AA: I would advocate for the following:
- Data technology in the last few years has created lots of opportunities, but also a lot of risk. The opportunities could outweigh the risk if we allow and incentivize the design and deployment of privacy-enhancing technologies and privacy-preserving algorithms. What we have now that we didn’t have 20 years ago are technologies such as differential privacy and machine learning techniques like federated learning, which can allow socially beneficial analytics to continue while protecting individual data much better than we do now (see the sketch after this list).
- We need to rethink the very policy and infrastructure of the Internet to make privacy an ingrained feature, by default. This could be done without significant damage to flows of data.
- We cannot rely on market forces by themselves to achieve the level of deployment of privacy technologies we need, because individual players may not have the incentives to take action on their own. Entities that collect data may not see adoption of these privacy-enhancing technologies as necessary or valuable, so policymakers must nudge them in the right direction by creating incentives.
- Legal scholars can determine whether oversight and regulation should be done legislatively or through executive branch agencies. No matter the legal mechanisms, regulatory solutions have to come from the federal level, not the state level. Relying on states to create their own online privacy laws would leave us at square one, and create an environment that is chaotic for users and for companies.
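The federated learning mentioned in the first recommendation can likewise be sketched in a few lines. The toy linear model, the client data, and the training parameters below are hypothetical; the point is only that each client trains on its own data and shares model parameters, never raw records, with the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's training pass: gradient steps on its own data only."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server step: average the clients' locally trained weights.

    Only model parameters travel to the server; the raw data (X, y)
    never leaves each client's device.
    """
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(local_ws, axis=0)

# Hypothetical setup: three clients, each holding private data from the same trend.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):  # communication rounds
    w = federated_average(w, clients)
print(w)  # approaches [2.0, -1.0] without pooling anyone's raw data
```

Production systems add encryption, secure aggregation, and often differential privacy on top of this pattern, but the division of labor (local training, central averaging) is the core idea.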
HC: Why should Heinz College students care about this issue?
AA: I believe these issues should be central to our students—and for many, they are. As users of these services and citizens of the world, they have a stake in how social media affects our society and their world. And as students, there are few scenarios that are more emblematic of the importance of being able to understand the overlaps of technology, data, and policy than what we’re discussing here.