2021 Facebook leak


In 2021, an internal document leak from the company then known as Facebook showed it was aware of harmful societal effects from its platforms, yet persisted in prioritizing profit over addressing these harms. The leak, released by whistleblower Frances Haugen, resulted in reporting by The Wall Street Journal in September, published as The Facebook Files series, and in the Facebook Papers, published by a consortium of news outlets the following month.
Primarily, the reports revealed that, based on internally commissioned studies, the company was fully aware of negative impacts on teenage users of Instagram and of the contribution of Facebook activity to violence in developing countries. Other findings included the role of the company's platforms in spreading false information and Facebook's practice of promoting inflammatory posts.
Facebook was also aware that its algorithms were pushing harmful content to young users, including posts promoting anorexia nervosa and photos of self-harm.
In October 2021, Whistleblower Aid filed eight anonymous whistleblower complaints with the U.S. Securities and Exchange Commission on behalf of Haugen alleging securities fraud by the company, after Haugen leaked the company documents the previous month. After publicly revealing her identity on 60 Minutes, Haugen testified before the U.S. Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security about the content of the leaked documents and the complaints. After the company renamed itself as Meta Platforms, Whistleblower Aid filed two additional securities fraud complaints with the SEC against the company on behalf of Haugen in February 2022.
In response to the media fallout, Facebook executives gave press interviews to present the company's position. Facebook also did internal damage control, addressing employees through in-person sessions and memos. The company then rebranded, changing its logo and its corporate name to Meta.

Background

In mid-September 2021, The Wall Street Journal began publishing articles on Facebook based on internal documents of unknown provenance. Revelations included reporting of special allowances on posts from high-profile users, subdued responses to flagged information on human traffickers and drug cartels, a shareholder lawsuit concerning the cost of Facebook CEO Mark Zuckerberg's personal liability protection in resolving the Cambridge Analytica data scandal, an initiative to increase pro-Facebook news within user news feeds, and internal knowledge of how Instagram exacerbated negative self-image in surveyed teenage girls.
Siva Vaidhyanathan wrote for The Guardian that the documents were from a team at Facebook "devoted to social science and data analytics that is supposed to help the company's leaders understand the consequences of their policies and technological designs." Casey Newton of The Verge wrote that it was the company's biggest challenge since the Cambridge Analytica data scandal.
The leaked documents include internal research from Facebook that studied the impact of Instagram on teenage mental health. Although Facebook had earlier claimed that its rules applied equally to everyone on the platform, internal documents shared with The Wall Street Journal pointed to special policy exceptions reserved for VIP users, including celebrities and politicians. After this reporting, Facebook's oversight board said it would review the system.
On October 3, 2021, the former Facebook employee behind the leak, Frances Haugen, revealed her identity in a 60 Minutes interview in which she detailed the harm Facebook knowingly allowed on its platform. In the interview, she explained that watching a friend fall under the influence of online propaganda was what pushed her to speak out about Facebook's misdoings.

The reports

Beginning October 22, a group of news outlets began publishing articles based on documents provided by Haugen's lawyers, collectively referred to as The Facebook Papers. These articles detailed the harms the documents indicated Facebook had known about and tolerated.

2020 U.S. elections and January 6 U.S. Capitol attack

The New York Times pointed to internal discussions in which employees raised concerns that Facebook was spreading content about the QAnon conspiracy theory more than a year before the 2020 United States elections. After the election, a data scientist noted internally that 10 percent of all U.S. views of political content were of posts alleging that the election was fraudulent. Among the ten anonymous whistleblower complaints Whistleblower Aid filed with the SEC on behalf of Haugen, one alleged that Facebook misled its investors and the general public about its role in perpetuating misinformation related to the 2020 elections and the political extremism that caused the January 6 United States Capitol attack. Haugen was employed at Facebook from June 2019 until May 2021, starting on the company's Civic Integrity Team, which focused on investigating and addressing worldwide elections issues on the platform, including how the platform could be used to spread political disinformation and misinformation, incite violence, and be abused by malicious governments, until the company dissolved the team in December 2020.
In the weeks after the 2020 U.S. presidential election, Facebook began rolling back many content policy enforcement measures it had in place during the election despite internal company tracking data showing a rise in policy-violating content on the platform, while Donald Trump's Facebook account had been whitelisted in the company's XCheck program. Another of the whistleblower complaints Haugen filed with the SEC alleged that the company misled investors and the general public about enforcement of its terms of service due to such whitelisting under the XCheck program. Haugen was interviewed by videoconference by the U.S. House Select Committee on the January 6 Attack in November 2021 about her tenure at Facebook, the company documents she provided to Congress, the company's corporate structure, and her testimony before Congress the previous month, but none of the information she provided to the Committee was included in its final report.

Instagram's effects on teenagers

The Files show that Facebook had been conducting internal research since 2018 into how Instagram affects young users. While the findings pointed to Instagram being harmful to a large portion of young users, teenage girls were among the most harmed. Researchers within the company reported that "we make body image issues worse for one in three teen girls". Internal research also found that teen boys were affected by negative social comparison, with 14% of boys in the U.S. reporting such effects in 2019. The research concluded that Instagram contributed to problems specific to its use, such as social comparison among teens. Facebook published some of its internal research on September 29, 2021, saying the reports had mischaracterized the purpose and results of the research.

Studying preteens

The Files show that Facebook formed a team to study preteens, set a three-year goal to create more products for this demographic, and commissioned strategy papers about the long-term business prospects of attracting preteens. Facebook's research included studies on tweens' usage of social media apps and parents' responses to it. Federal privacy laws restrict data collection on children under 13 years old. Internal documents from April 2021 showed plans to make apps targeting children aged 6 to 17; by September, the head of Instagram had announced that development of those apps was being halted. A 2020 Facebook document asks, "Why do we care about tweens?" and answers that "They are a valuable but untapped audience."

Violence in developing countries

An internal memo seen by The Washington Post revealed that Facebook had been aware of hate speech and calls for violence on its platform in India against groups such as Muslims and Kashmiris, including posts featuring photos of piles of dead Kashmiri bodies with glorifying captions; even so, none of the accounts posting them were blocked. Documents show that Facebook responded to these incidents by removing posts that violated its policies, but made no substantial effort to prevent repeat offenses. With 90% of monthly Facebook users now located outside the United States and Canada, Facebook has claimed that language barriers are one obstacle preventing more widespread reform.

Promoting anger-provoking posts

In 2015, in addition to the Like button on posts, Facebook introduced a set of other emotional reaction options: love, haha, yay, wow, sad, and angry. The Washington Post reported that for three years, Facebook's algorithms promoted posts receiving the new reactions, giving them a score five times that of traditional likes. Years later, Facebook's researchers found that posts with 'angry' reactions were much more likely to be toxic, polarizing, fake, or low quality. Ignoring frequent internal calls to act, the company did not differentiate the 'angry' reaction from other reactions until September 2019, when its value was cut to zero. There were other cases in which Facebook prioritized new features it wanted to promote even though they turned out to amplify toxic or radicalizing material.
In 2018, Facebook overhauled its News Feed ranking, implementing a new algorithm that favored "Meaningful Social Interactions", or "MSI". The new algorithm increased the weight of reshared material, a move which aimed to "reverse the decline in comments and encourage more original posting". While the algorithm succeeded in that aim, side effects were observed: users reported declining feed quality, and anger on the site increased. Leaked documents reveal that employees presented several potential changes to fix some of these issues with the algorithm. However, the documents state that Mark Zuckerberg rejected the proposed changes out of concern that they might cause fewer users to engage with Facebook. The documents also point to a 2019 study in which Facebook created a fake account based in India and tracked what content it was presented with and interacted with. Within three weeks, the fake account's news feed was being presented with pornography and was "filled with polarizing and graphic content, hate speech and misinformation", according to an internal company report.