How to Fix Facebook, According to Its Own Employees


A person with a wrench.

Illustration: Elena Lacey; Getty Images

The Facebook Papers

Thousands of internal documents show the world's biggest social media company often ignored warnings about its deepest problems.

In December 2019, as Facebook was bracing for the looming chaos of the 2020 election, a post appeared on its internal discussion site. "We are responsible for viral content," the title declared. The author walked through the ways in which Facebook's algorithmic design helps low-quality content go viral, concluding with some recommendations. Among them: "Rather than optimizing for engagement and then trying to remove bad experiences, we should optimize more precisely for good experiences."

That might sound obvious: optimize for good experiences. And yet Facebook's disinterest in doing so is a persistent theme in The Facebook Papers, internal documents revealed by Frances Haugen, the former employee turned whistleblower who recently testified before Congress. The files, first reported on by the Wall Street Journal, were included in disclosures made to the Securities and Exchange Commission by Haugen and provided to Congress in redacted form by her legal counsel. The redacted versions were reviewed by a consortium of news organizations, including WIRED.

They reveal Facebook's own employees agonizing over the fact that, in their view, its central algorithms reward outrage, hate, and viral clickbait, while its content moderation systems are deeply inadequate. The documents are also full of thoughtful suggestions for how to correct those flaws. Which means there is good news for Facebook and Mark Zuckerberg in the files, if they choose to see it: a blueprint for how to fix some of the company's biggest problems.

Try Making Your Products Good

Quite a few Facebook employees seem to agree that the company has failed to pursue any positive value besides user engagement. Sometimes this is framed explicitly, as in a document published in 2020 with the title "When User-Engagement ≠ User-Value." After explaining why keeping users glued to Facebook or Instagram isn't always good for them, the author considers possible solutions. "A strong quality culture probably helps," they conclude, in what reads as dry understatement. The author goes on to cite the example of WhatsApp, which Facebook acquired in 2014, as a company that built a successful platform not by testing features to optimize for engagement but by making "all their product decisions just based on their perceptions of user quality."

In other files, researchers only indirectly acknowledge how little attention company leadership pays to factors besides engagement when making product changes. It's treated as so obvious a fact that it doesn't require explanation, not just by the authors, but in the extensive discussions with fellow employees that follow in the comments section. In a discussion thread on one 2019 internal post, someone suggests that "if a product change, whether it's promoting virality, or increasing personalization, or whatever else, increases the severe harms we're able to measure (known misinfo, predicted hate, etc.), we should think twice about whether that's actually a good change to make." In another 2019 post, a researcher describes an experiment in which Facebook's recommendations sent a dummy account in India "into a sea of polarizing, nationalistic messages," including graphic violence and photos of dead bodies. The author wonders, "Would it be valuable for product teams to engage in something like an 'integrity review' in product launches (eg think of all the worst/most likely negative impacts that could result from new products/features and mitigate)?"

It's almost cliché at this point to accuse Facebook of ignoring the impact its products have on users and society. The observation hits a little harder, however, when it comes from inside the company.

Facebook rejects the allegation. "At the heart of these stories is a premise which is false," said spokesperson Kevin McAlister in an email. "Yes, we're a business and we make profit, but the idea that we do so at the expense of people's safety or wellbeing misunderstands where our own commercial interests lie."

On the other hand, the company recently fessed up to the very criticism from the 2019 documents. "In the past, we didn't address safety and security challenges early enough in the product development process," it said in a September 2021 blog post. "Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it." McAlister pointed to Live Audio Rooms, introduced this year, as an example of a product rolled out under this process.

If that's true, it's a good thing. Similar claims made by Facebook over the years, however, haven't always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI Can’t Fix Everything

On Facebook and Instagram, the value of a given post, group, or page is largely determined by how likely you are to look at, Like, comment on, or share it. The higher that probability, the more the platform will recommend that content to you and feature it in your feed.
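
To make the mechanics concrete, here is a minimal sketch, in Python, of what ranking by predicted engagement can look like. The signal names, weights, and class layout are invented for illustration; Facebook's production models are, of course, far more elaborate.

```python
# Minimal sketch of engagement-based ranking. The signals and weights
# are illustrative assumptions, not Facebook's actual model.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    # Predicted probabilities that a given user takes each action,
    # assumed to come from some upstream ML model.
    p_view: float
    p_like: float
    p_comment: float
    p_share: float

def engagement_score(post: Post) -> float:
    """Score a post as a weighted sum of predicted engagement events."""
    return (0.1 * post.p_view
            + 1.0 * post.p_like
            + 4.0 * post.p_comment
            + 8.0 * post.p_share)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts by descending predicted engagement."""
    return sorted(candidates, key=engagement_score, reverse=True)
```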

But what gets people's attention is disproportionately what enrages or misleads them. This helps explain why low-quality, outrage-baiting, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that "low integrity Pages" get most of their followers through News Feed recommendations. Another recounts a 2019 experiment in which Facebook researchers created a dummy account, named Carol, and had it follow Donald Trump and a few conservative publishers. Within days the platform was encouraging Carol to join QAnon groups.

Facebook is aware of these dynamics. Zuckerberg himself explained in 2018 that content gets more engagement as it gets closer to breaking the platform's rules. But rather than reconsidering the wisdom of optimizing for engagement, Facebook's answer has mostly been to deploy a mix of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely considered world-class; a February blog post by chief technology officer Mike Schroepfer claimed that, for the last three months of 2020, "97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it."

The internal documents, however, paint a grimmer picture. A presentation from April 2020 notes that Facebook removals were reducing the overall prevalence of graphic violence by only about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by the Wall Street Journal, is even more pessimistic. In it, company researchers estimate "that we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it."

Those stats don't tell the whole story; there are ways to reduce exposure to bad content besides takedowns and demotions. Facebook argues, fairly, that the overall prevalence of offending content matters more than the takedown rate, and says it has reduced hate speech by 50 percent over the past three quarters. That claim is of course impossible to verify. Either way, the internal documents make clear that some of the company's public statements exaggerate how well it polices its platforms.

Taken together, the internal documents suggest that Facebook's core approach, ranking content by engagement and then tuning other knobs to filter out various categories after the fact, simply doesn't work very well.

One promising alternative would be to focus on what several of the internal documents refer to as "content-agnostic" changes. This is an approach that looks for patterns associated with harmful content, then makes changes to crack down on those patterns, rather than trying to scan posts to find the offending content itself. A simple example is Twitter prompting users to read an article before retweeting it. Twitter doesn't need to know what the article is about; it just needs to know whether you've clicked the link before sharing it. (Facebook is testing a version of this feature.) Unlike policies that target a certain category, like politics or health information, a content-agnostic change applies equally to all users and posts.
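
Here is a minimal sketch of that kind of friction rule, assuming a hypothetical log of link clicks. Nothing in it examines what the article says; it keys only on the user's own behavior.

```python
# Sketch of a content-agnostic friction rule in the spirit of Twitter's
# read-before-retweet prompt. The click log and function name are
# hypothetical, for illustration only.
def should_prompt_before_share(user_id: str, url: str,
                               click_log: set[tuple[str, str]]) -> bool:
    """Prompt the user if they are about to share a link they never opened."""
    return (user_id, url) not in click_log

# Usage: the platform records clicks, then consults the rule at share time.
clicks = {("alice", "https://example.com/story")}
assert not should_prompt_before_share("alice", "https://example.com/story", clicks)
assert should_prompt_before_share("bob", "https://example.com/story", clicks)
```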

Facebook already does this to some extent. In 2018, it changed the algorithm to prioritize "meaningful social interactions" between users. Optimizing for MSI meant, for example, that posts generating a lot of comments (or, for that matter, angry-face emoji) would get a big boost in the News Feed. As the Wall Street Journal reported in September, the shift had dreadful side effects: It provided big boosts to sensationalist and outrage-provoking pages and posts, which in turn raised the pressure on publishers and politicians to cater to the lowest common denominator. (This isn't shocking when you consider what kinds of posts generate the liveliest comment threads.) It was, in other words, a bad content-agnostic change. Particularly problematic was a component called "downstream MSI," which refers not to how engaging you will find a post but to how likely you are to reshare it so that other people engage with it. Researchers found that, for whatever reason, the downstream MSI metric "was contributing hugely to misinfo."

To Facebook's credit, documents show that in 2020 the company tried to tackle the problem. It stopped ranking by downstream MSI for civic- and health-related content, a move that researchers predicted would cut down on "civic misinformation" by 30 to 50 percent. More recently, McAlister said, it turned the downstream models off "for crime and tragedy content, in some at-risk regions (e.g. Afghanistan), and for content about COVID." But the company could still go further. According to an April 2020 document, a member of the integrity team pitched Zuckerberg on jettisoning downstream MSI across the board, but the CEO was loath to "go broad" with the change "if there was a material tradeoff with MSI impact," meaning a loss in engagement.
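
One way to picture why the downstream term can be switched off selectively: if it enters the score as a separate, weighted component, zeroing the weight for certain categories is a one-line change. The weights and category names below are illustrative assumptions, not Facebook's actual values.

```python
# Sketch of MSI-style scoring with a separable "downstream" term and
# a category-based switch-off. All numbers and names are invented.
SENSITIVE_CATEGORIES = {"civic", "health"}  # downstream term disabled here

def msi_score(p_comment: float, p_reaction: float,
              p_downstream_engagement: float, category: str) -> float:
    """Direct engagement plus, optionally, predicted downstream engagement,
    i.e., engagement expected on reshares of the post."""
    direct = 4.0 * p_comment + 1.0 * p_reaction
    downstream_weight = 0.0 if category in SENSITIVE_CATEGORIES else 8.0
    return direct + downstream_weight * p_downstream_engagement
```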

An even bigger red flag than downstream MSI, according to the documents, are what the company calls "deep reshares": posts that end up in your feed after someone shares them, and then someone else shares that person's share, and so on. One January 2020 research paper reports that "deep reshares of photos and links are 4 times as likely to be misinformation, compared to photos and links seen generally." Another internal report, from 2019, describes an experiment suggesting that disabling deep reshares would be twice as effective against photo-based misinformation as disabling downstream MSI. But Facebook only turns down recommendations of deep reshares "sparingly," McAlister said, because the technique is "so blunt, and reduces positive and completely benign speech alongside potentially inflammatory or violent rhetoric."
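
A sketch of what demoting deep reshares could look like, assuming each post carries a counter of how many share hops it has traveled. The threshold and penalty values are invented for illustration.

```python
# Sketch of a "deep reshare" demotion. Assumes share_depth is tracked:
# 0 = original post, 1 = a share, 2 = a share of a share, and so on.
MAX_BOOSTED_DEPTH = 1   # shares-of-shares and deeper get demoted
DEPTH_PENALTY = 0.5     # multiplicative demotion per extra hop

def reshare_multiplier(share_depth: int) -> float:
    """Downweight a post the further it sits down a reshare chain."""
    excess = max(0, share_depth - MAX_BOOSTED_DEPTH)
    return DEPTH_PENALTY ** excess

def adjusted_score(base_score: float, share_depth: int) -> float:
    return base_score * reshare_multiplier(share_depth)
```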

Here's one last simple example. It turns out that a small subset of users account for a huge share of group invitations, sending out hundreds or thousands per day. Groups are a key source of what appears in the News Feed, making them an efficient way to spread conspiracy theories or incitements to violence. One 2021 document notes that 0.3 percent of members of Stop the Steal groups, which were dedicated to the false claim that the 2020 election was rigged against Donald Trump, made 30 percent of invitations. These super-inviters had other signs of spammy behavior, including having half of their friend requests rejected. Capping how many invites and friend requests any one user can send out would make it harder for a movement like that to go viral before Facebook can intervene.
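
A cap of that kind is an ordinary rate limit. A minimal sketch, with a hypothetical daily limit:

```python
# Sketch of a per-user daily cap on group invites, the kind of blunt
# rate limit the documents suggest. The cap value is an assumption.
from collections import defaultdict
from datetime import date

DAILY_INVITE_CAP = 50  # hypothetical limit

class InviteLimiter:
    def __init__(self, cap: int = DAILY_INVITE_CAP):
        self.cap = cap
        self.counts: dict[tuple[str, date], int] = defaultdict(int)

    def try_send_invite(self, user_id: str, today: date) -> bool:
        """Allow the invite only if the sender is under today's cap."""
        key = (user_id, today)
        if self.counts[key] >= self.cap:
            return False
        self.counts[key] += 1
        return True
```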

It's possible that even more radical reform is needed, though, to truly fix the feed. In her congressional testimony, Haugen argued for replacing engagement-based ranking with pure reverse chronology: the top of your feed would simply be the latest post made by someone you follow.
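
That proposal is simple enough to express in a few lines. This sketch assumes only an author and a timestamp on each post:

```python
# Minimal sketch of a reverse-chronological feed: no predicted
# engagement, just recency among followed accounts.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FeedPost:
    author_id: str
    created_at: datetime
    text: str

def chronological_feed(posts: list[FeedPost],
                       following: set[str]) -> list[FeedPost]:
    """Newest-first posts from accounts the user follows; nothing else."""
    return sorted((p for p in posts if p.author_id in following),
                  key=lambda p: p.created_at, reverse=True)
```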

An October 2019 post by Jeff Allen, then a Facebook data scientist, argues for yet another approach: ranking content according to quality. That may sound improbable, but as Allen points out in the white paper, which he posted right before leaving the company and which was first reported by MIT Tech Review, it's already the basis of the world's most successful recommendation algorithm: Google Search. Google conquered the internet because its PageRank algorithm sorted websites not just by the crude metric of how often the search terms appeared, but by whether other prominent sites linked to them, a content-agnostic metric of reliability. Today, Google uses PageRank along with other quality metrics to rank search results.
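
For the curious, here is the textbook power-iteration form of PageRank over a toy link graph. It illustrates the principle, not Google's production system.

```python
# Minimal power-iteration PageRank. A site's rank is fed by the ranks
# of the sites linking to it, so a link from a prominent site confers
# more authority than a link from an obscure one.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    """links maps each site to the list of sites it links to."""
    sites = list(links)
    n = len(sites)
    rank = {s: 1.0 / n for s in sites}
    for _ in range(iters):
        new_rank = {s: (1.0 - damping) / n for s in sites}
        for src, outs in links.items():
            if not outs:  # dangling node: spread its rank evenly
                for s in sites:
                    new_rank[s] += damping * rank[src] / n
            else:
                for dst in outs:
                    new_rank[dst] += damping * rank[src] / len(outs)
        rank = new_rank
    return rank

# Toy usage: "a" collects links from both other sites and ranks highest.
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```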

Facebook already crawls the web and assigns quality scores to websites, known as Graph Authority, which the company incorporates into rankings in certain cases. Allen suggests that Graph Authority should replace engagement as the main basis of recommendations. In his post, he posits that this would obliterate the problem of sketchy publishers devoted to gaming Facebook rather than investing in good content. An algorithm optimized for trustworthiness or quality would not allow the fake-news story "Pope Francis Shocks World, Endorses Donald Trump for President" to rack up millions of views, as it did in 2016. It would kneecap the teeming industry of pages that post unoriginal memes, which according to one 2019 internal estimate accounted at the time for as much as 35 to 40 percent of Facebook page views within News Feed. And it would provide a boost to more respected, higher-quality news organizations, which could surely use it. (Disclosure: I'm confident this includes WIRED.)

These sorts of changes to Facebook's ranking algorithm would address problematic content on the supply side, not the demand side. They would largely sidestep claims of censorship, though not entirely. (Republican politicians often accuse Google of biased search results.) And because they don't depend on language analysis, they should scale more easily than AI content moderation to markets outside the US. Which brings us to the next lesson from Facebook's employees.

Stop Treating People in Developing Countries as Second-Class Users

The most important findings in the internal documents concern Facebook's lack of investment in safety and integrity in much of the non-English-speaking world, where the vast majority of its users live. While Facebook often claims that more than 90 percent of hate speech removals happen proactively (that is, through its AI systems), that figure was only 0.2 percent in Afghanistan as of January 2021, according to an internal report. The picture is similar in other developing countries, where Facebook appears unwilling to spend what it takes to build adequate language models.

Arabic is the third-most spoken language among Facebook users, yet an internal report notes that, at least as of 2020, the company didn't even employ content reviewers fluent in some of its major dialects. Another report from the same year includes the almost unbelievable finding that, for Arabic-speaking users, Facebook was incorrectly enforcing its policies against terrorism content 77 percent of the time. As much criticism as Facebook's integrity efforts get in the US, those efforts barely exist across much of the world. Facebook disputes this conclusion ("Our track record shows that we crack down on abuse outside the US with the same intensity that we apply in the US," McAlister said) but does not deny the underlying facts. As my colleague Tom Simonite observes, hundreds of millions of users are "effectively second-class citizens of the world's largest social network."

Hopefully the latest round of public scrutiny will push Facebook to break that trend. A company that promises to "connect the world" has no business being in a market where it can't offer the baseline of quality control that it offers its American users.

Protect Content Policy from Political Considerations

Outside observers have complained for years that Facebook bases decisions not on consistent principles but in response to pressure from powerful political figures. A steady stream of news stories over the years has documented key moments when the company's leaders pulled the plug on a proposal to penalize low-quality publishers after outcry from Republicans.

This turns out to be an internal criticism as well. "The Communications and Public Policy teams are routinely asked for input on decisions regarding (a) enforcing existing content policy, (b) drafting new policy and (c) designing algorithms," wrote one data scientist in December 2020, shortly before leaving the company. "Those teams often block changes when they see that they could harm powerful political actors." (Facebook denies this charge, arguing that public policy is only one of many teams that have a say in content enforcement decisions.)

Another document, from September 2020, lays out a detailed approach for how to fix the problem. Titled "A Firewall for Content Policy," it first identifies the organizational structure that its author believes leads to so much mischief. The head of content policy reports to the head of global policy, who reports to the head of global affairs, who reports to chief operating officer Sheryl Sandberg, who, finally, reports to Zuckerberg. As a result, "External-facing teams, especially the Public Policy team, are routinely given power in decision-making about content enforcement and the design of content policy." Choices about what to demote, what to remove, and how to tweak the algorithm must pass through three layers of management concerned with keeping powerful political figures happy before reaching Zuckerberg.

The researcher sketches a simple alternative. First, the content policy team could instead report to another unit, like the central product services division, which in turn reports directly to Zuckerberg. That would cut down on the number of politically motivated veto points. It would also place responsibility for overriding the content team more squarely with Zuckerberg.

Second, the author notes that under the status quo, when a certain decision, like a takedown or demotion, gets "escalated," groups including public policy get to take part. A simple fix would be to keep those escalation decisions within content policy. Similarly, the employee argues for limiting the public policy division's involvement in developing content rules and in making changes to the algorithm. "Public Policy could have input on general principles used to evaluate changes, but those principles would have to be written, and the interpretation of the principles would be solely the responsibility of Content Policy." It's a bit like pro sports: NFL team owners vote on rule changes during the offseason, but they're not down on the field telling the refs when to blow the whistle.

The employee makes a strong case that implementing a firewall "would help with pressing problems for Facebook." Clearly it would be far from a cure-all. Google and Twitter, the paper points out, have versions of a firewall, with "trust and safety" teams separated from public policy. Those companies aren't immune to scandal. But only Facebook has been consistently shown to bend its own rules and stated principles to appease powerful political actors.

Take Your Own Research More Seriously

Facebook is a big company. Not every internal research finding or employee suggestion is worth listening to. Still, the frustration expressed in the leaked files strongly suggests that Facebook's leaders have been erring too heavily in the opposite direction.

The release of these documents has obviously created a massive headache for the company. But it also reveals that Facebook, to its credit, employs some very thoughtful people with good ideas. The company should consider listening to them more.

