Facebook's big new experiment in governance

What if platform policies were written in part by their users?

“Facebook logo on a ballot box, digital art,” by DALL-E

In June, I wrote that to build trust, platforms should try a little more democracy. Instead of relying solely on their own employees, advisory councils, and oversight boards, I wrote, tech companies should involve actual users in the process. Citing the work of Aviv Ovadya, a technologist who recently published a paper on what he calls “platform democracy,” I suggested that social networks could build trust by inviting average people into the policymaking process.

I didn’t know it at the time, but Meta had recently finished a series of experiments that tried to do just that. From February to April, the company convened three groups of users across five countries to answer the question: what should Meta do about problematic climate information on Facebook?

The question came as watchdogs are increasingly scrutinizing the company’s approach to moderating misleading information about the environment. Last year the Guardian reported on an analysis performed by the environmental group Stop Funding Heat that found 45,000 posts downplaying or denying the climate crisis. And in February, after Meta promised to label climate misinformation, a report from the watchdog group Center for Countering Digital Hate found that “the platform only labeled about half of the posts promoting articles from the world's leading publishers of climate denial,” according to NPR.

Against that backdrop, Meta hired a policy consulting firm named Behavioural Insights Team, or BIT, to bring Facebook users into the policy development process. Specifically, users were asked what Meta should do about “problematic information,” which BIT defined as “content that is not necessarily false, yet expresses views that may contain misleading, low quality, or incomplete information that can likely lead to false conclusions.”

Meta wouldn’t give me any examples of what it considers problematic climate speech. But I can imagine panels being asked whether Facebook should intervene if, for example, a user with a big following sometime this winter asks something like “if climate change is real, why is it cold outside?”

At all the big platforms today, average users do not have a say in how this question gets handled. Instead, it’s left to company executives and their policy teams, who often do consult experts, human rights groups, and other stakeholders. But the process is opaque and inaccessible to platform users, and in general it has undermined confidence in the platforms. It’s hard to put trust in a policy when you have no idea who made it or why. (Not to mention who enforces it, or how.)

For its experiment, Meta and BIT worked to find about 250 people who were broadly representative of the Facebook user base. They brought them together virtually across two weekends to educate them about climate issues and platform policies, and offered them access to outside experts (on both climate and speech issues) and Facebook employees. At the end of the process, Facebook offered the group a variety of possible solutions to problematic climate information, and the group deliberated and voted on their preferred outcomes.

Facebook wouldn’t tell me what the groups decided — only that all three groups reached a similar consensus on what ought to be done. Their deliberations are now being taken under advisement by Facebook teams working on a policy update, the company told me.

In a blog post today, BIT said participants expressed high satisfaction with the process and its outcomes:

We found high amounts of both participant engagement and satisfaction with the deliberative process. As importantly, they demonstrated compelling evidence that participants could engage in meaningful and respectful deliberation around a complex topic. […] 

Participants were impressed by how their groups were respectful of the wide range of opinions and values held across the group. As one participant commented: “I was going into this [assembly] knowing that not everyone is going to have the same opinions or feelings or thoughts as me… At the end of the day, we are not going to shame each other for how we felt or what we thought.” They were also pleased at how their groups came together to reach a decision. One participant reflected that “[e]veryone was very courteous, and I was surprised by the amount of common ground seemingly reached.”

Meta was impressed with the results, too, and plans to run further experiments in platform democracy.

“We don't believe that we should be making so many of these decisions on our own,” Brent Harris, vice president of governance at the company, told me in an interview. “You've heard us repeat that, and we mean it.”

Harris helped to oversee the creation of the Oversight Board, a somewhat controversial but (I’ve argued) useful tool for delegating authority on some matters of content moderation and pushing Meta to develop more open and consistent policies. Now Harris has turned his attention to platform democracy, and says he’s encouraged by the early results.

“We think that if you set this up the right way, that people are in a great position to deliberate on and make some of the hard decisions (around) trade-offs, and inform how we proceed,” Harris said. “It was actually really striking how many folks, when they came together, agreed on what they thought the right approach would be.”

In a survey after the process, 80 percent of participants said Facebook users like them should have a say in policy development. (I’d love to ask the other 20 percent a few questions!)

Promising though the early results may be, platform democracy is not a guaranteed feature of Facebook in the years to come. More executives and product teams need to buy into the idea; the process needs to be refined and made cheaper to run; and there are more experiments to conduct on using deliberative processes with specific groups or in specific geographies.

But in a world where, thanks to Texas and the rogue 5th Circuit Court of Appeals, platforms are at risk of losing the right to moderate content at all, Meta and its peers have every incentive to explore bringing more people into the process. With trust in tech companies at or near all-time lows, it’s clear that relying solely on in-house policy teams to craft platform rules isn’t working as intended for them. It may be time to give people more of a voice in the process — before the Supreme Court decides that, when it comes to regulating speech, the platforms don’t deserve any voice at all.


Governing


Industry


Those good tweets


Talk to me

Send me tips, comments, questions, and deliberative processes: casey@platformer.news.