
Inside the room where Facebook decides what 2.3bn people can and can't say

Facebook governs more people than any country on Earth. But who makes its rules? Credit: AFP

It’s around 9:30am at Facebook’s headquarters in Menlo Park, California, and things are getting heated. Or at least as heated as they ever seem to get in this land of free snacks and bright posters. The subject is nudity, and specifically female nipples: who should be allowed to show them? Should they be hidden from children? And what kind of backlash could these decisions provoke? 

One employee, dialing in from Dublin, proposes making an exception to Facebook’s no-nipples policy for indigenous peoples whose traditional clothing leaves their chests uncovered. But colleagues immediately raise doubts, objecting that this could single out non-white people and questioning the wisdom of Facebook deciding whose nudity counts as “traditional” and whose is just obscene. For a moment, everyone talks at once.

It is in this conference room that Facebook decides what 2.3bn people around the world can and cannot say on its service. Every two weeks, representatives from its policy, research, software design and content moderation teams meet to consider proposed changes to its voluminous speech rules. Those changes then filter down to Facebook’s small army of content moderators – 15,000 strong at the last count, working in more than 40 languages – and are enforced upon users from India to Illinois.

For a long time, Facebook has insisted that these rules are the product of extensive consultation and robust internal debate. Until recently, however, it has rarely let outsiders into that process. Now it is opening up: the Telegraph was allowed to sit in on two such meetings, in February and in April, and spoke to Facebook employees involved in the process.

Together, they reveal how the world’s biggest social network is dealing with the responsibilities of its global dominance – and how it handles the strange, complicated and sometimes surreal questions which that position forces it to answer.

The room where it happens

Once, before Facebook grew so big, it was far more relaxed about policing speech. Its first rulebook, in 2008, was written entirely by one man, Dave Willner, and was one page long. It was basically just a list of different things that moderators should definitely remove (chiefly “Hitler and naked people”). For everything else, it instructed them to go with their gut. The rule was: “Feel bad? Take it down.” At first, Mr Willner removed many of the bad posts personally.

That changed, though, within a year. Mr Willner drew up a new speech code, 15,000 words long. And as Facebook battled ethical crises, advertiser revolts and regulatory scrutiny on its breakneck journey to governing more people than any nation on Earth, the code kept growing. Today its public rules are around 25 pages long, but its internal moderation guidelines, at the last leak, ran to some 1,400 pages.


Facebook’s lush HQ at Menlo Park is a strange setting for a world government Credit: Michael Short/Bloomberg

Changes to the rules can come from many sources. Sometimes they are suggested by outside experts, or forced upon Facebook by some new controversy. Sometimes Facebook’s content policy team, which manages the rulebook, researches and proposes new rules for itself. Sometimes Mark Zuckerberg, Facebook’s chief executive, and Sheryl Sandberg, its chief operating officer, take a personal interest – but they have been overruled before.

Each change then gets its own working group, managed by the content policy team but including employees from across the company. A lot of the real work happens on Facebook’s internal social networks. But to come into force, every change has to pass through the fortnightly meeting, officially known as the Content Standards Forum. Facebook claims the CSF has existed in some form for about six years, but it has only recently become so formal. Today it is Facebook’s chief rule-making apparatus.

Only some of the CSF’s members are actually sitting in Menlo Park. Other teams dial in from offices across the globe, from other conference rooms with whimsical names such as “A Stew to a Kill” and “Iain M Banks” (the late Scottish sci-fi writer has a cult following in Silicon Valley). The attendees are pretty diverse, not just professionally but demographically: when the Telegraph visits, the main table hosts more women than men and is about 50pc white. That makes sense, given the dizzying range of countries and cultures which are affected by the decisions made in this room.

Often, the CSF is chaired by Monika Bickert, a former Chicago prosecutor who is now one of Facebook’s top policy deciders. She is an old-fashioned believer in free speech who has complained about outrage on university campuses, but also a pragmatist who once literally rolled her eyes when a Vanity Fair reporter asked her about the company’s official mission to “make the world more open and connected”. Sheryl Sandberg also sometimes attends, though the Telegraph did not see her.

‘Profane animal terms’

The morning of Tuesday, April 9 found the CSF discussing the finer points of swearing. Who has the right to say the word “c—”, and under what circumstances? Should “dick” and “cock” be considered as offensive as “bitch”? And what should Facebook do about cultures, such as those of Australia and Glasgow, where the c-word is used affectionately? One expert is quoted as saying that to some people, “c—” is just a form of “verbal cuddling”, prompting laughter throughout the room.

But this was not some puritanical debate about the boundaries of politeness. The proposal before the CSF was about stopping misogynist harassment. Experts had told Facebook’s research team that “female-gendered cursing” – swear words targeted specifically at women – “feel especially intense” to the person being harassed. The question was how to make sure Facebook users feel safe, and crack down on the most intense forms of harassment, without censoring “political discussion, heated debate or colloquial language”.


A scene from Facebook’s ‘war room’, a temporary facility built to help it monitor the spread of disinformation during the 2018 election season Credit: Noah Berger/AFP

This is a common dilemma at Facebook: the company never wants to be seen as intervening in a country’s political debates, but if its users don’t feel safe then they may not stick around for long. Twitter, a much smaller network, has learned this the hard way: its well-earned reputation for abuse, toxicity and spam bots has hurt its share price and driven some users away.

The same dilemma comes up in a presentation about Facebook’s “newsworthiness” exemption for hate speech. Sometimes, moderators are told to leave hate speech online because it is “newsworthy or important to the public interest”.

The CSF is shown two examples: a post by a Polish politician referring to a transgender woman as “it”, which is removed for dehumanising language, and a video recorded by a Hungarian politician at a gay pride march, lamenting the rise of LGBT rights and the arrival of immigrants. Despite some of its language, it stays online; it is a substantive criticism of the current social order by an elected representative.

One member of Facebook’s policy team worries that such exemptions might be giving politicians a perverse incentive to push the boundaries. He is promptly press-ganged into the working group. 

A decision is made

One problem with any proposed change is that it has to be enforceable by those 15,000 moderators. Many of them are contractors rather than employees, and there have been persistent concerns about their welfare and their pay.

An investigation by The Verge found moderators in an outsourced office in Arizona having panic attacks and long-term trauma from viewing horrific material, constantly joking about suicide, having their toilet breaks micromanaged and even having sex in bathrooms to relieve their stress through a momentary thrill. One former moderator has sued Facebook for giving her PTSD.

These moderators are reportedly subject to strict targets, and the primary metric they are assessed on is “accuracy”. Ambiguous rules could therefore expose them to merciless disciplinary action, as well as paralyse the system. (A Facebook executive said that some outsourcing companies might be lagging behind changes in Facebook’s policies, but argued that there has to be some way of assessing moderators’ performance.)

This comes up during a discussion of Facebook’s rules on the promotion of cannabis products. A working group is being formed to consider whether these rules should be relaxed, but not all cannabis products are equal. Different ingredients are subject to different rules, and there is some concern as to whether the moderators, handling hundreds of pieces of content every day, will be able to tell which products are banned and which are not.


Sheryl Sandberg, Facebook’s chief operating officer, oversees its policy apparatus Credit: Lino Mirgeler/AFP/Getty

Back to swearing: one proposal is for Facebook to draw up specific lists of banned curse words in every country in which Facebook operates, and apply different rules in different places. That would cover outliers like Dutch, in which the strongest insults are “kanker”, meaning cancer, and “kankerlijer”, meaning cancer sufferer. Dutch people have been prosecuted for referring to police officers as cancer sufferers. But these national lists would be complex and burdensome for Facebook to maintain, and may not be worth the effort.

Another option would be to ban all gendered cursing, including profane genitalia terms (“c—” and “dick”) and profane animal terms (“bitch” and “cock”). But “dick” and “cock” are not really taboo for most people, and removing them would risk “possible over-enforcement on discourse around public figures, which may limit political criticism”. Should Facebook users really be banned from calling their president or prime minister a dick?

In the end the CSF decides to ban only female-gendered cursing, such as “c—”, “pussy” and “bitch”, which targets private individuals. (Public figures get slightly different rules.) An exemption is made for curses used as terms of endearment. Facebook’s experts agreed that these were the most intense forms of cursing, not only for women but for men too (would most men be more offended by being called a pussy or a dick?).

There is no voting at the CSF, just consensus, and this recommendation passes without dissent. It will go live starting in May. 

Black kids and white men

Policing such a wide range of content requires truly baroque detail, sometimes to the point of absurdity. Facebook’s rulebook is a system of interlocking prohibitions which balance and support each other like elements of a computer program. Like a computer program, they also sometimes produce glitches. That was how, in 2017, Facebook ended up being excoriated in the US Congress for censoring invective about “white men” but not “black children”. 

This happened because of a quirk in the way Facebook prioritised different elements of people’s identity. Its old hate speech rules banned threatening violence against “protected characteristics”, such as race, gender and religion. But if the post in question also included reference to a “subset” of someone’s identity – such as being a professor or being an old person – the subset took precedence over the protected characteristic, effectively negating it.

The rationale was that Facebook did not want to ban attacks on, say, ballerinas, and could not tell whether an attack on “Jewish ballerinas” was attacking them for their Jewishness or for their dance skills. Hence, “white men” was a combination of two protected characteristics, whereas “black children” was a protected characteristic negated by a subset. In late 2017, this whole system was overhauled and replaced with a new one, featuring three tiers of harmful speech.
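In rough terms, the old precedence quirk can be sketched in code. This is only an illustration of the logic described above, not Facebook’s actual system; the term lists and function name are invented for the example.

```python
# Illustrative sketch of the pre-2018 precedence quirk; not Facebook's real code.
# The term lists below are invented examples of the categories described above.
PROTECTED = {"white", "black", "men", "women", "jewish", "muslim"}  # race, sex, religion...
SUBSETS = {"children", "professors", "ballerinas", "old people"}    # age, occupation, hobby...

def attack_removed(target_terms):
    """Old rule: a 'subset' of identity took precedence over a protected
    characteristic, effectively negating it."""
    if any(term in SUBSETS for term in target_terms):
        return False  # the subset wins, so the attack is not treated as hate speech
    return any(term in PROTECTED for term in target_terms)

print(attack_removed(["white", "men"]))       # True  - two protected characteristics
print(attack_removed(["black", "children"]))  # False - "children" negates "black"
```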

That was also the year that the Me Too scandal erupted. Facebook was flooded with statements such as “men are scum” and “men are pigs”, which were unambiguously hate speech under the rules of the time: “dehumanising language” against a protected characteristic. After many justifiably angry women got banned, Mr Zuckerberg asked the CSF to consider reforming the policy. But there were strong doubts about Facebook sending the message that men were immune to hate speech, and this rule is still in place today. 

The problem of nudity

The gendered cursing proposal passed without too much trouble. Other proposals are more contentious.

For years, Facebook has banned images of female nipples. In this case, female is defined as belonging to anyone who identifies as female, so that a transgender man who has not had a mastectomy would be exhibiting male nudity (discussion is still ongoing as to how this should apply to people who identify as neither male nor female).

But this policy has come under fire from breastfeeding mothers, breast cancer survivors, transgender people sharing pictures of their transitions and the prime minister of Norway. One campaign urged Facebook to “free the nipple”. Some feminists object that treating women’s bodies as more obscene than men’s is fundamentally sexist. Even now Facebook is considering relaxing a separate policy on child nudity, because many people want to share innocent, non-sexual pictures of their children or of themselves as children.

In response to these campaigns, Facebook has carved out exceptions for pregnancy, breastfeeding, protests and medical matters. Increasingly, it is also having to make case-by-case allowances for “cultural” nudity, such as the kind displayed by indigenous Brazilians.

The CSF is shown a slide which says that experts consulted by Facebook “overwhelmingly” agreed it should “allow more nudity on the platform”. The question, then, is how much.

The most radical option is to ban female nipples only if the image is “sexually suggestive”, and let all others go free, albeit with an “age gate” to hide them from young users. This would be a new world, with men and women treated equally and no more awkward carve-outs and exceptions. It would also risk flooding Facebook with borderline pornography, and the age gate might be taken to imply a negative judgement. 

Instead, the policy team recommends formalising an exception for “cultural/indigenous nudity”. In parts of Nairobi, they explain, female toplessness is not even considered nudity, so that some people were confused by the idea of making an “exception” at all.

But this proposal quickly meets respectful but vigorous objections. An Africa expert notes that all the examples feature brown people, and is concerned that the policy will single them out. Lots of white people, she argues, have “cultural” nudity too: will this policy cover Finnish skinny-dippers, topless sunbathers in Florida or celebrants at Mardi Gras? If not, why not? And if they were allowed, how would a moderator even tell that a nude bather in Finland was really from Finland?

A senior policy person in DC adds that this line will be hard for Facebook to defend in public. It will put the company in the awkward position of having to explain just how it decides whose nudity is “cultural” and whose is obscene. There is also a concern that a cultural exemption would create a loophole allowing unscrupulous users to spread images of indigenous people without their consent. That could lead to real-world harm, and possibly suicide.

The original speaker defends the policy, but there is no consensus. It’s back to the drawing board. 

Just another Tuesday

If these discussions sometimes seem surreal in their exactitude, there is a reason. For years Facebook has been accused of taking decisions in secret without properly thinking them through. Its old motto of “move fast and break things” was retired in 2014, but the words still haunt it, and its critics never tire of applying them to Facebook’s effect on democracy and society.

The motto of the CSF, though, seems to be “move slowly and question everything”. Policy changes can take months to wind their way through this system. And instead of keeping it under wraps, Facebook is now inviting reporters to watch these decisions being made, as well as publishing regular minutes on its website. In future, the company says, it may add a change tracker to its rulebook so that everyone can see what has changed and when.

This is just one part of a wider push for greater transparency. By next year, Mr Zuckerberg promises, Facebook will also have a “supreme court” to hear users’ appeals against moderation decisions. Once that is established, the world will be able to see how Facebook’s rules are made and also how they are enforced.

This transparency is hardly perfect. The CSF is still technically a closed and private meeting, and journalists who attend do so at Facebook’s pleasure. It is considering making access more formal, but there are no promises. Perhaps this makes sense, given that CSF members are not elected politicians but only private employees of a private company which just happens to be acting like a government. 

The other big hole in Facebook’s transparency is moderation itself. The company’s outsourced moderation centres are shrouded in secrecy, which it justifies on the basis that contractors need to be protected from angry users seeking revenge for contentious decisions. That’s a fair worry, but of course it also means there is little scrutiny. It is hard to get detailed information from Facebook about what welfare standards it demands from its contractors.

Only once has Facebook invited a journalist, Casey Newton, to visit one of its moderation centres in the US – and that was only after he told the company he had already spoken to numerous anonymous leakers.

But what the CSF does show is that someone inside Facebook really is thinking carefully about how its rules will change the world. Mr Zuckerberg and his lieutenants often talk about the “trade-offs” Facebook has to make, and here they are: freedom versus safety, ideals versus operational resources, equality of outcome versus equality of rules.

The end of the meeting is clean. The chair asks if anyone has any further questions. If not, it’s thanks all round and back to your departments. The attendees don’t mill around; they all have other meetings to go to. This is just a normal Tuesday, and in two weeks’ time there will be another pile of bizarre and byzantine decisions for Facebook to make.
