Last year, a Facebook user in Sri Lanka posted an angry message to the social network. “Kill all the Muslim babies without sparing even an infant,” the person wrote in Sinhala, the language of the country’s Buddhist majority. “F---ing dogs!”
The post went up early in 2018, in white text and on one of the playful pink and purple backgrounds that Facebook Inc. began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower selected the option for “hate speech,” one of nine possible categories for objectionable content on Facebook.
For years, nonprofits in Sri Lanka had warned that Facebook posts were playing a role in escalating ethnic tensions between the Sinhalese Buddhist majority and the Muslim minority, but the company ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”
The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. False rumors circulated widely on Facebook claiming Muslims were putting sterilization pills in Buddhists’ food. In late February 2018 a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka. He survived, but there were more riots in the midsize city of Kandy the following week, resulting in two deaths before the government stepped in, taking measures that included ordering Facebook offline for three days.
The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says, summing up the thinking.
But as she began looking into what had happened in Sri Lanka, Leinwand realized the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local nonprofits and would lead to “imminent violence.” When Facebook saw a similar string of sterilization rumors in June, the new process seemed to work. That, says Leinwand, was “personally gratifying”—a sign that Facebook was capable of policing its platform.
But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. That revelation sparked an investigation by the U.S. Justice Department into the company's data-sharing practices, which has broadened to include a grand jury. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and, when exposed, tried to downplay it with a handy phrase that Chief Executive Officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism, and extortion.”
If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism. Facebook made Leinwand and other executives available for interviews with Bloomberg Businessweek to argue that it’s making progress.
Unfortunately, the reporting system they described, which relies on low-wage human moderators and software, remains slow and under-resourced. Facebook could afford to pay its moderators more money, or hire more of them, or place much more stringent rules on what users can post—but any of those things would hurt the company’s profits and revenue. Instead, it’s adopted a reactive posture, attempting to make rules after problems have appeared. The rules are helping, but critics say Facebook needs to be much more proactive.
“The whole concept that you’re going to find things and fix them after they’ve gone into the system is flawed—it’s mathematically impossible,” says Roger McNamee, one of Facebook’s early investors and, now, its loudest critic. McNamee, who recently published a book titled Zucked, argues that because the company’s ability to offer personalized advertising is dependent on collecting and processing huge quantities of user data, it has a strong disincentive to limit questionable content. “The way they’re looking at this, it’s just to avoid fixing problems inherent with the business model,” he says.
Today, Facebook is governed by a 27-page document called Community Standards. Posted publicly for the first time in 2018, the rules specify, for instance, that instructions for making explosives aren’t allowed unless they’re for scientific or educational purposes. Images of “visible anuses” and “fully nude closeups of buttocks,” likewise, are forbidden, unless they’re superimposed onto a public figure, in which case they’re permitted as commentary.
The standards can seem comically absurd in their specificity. But, Facebook executives say, they’re an earnest effort to systematically address the worst of the site in a way that’s scalable. That means rules general enough to apply anywhere in the world and clear enough that a low-paid worker in one of Facebook’s content-scanning hubs, in the Philippines, Ireland, and elsewhere, can decide within seconds what to do with a flagged post. The working conditions for the 15,000 employees and contractors who do this work for Facebook have attracted controversy. In February the Verge reported that U.S. moderators make only $28,800 a year while regularly being asked to view images and videos that contain graphic violence, porn, and hate speech. Some suffer from post-traumatic stress disorder. Facebook responded that it’s conducting an audit of its contract-work providers and will keep in closer contact with them to ensure they uphold higher standards and pay a living wage.
Zuckerberg has said that artificial intelligence algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories in which context matters. “Hate speech is one of those areas,” says Monika Bickert, Facebook’s head of global policy management, in a June 2018 interview at company headquarters. “So are bullying and harassment.”
On the day of the interview, Bickert was managing Facebook’s response to the mass shooting the day before at the Capital Gazette in Annapolis, Md. While the massacre was happening, Bickert instructed content reviewers to look out for posts praising the gunman and to block opportunists creating fake profiles in the names of the shooter or his victims, five of whom were killed. Later her team took down the shooter’s profile and turned victims’ pages into what the company calls “memorialized accounts,” which are identical to regular Facebook pages but place the word “Remembering” above the deceased person’s name.
Crises such as this happen weekly. “It’s not just shootings,” Bickert says. “It might be that a plane has crashed, and we’re waiting to find out who was on the plane and whether it was a terror attack. There may be a protest, and people are alleged to have been injured.”
And these are the easy cases, where the lines between good and evil are clear and Facebook has developed a formula for responding. On her laptop, Bickert pulls up a slide presentation from a meeting of the company’s Community Standards group, which gathers every other Thursday morning to come up with new rules. As many as 80 employees participate in the discussions, either in person or virtually. The slides show that, on a Thursday last year, the team discussed what to do with #MeToo posts created by women who named their assailants. If the posts were untrue, they could be construed as harassment of innocent men. In the same meeting, the company evaluated viral stunts that younger users attempt, such as the “condom-snorting challenge,” which, with apologies, involves snorting a lubricated prophylactic up a nostril and pulling it out through the mouth. There are dozens of challenges such as this—the chile pepper challenge, the Tide Pod challenge, and so on—that young people do (or pretend to do) to get views. If these stunts can hurt people, should Facebook stop people from promoting them?
In December, after months of discussion, Facebook added new rules. #MeToo accusations are OK, as long as they don’t encourage retaliation. Challenges are also fine, as long as they don’t encourage bodily harm, which would seem to put condom snorting in a gray area. “None of these issues are black and white,” Bickert says.
In congressional testimony and elsewhere, Facebook has deployed a practiced set of responses to criticism about its content decisions. If interrogated about something on the site that was already forbidden by the Community Standards, executives will reassure the public that such content is “not allowed” or that there is “no place” for it. If there’s no rule yet, Facebook will usually explain that it is trying to fix the problem, was “too slow” to recognize it, and is taking responsibility. The company has said dozens of times that it was “too slow” to recognize Russia’s manipulation of the 2016 U.S. presidential election, Myanmar’s genocide, and ethnic violence in Sri Lanka. But “too slow” could be fairly interpreted as a euphemism for deliberately ignoring a problem until someone important complains.
“They don’t want to be held liable for anything,” says Eileen Carey, a tech executive and activist. Since 2013 she’s kept records of drug dealers posting pictures of pills on the web, some of them captioned as OxyContin or Vicodin. Many of these posts include a phone number or an address where interested users can coordinate a handoff or delivery by mail. They are, in effect, classified ads for illegal opioids.
Carey’s obsession started while she worked for a consulting firm that was helping Purdue Pharma remove counterfeit pills. Most tech companies—including Alibaba, Craigslist, and EBay—were quick to agree to take down these images when Carey alerted them. Facebook and Facebook-owned Instagram were the exceptions, she says.
Carey, who like Zuckerberg is from Dobbs Ferry, N.Y., and lives in the Bay Area, would sometimes end up at parties with Facebook executives, where she’d kill the mood by complaining about the issue. “I started sucking at parties once I started working on the whole getting-rid-of-fake-drugs-on-the-internet thing,” she says. Since then, she has spent a few minutes most days searching for (and reporting) drugs for sale on Facebook and Instagram. Usually she got a dismissive automated response, she says. Sometimes she got no response at all. At the time, technology companies were advocating at conferences and in research reports for harsher enforcement of drug sales on the anonymous dark web. In reality, Carey came to believe, most illicit purchases occur on the regular web, on social media and other online marketplaces. “People were literally dying, and Facebook didn’t care,” she says.
In 2018, Carey began tweeting her complaints at journalists and Facebook employees. In April, Guy Rosen, a Facebook vice president who was training the company’s AI software, sent her a message, asking for more examples of the kind of content she was talking about. “Do a search for #fentanyl as well as #oxys on IG [Instagram] and you’ll see lots of pics of pills, those accounts are usually drug dealers,” she wrote to Rosen. She sent over some Instagram posts of drugs for sale. “I reported these earlier and they are still there in the #opiates search—there are 43,000 results.”
“Yikes,” Rosen wrote back. “This is SUPER helpful.” Facebook finally removed the searchable hashtags from Instagram in April—a week after being criticized by Food and Drug Administration commissioner Scott Gottlieb and just a day before Zuckerberg testified before Congress.
Since then, Carey has kept her eye on news reports from Kentucky, Ohio, and West Virginia, where deaths from opioid overdoses have declined this year. Some articles speculate that the reason may be a rise in community treatment centers or mental health resources, but Carey has a different theory: “The only thing that really changed was the hashtags.”
Even so, Facebook’s drug problem remains. In September the Washington Post described Instagram as “a sizable open marketplace for advertising illegal drugs.” In response, Bickert published a blog post explaining that Facebook blocks hundreds of hashtags and drug-related posts and has been working on computer imaging technology to better detect posts about drug sales. She included a predictable line: “There is no place for this on our services.”