WASHINGTON (Sinclair Broadcast Group) — Facebook took additional steps this week to curtail hate speech and extremism, but more than 400 advertisers still went dark on the platform Wednesday as part of a boycott urged by civil rights groups concerned the company is not doing enough to fight hate and misinformation.
Facebook announced Tuesday it is designating a violent network linked to the anti-government “boogaloo” movement—a loosely affiliated group of extremists advocating a second civil war—as a “dangerous organization” and banning any content praising, supporting, or representing it. The company conducted a “strategic network disruption,” shutting down 220 accounts, 28 pages, and 106 groups associated with the network, although non-violent boogaloo content is still allowed.
The move is the latest in a series of efforts by Facebook to demonstrate its commitment to limiting the reach of violent and dangerous content without restricting free and open public debate. It is a fine line, and many feel the company is not walking it well, which is why hundreds of businesses have decided not to advertise on Facebook for at least the next month in protest.
“The social media platforms were built on the idea of free speech, and for a long time, they maintained the online space as one where ideas could flow freely. What they’ve encountered in the last few years is an increased toxicity of this space,” said Sarah Kreps, a professor of government at Cornell University and author of “Social Media and International Relations.” With a wave of political activism pushing back against that toxicity, it appears the status quo was becoming unsustainable.
Amid nationwide protests over racial injustice in the wake of George Floyd’s death in police custody, civil rights advocates—including the Anti-Defamation League, NAACP, and Color of Change—launched the “Stop Hate for Profit” campaign in mid-June. They called upon brands to halt advertising on Facebook and Instagram in July “to force Mark Zuckerberg to address the catastrophic effect that Facebook has had on our society.”
Brands that have announced a pause in ad spending include Coca-Cola, Verizon, Volkswagen, Ford, Clorox, Denny’s, and Unilever. CVS joined the boycott Wednesday. Some companies are specifically cutting Facebook and Instagram ads, while others are temporarily stopping social media advertising more broadly, in some cases through the end of the year.
Tensions between Facebook and advertisers have been building for years for a variety of reasons, and some joined a smaller boycott over the company’s controversial data-sharing with Cambridge Analytica in 2018. Brands have often complained about Facebook’s metrics for ad viewing and its failure to keep their ads away from offensive content.
Despite these ongoing concerns, Facebook advertising revenue in the U.S. is expected to top $30 billion in 2020, with major brands making up only a small share of the total. The momentum behind the boycott has dragged down Facebook’s stock value, but it remains unclear how much lasting financial impact it could have.
The blowback from advertisers comes as Facebook also struggles to contain dissent from within over content moderation decisions. Facebook employees have publicly and privately questioned Zuckerberg’s refusal to remove or flag posts by President Donald Trump that Twitter determined “glorified violence” against protesters or spread false claims about voting by mail.
With the list of participants in the boycott growing, Facebook executives held virtual town halls with advertisers and ad agencies Tuesday, detailing actions the company has taken to improve the user experience on Facebook and Instagram but also highlighting the challenges of moderating content on the world’s most widely used social media platform, according to The Wall Street Journal.
Reuters reported Tuesday that Zuckerberg has agreed to meet with the organizers of the boycott, though previous discussions between top executives and civil rights leaders have been fruitless.
“I think it’s quite right that civil rights organizations should pressure us to always do better,” Nick Clegg, Facebook’s vice president for global affairs and communications, told Bloomberg Monday. “I think that’s totally fair. We have 3 billion people on our platform around the world.”
In an open letter to advertisers initially posted in AdAge Wednesday, Clegg acknowledged frustrations with hateful content, but he argued leaving up material that is offensive but not in clear violation of standards is preferable to removing it because “exposing it to sunlight is better than hiding it in the shadows.” He also emphasized the positive contributions Facebook has made to society, such as helping spread accurate information about the coronavirus pandemic.
“It is worth remembering that when the darkest things are happening in our society, social media gives people a means to shine a light,” Clegg wrote. “To show the world what is happening, to organize against hate and come together, and for millions of people around the world to show their solidarity. We’ve seen that all over the world on countless occasions — and we are seeing it right now with the Black Lives Matter movement.”
Although Facebook executives have insisted they would not compromise policies ostensibly based on free speech principles under financial pressure, they have announced numerous changes over the last week regarding the platform’s handling of hate speech and false information.
In a post Friday, Zuckerberg revealed Facebook will now ban ads that portray any group as a threat based on race, ethnicity, nationality, religion, gender, immigration status, or sexual orientation. Much like Twitter, Facebook will also now flag content that violates its standards but is deemed newsworthy and valuable to the public interest.
Amid controversy over voting rights and the integrity of elections, Zuckerberg highlighted actions being taken to fight voter suppression and promote accurate information about elections. This includes launching what he dubbed “the largest voting information campaign in American history” aimed at helping 4 million people register to vote, banning posts that contain false claims about voting in the 72 hours before an election, and adding links to authoritative voting information to posts about voting.
Facebook said Monday it will submit to an audit by the Media Rating Council of its content monetization and brand-safety tools and practices. According to the MRC, the audit, which will likely take at least six months, is intended to determine the effectiveness of Facebook’s methods for ensuring ads are not placed in proximity to content advertisers deem inappropriate.
Despite finding itself at the center of the #StopHateForProfit campaign, Facebook maintains it does a better job of policing hate speech than other platforms. The company claims 90% of hate speech is identified and removed before anyone even reports it—far more than on Twitter or YouTube—but executives also acknowledge they have a responsibility to do more.
“We know we have more work to do,” a Facebook spokeswoman told The New York Times. “Our principles have not changed, but our leaders are rightly spending time with clients and others to discuss the progress we’ve made on the key issues of concern.”
Meanwhile, other social media sites are taking even more aggressive action, and drawing complaints of political bias in the process. Twitch suspended President Trump’s channel, and Reddit removed thousands of “subreddit” forums, including one dedicated to Trump supporters and one for listeners of the left-wing “Chapo Trap House” podcast.
Those behind the boycott welcomed Facebook’s changes, but they maintain the reforms still do not go far enough to address their concerns. The Stop Hate for Profit Coalition continues to press for greater accountability, changes to policies regarding public and private groups advocating violence and conspiracy theories, new protections against false claims in political ads, and more support for users who experience harassment.
“None of these initial steps will make a significant dent in the persistent hate and racism so prevalent on the largest social media platform on the planet. That’s why we need to keep up the pressure,” the campaign’s organizers wrote in an update for advertisers Monday.
These are just the latest attempts by Facebook to grapple with the massive technological and logistical challenge of policing billions of posts on public and private pages each day. In May, the company announced members of a new Oversight Board that can review and overturn content moderation decisions, and that group will soon begin hearing cases.
“Up until now some of the most difficult decisions about content have been made by Facebook and, you could say, Mark Zuckerberg,” Helle Thorning-Schmidt, co-chair of the board and former prime minister of Denmark, told reporters in May. “Facebook has decided to change that.”
Speaking to Bloomberg Monday, Clegg contended Facebook has no financial incentive to leave hate on its platform, but he added eliminating all hate speech is simply an unachievable goal.
“I don’t want anyone to imagine we can rid the world of hateful speech because that is part of the human condition,” he said. “Our job is to make sure it’s minimized as much as we can minimize it.”
According to Kreps, human moderators and artificial intelligence can sweep up enormous amounts of offensive content, but the underlying challenge is determining what is and is not acceptable in the first place. Some may fear Facebook’s latest moves signal acquiescence to the demands of activists and corporations, and that could be a slippery slope.
“The worry from real free speech advocates would be this is just kind of the first step and we’re potentially entering dangerous territory where the marketplace of ideas has been outsourced to a private firm, which has now outsourced it to its advertisers,” Kreps said.
All of this is happening in an increasingly intense political atmosphere, with a presidential election, racial protests, and a global pandemic looming over every decision businesses make. Advertisers are under pressure from progressive groups to speak out against racism, and social media sites are facing scrutiny from both sides for doing either too much or too little to fight offensive content.
“We’re acutely aware this is a time of great, great sensitivity,” Clegg said.
While Facebook has so far largely resisted calls to moderate President Trump’s posts, Twitter has drawn outrage from the president’s supporters for supposedly silencing conservative users. Trump signed an executive order aimed at challenging the decades-old liability protections tech companies enjoy, and his allies in Congress are pushing legislation to limit their ability to police content.
Even as Facebook took action against some boogaloo-associated accounts Tuesday, a group of Democratic senators sent Zuckerberg a letter demanding information on the site’s efforts to confront hate speech in general and white supremacy in particular. They also questioned how effectively Facebook is enforcing its existing policies against extremist material.
“The prevalence of white supremacist and other extremist content on Facebook—and the ways in which these groups have been able to use the platform as organizing infrastructure—is unacceptable,” wrote Sens. Mark Warner, Mazie Hirono, and Robert Menendez.
Some on the right have already criticized the advertiser boycott as an effort by liberal groups to shut down free speech or questioned the motives of companies announcing they are joining the movement when they were already scaling back their advertising budgets in response to the coronavirus outbreak. The Wall Street Journal editorial board dismissed the boycott as a “progressive power grab.”
“Woke Fortune 500 firms had better hope that they will make fast and permanent friends among the anticapitalists of the new left because they are fast burning through reservoirs of goodwill among American conservatives,” an editorial stated Monday.
According to Kreps, Facebook and other social media platforms have ventured into “perilous political waters,” and there is a growing risk of overplaying their hand. Instituting a wave of new policies four months before a high-stakes election could spur a backlash over perceptions of partisanship and send users on one end of the political spectrum flocking to platforms they consider more permissive of their views.
“I think these things are always pendulum swings, and right now, the political and normative leverage is with the left... My prediction is these companies may go too far and start bringing down forms of speech that seem innocuous enough that they play into this narrative that platforms have been too political in these decisions,” Kreps said.