Facebook, Twitter Trump bans: what are the rules?

by Avi Asher-Schapiro | @AASchapiro | Thomson Reuters Foundation
Wednesday, 5 May 2021 18:30 GMT

Supporters of U.S. President Donald Trump climb on walls at the U.S. Capitol during a protest against the certification of the 2020 U.S. presidential election results by the U.S. Congress, in Washington, U.S., January 6, 2021. REUTERS/Jim Urquhart

As the Facebook oversight board upholds Trump's suspension, what are the rules on content moderation?

By Avi Asher-Schapiro

SANTA CRUZ, May 5 (Thomson Reuters Foundation) - Facebook Inc's oversight board on Wednesday upheld the company's suspension of former U.S. President Donald Trump, saying his posts on the platform had "created an environment where a serious risk of violence was possible".

Facebook - and social media platforms like Twitter - blocked Trump over concerns of further violent unrest following the Jan. 6 storming of the U.S. Capitol by Trump supporters.

But the board, which Facebook created as an independent body to review its treatment of problematic material, said the social media giant "did not follow a clear, published procedure" when it extended Trump's suspension indefinitely.

"In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities," the board said in a statement on Wednesday.

Tech platforms have grappled in recent years with how to police world leaders and politicians who violate their guidelines.

Here are some key facts about content-moderation rules in light of today's decision:

When can social media platforms suspend your account?

From Twitter to Facebook and YouTube, all major social media platforms have terms of service, or community standards, which set out the rules for their users.

While law enforcement agencies can ask platforms to remove posts that may break the law, in the United States the companies themselves exercise discretion over what legal content can be posted.

Individual posts that fall foul of the companies' guidelines can be removed, and accounts that repeatedly violate them can be locked, restricted or even removed completely.

Facebook, for instance, has a system of "strikes" where pages and accounts that violate rules consistently can be removed as repeat offenders.
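As a rough illustration, such an escalating penalty system can be modeled as a simple strike counter. The sketch below is hypothetical: the class, function and five-strike threshold are invented for this example and do not reflect Facebook's actual rules, which are not fully public.

```python
# Hypothetical sketch of a "strikes"-style repeat-offender policy.
# All names and thresholds here are invented for illustration;
# Facebook's real thresholds and penalties are not public.
from dataclasses import dataclass

REMOVAL_THRESHOLD = 5  # assumed: strikes before an account is removed


@dataclass
class Account:
    handle: str
    strikes: int = 0


def record_violation(account: Account) -> str:
    """Add one strike and return the escalating enforcement action."""
    account.strikes += 1
    if account.strikes >= REMOVAL_THRESHOLD:
        return "remove"    # repeat offender: account removed entirely
    if account.strikes > 1:
        return "restrict"  # intermediate penalty, e.g. limited posting
    return "warn"          # first strike: warning only


# Example: repeated violations escalate from a warning to restrictions.
acct = Account("example_page")
print([record_violation(acct) for _ in range(3)])
# ['warn', 'restrict', 'restrict']
```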

Many platforms have specific rules for public figures, and for posting about elections or civic events.

Before the Nov. 3 U.S. election, Facebook announced a new set of rules for posts concerning the ballot, including a policy of removing posts that spread misinformation about the voting process.

Last year, Twitter unveiled specific rules for the accounts of world leaders, saying it was in the public interest to allow leaders a wider latitude to post.

U.S. President Donald Trump speaks during a rally to contest the certification of the 2020 U.S. presidential election results by the U.S. Congress, in Washington, U.S., January 6, 2021. REUTERS/Jim Bourg

How did the platforms react to the Capitol violence?

After the storming of the Capitol, all major social media companies took some action against Trump's own accounts.

Twitter and Facebook froze his accounts, preventing him from posting further.

Twitter suspended the president's account permanently on Jan. 8, citing the risk of "further incitement of violence."

The platforms had previously stopped short of locking Trump's accounts even when he was accused of posting false information or stoking violence - instead opting to add labels to the president's posts, such as during the Black Lives Matter protests.

As violence flared in Washington in January, the platforms said they would scrutinize all posts related to the events from participants and the wider public.

Facebook said it would remove posts that offered "praise and support of the storming of the U.S. Capitol," and add labels to those that incorrectly challenged the election result.

It also vowed to increase the use of artificial intelligence to flag content that might violate its policies.

In its decision on Wednesday, the oversight board said "Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users."

Facebook's head of global affairs Nick Clegg said the company "will now consider the board’s guidance and develop a response that is clear and proportionate. In the meantime, Mr. Trump's accounts remain suspended." 

Trump responded with his own statement, calling the decision and his banning across tech platforms "a total disgrace" and saying the companies would "pay a political price."

Others called for Facebook to move to make Trump's ban permanent.

"President Trump was left unchecked to use the power and influence of social media to spread lies, discrimination, and violent rhetoric," said Sherrilyn Ifill, the head of the NAACP Legal Defense and Educational Fund.

"We hope that after the reexamination called for by the Oversight Board, Facebook will continue its permanent ban of Mr. Trump," she told the Thomson Reuters Foundation in emailed comments.

An explosion caused by a police munition is seen while supporters of U.S. President Donald Trump gather in front of the U.S. Capitol Building in Washington, U.S., January 6, 2021. REUTERS/Leah Millis

Why have content-moderation rules sparked debate?

Social media platforms have been criticized for not doing enough to police content, for being too quick to block or restrict controversial material, and for applying their rules unevenly.

The platforms tend to have a U.S.-centric approach, leading to sloppy and uneven rule enforcement elsewhere, said Jillian York, director for freedom of expression at the digital rights group the Electronic Frontier Foundation.

In 2018, Reuters documented a pattern of Facebook allowing posts urging hatred against the minority Rohingya Muslim population in Myanmar amid ethnic violence.

Facebook was also slow to block homophobic hate speech in Arabic, while YouTube quickly deleted videos of potential war crimes evidence in Syria, the Thomson Reuters Foundation has reported.

"There are valid arguments for removing violent content from the public view," York said. "But there are indeed risks that blanket policies, implemented using automation, could remove key newsworthy content as well."

The risk of erasing posts that may have long-term value, for example for law enforcement, increases if companies make rushed decisions, said Jeff Deutch, director of operations and research at Mnemonic, which focuses on archiving digital records.

"It's important to archive the vast amount of online content created by and about American right-wing extremists," he said.

"Already, we're seeing countless narratives forming, some in good faith and some in bad, and archiving as much as possible will help in research and holding perpetrators accountable, and create historical records," he added.

This article was updated on Wednesday, 5 May 2021 to include information about the Trump Facebook Oversight Board decision.

Related stories:

Questions swirl about possible racial bias in Twitter image function

Instagram apologises for removing images of Black British model

Remove or restore? Facebook Oversight Board wades into South Caucasus culture dispute

(Reporting by Avi Asher-Schapiro @AASchapiro, editing by Helen Popper and Zoe Tabary. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)
