OnlySky Quick Take

Snapshots of major issues


Most Americans agree that social media is driving disinformation. Lies spread on social media can shape behavior and shift users’ political beliefs. But there is no consensus about how government should respond.

Some believe that the government should step in and regulate social media, and by extension the big tech companies, forcing them to remove disinformation from their platforms. Others believe that regulating social media won’t do much, and that disinformation will always find a way to reach a receptive audience.

If governments really did push social media companies to censor clear disinformation, would it have a political impact? 

Luckily, there are some instances of social media disinformation bans that we can look at. YouTube banned vaccine misinformation on its platform, and Twitter famously banned President Trump for his role in the January 6th insurrection and for spreading calls to political violence. Research has found that election misinformation dropped by 73 percent after Trump was banned from Twitter.

There is also evidence that fewer election misinformation videos were shared on other sites after YouTube cracked down on such content in December of 2020. It is clear that these bans can dramatically reduce the amount of disinformation shared and consumed by users. So why don’t social media platforms opt to ban disinformation campaigns outright? It comes down to engagement.

Disinformation is one of the most engaging forms of content.

Social media platforms are designed to push their users to engage with one another, and disinformation is one of the most engaging forms of content out there. Social media companies make money through advertisers who pay for placement based on a platform’s engagement metrics and number of active users.

These platforms are thus incentivized to ignore disinformation and let it slide, even when it is harmful, in order to keep their engagement numbers up. We should expect social media sites to pursue only the narrowest bans on certain types of content; going any wider could cost them engagement, users, and revenue.

The most effective way to fight disinformation on social media is likely to push for government intervention in the market. Leaving the problem to the social media companies themselves will probably yield only extremely narrow content bans.

But there are also risks to government action. Democrats and Republicans likely have very different ideas of what “disinformation campaigns” look and sound like. Democrats may be talking about accounts pushing lies about the Covid-19 vaccine, while Republicans might consider groups fighting voter suppression or advocates of critical race theory to be pushing disinformation. These kinds of disagreements over what disinformation is, and how to fight it, make it that much less likely that Congress will make a major play to regulate social media companies in the near future.


Marcus Johnson

Marcus Johnson is a political commentator and a political science Ph.D. candidate at American University. His primary research focus is the impact of political institutions on the racial wealth gap.