Social Media and Political Misinformation

Public domain image of social media apps on a phone.

Sofia Rest, Staff Writer

What is political misinformation, and how does it spread?

The Library of Congress defines political misinformation as “false information deliberately and often covertly spread . . . in order to influence public opinion or obscure the truth” (Levush). Over the past few presidential elections, political misinformation has come to the forefront of many Americans’ minds as a threat to our ability to form researched opinions and cast educated votes. People of all ages have become increasingly dependent on online sources for news, trends, and social interaction, especially as the ongoing pandemic pushes daily screen time to all-time highs. As this dependence on apps and websites deepens, the abuse of technology to spread false news across social media platforms has become a tried-and-true method of swaying political campaigns, smearing candidates, and ultimately inciting fear and distrust among Americans.

A scientific research paper, “The spread of true and false news online,” examined how misinformation gained such an effective hold on the most prominent political campaigns. Vosoughi et al. analyzed Twitter data spanning 2006 to 2017 in an attempt to quantify fake news’ reach and source. They found that “[a]bout 126,000 rumors were spread by ∼3 million people” and that “the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people” (Vosoughi et al.). Curiously, the study also found that the Twitter bots often blamed for fake news did not promote false news significantly more than real news, implying that human users were behind the spread. The authors hypothesized that false statements diffused faster than true ones because misinformation preys upon and incites strong emotional reactions. Another theory attributed the difference to the novelty of false news, which prompts surprise and a stronger urge to share than news that is true and already widely known.

What steps are being taken, and what can we take, to stop its spread?

While political misinformation is certainly a threat to Americans’ ability to develop informed opinions, one also ought to consider the consequences of widespread censorship. On the delicate balance between free speech and censorship, the Library of Congress writes, “While the dangers associated with the viral distribution of disinformation are widely recognized, the potential harm that may derive from disproportional measures to counter disinformation should not be underestimated. Unlimited governmental censorship… broad application of emergency powers to block content… draconian penalties for alleged offenders without the ability to present an effective defense[,] and strict enforcement of defamation laws in the absence of journalistic defenses are just a few examples of potential threats to the principle of free speech” (Levush).

However, Dustin Carnahan, an assistant professor of communication at Michigan State University’s College of Communication Arts and Sciences, notes that the First Amendment right to free speech constrains only the government, not private companies such as Twitter, Facebook, Instagram, and other social media platforms. He presents several different perspectives on how misinformation should be managed. The first view asserts, “[T]he public interest is best served when all content is visible on social media, no matter how inaccurate, hateful or vile. This position holds that while people can judge content and hold speakers accountable themselves, bad ideas will be shown for what they are, disregarded via discussion and more productive conversations will result” (Brooks). A second view highlights the downsides of such a community, arguing that some content harms the public’s well-being by enabling hateful discourse and undermining healthy decision-making.

Many leading tech companies have historically taken a passive approach to this issue, in line with the first perspective Carnahan describes. Recently, however, policy has gradually shifted as companies work to purge false or inflammatory statements from their sites. After the attack on the U.S. Capitol in the aftermath of the 2020 election, a slew of social media sites including Twitter, Facebook, and Instagram took steps to restrict, suspend, or even permanently ban Donald Trump or related topics “due to the risk of further incitement of violence” (Fischer). YouTube has accelerated enforcement of its policies against election and voter fraud misinformation; a YouTube spokesperson announced in reference to the Capitol attack, “Due to the extraordinary events that transpired yesterday, and given that the election results have been certified, any channel posting new videos with these false claims in violation of our policies will now receive a strike, a penalty which temporarily restricts uploading or live-streaming” (Fischer). Carnahan expects these companies to continue tightening restrictions in response to particularly controversial events in the future.

As companies devise restrictions, we must also take our own steps to ensure we are receiving and sharing reliable information. Noticing a strong emotional response to novel, controversial, or one-sided news is the first step toward identifying false, inflammatory information. Before sharing an article or forming an opinion on an issue, cross-reference the information with reliable, fact-based sources outside of social media applications, which rely on algorithms to feed you customized content. Another measure is to catch up on current events through fact-based news apps rather than social media.

Additional Resources

  • This article analyzes numerous media sources for bias and reliability and includes lists of websites to avoid, read with a careful eye, and use for unbiased reporting.
  • This website provides links to articles from the left, right, and center political perspectives. It is a great tool for comparing bias.

Works Cited

Brooks, Caroline. “The truth behind fake news and politics on social media.” MSU Today, Michigan State University, 2 June 2020, msutoday.msu.edu/news/2020/the-truth-behind-fake-news-and-politics-on-social-media/. Accessed 20 Feb. 2021.

Fischer, Sara. “All the platforms that have banned or restricted Trump so far.” Axios, Axios Media, Jan. 2021, www.axios.com/platforms-social-media-ban-restrict-trump-d9e44f3c-8366-4ba9-a8a1-7f3114f920f1.html. Accessed 20 Feb. 2021.

Levush, Ruth. “Government Responses to Disinformation on Social Media Platforms: Comparative Summary.” Library of Congress, 2019, www.loc.gov/law/help/social-media-disinformation/compsum.php. Accessed 20 Feb. 2021.

Vosoughi, Soroush, et al. “The Spread of True and False News Online.” Science, Mar. 2018, doi:10.1126/science.aap9559. Accessed 20 Feb. 2021.