Misinformation…disinformation…fake news…alternative facts…we have become all too familiar with these words and phrases in recent years. False information plagues the internet, especially on social media, where everyone has a platform to create “fake news” or to spread it, knowingly or not. That is the difference between misinformation, where someone is duped into believing what they are sharing is true, and disinformation, where someone intentionally spreads false information. The spread of misinformation has been present on the internet for most of its life, and yet the problem persists. So who is in charge of fighting it? And since they appear to be doing an inadequate job, is it time for someone else to take on that responsibility?
We’ll look at this problem particularly in the context of social media, because that is where false information gets the most views and is therefore the most dangerous. Right now, the fight against false information on social media is left up to each individual site. “Every platform has a different policy,” Professor Brad Conaway says. Conaway is an assistant professor in the School of Journalism and New Media at the University of Mississippi who specializes in social media.
According to Conaway, Facebook has seen the highest rate of misinformation of any social media platform. “So what do they do? They hire fact-checkers, they hire moderators.” Many will raise concerns about the fact-checkers themselves. Maybe they are getting funding from the NRA or Planned Parenthood; some are more reliable than others.
“You have to fact-check the fact-checkers.” Conaway advises tracing these fact-checkers back to the source to see whether you can rely on their warnings. That can be difficult for the average user, but he offers a few tips. “What you should look at is the process. Are they being very transparent about the process where they arrived at that information? Are there any kind of logical fallacies in their reasoning? Is there anything they’re leaving out?” At the end of the day, the decision to believe or not falls to each individual user. But how do these fact-checkers go about identifying misinformation in the first place? The same way everything else on social media is handled: through an algorithm.
“That’s the root of the problem,” Conaway says. “The root of the problem is that mistruths spread faster than truths online.” The algorithm decides what you see on your wall, your timeline, your For You Page. Whatever the platform, your feed is determined by the algorithm.
“It’s not geared to show us the truth, it’s geared to show us things that will keep us on the platform longer,” Conaway explains. So if the algorithm is designed to get views, how do sites hope to flag or take down misinformation? The answer appears to be self-reporting by the users. The platforms are trusting users to report misinformation themselves when they see it. The problem with that model? The users have to be able to identify it, and given how prevalent misinformation is on social media sites across the board, it would seem that most users have yet to refine that skill.
If and when the information is reported by a user, it doesn’t immediately get flagged as misinformation or taken down; the platform looks at it and decides how to proceed. “Can you trust the mechanisms of a social media company to work as well as the mechanisms of an editorial decision-making body of journalists?” Conaway asks. “So far no…in the end they are private companies.”
The companies themselves are not always trustworthy, or at least their practices are insufficient; the fact-checkers could be in someone’s pocket; and individual users are poorly trained to tell truth from mistruth on their own. “You can’t fault people when they fall for things,” Conaway says. “There are brilliant people, the most educated people in the country, who fall for misinformation every day.” So who is both trustworthy enough and educated enough to separate the fake from the fact online? Enter journalists.
Trust in journalists has seen better days. A 2021 Gallup poll measured trust in mass media, with only 36% of polled Americans reporting a great deal or fair amount of trust. While that number is low, there is room for improvement. A 2020 poll from the Pew Research Center shows two major factors that boost public trust in news media: corrections to stories and connectedness to the community. And while overall trust is low, another poll from 2019 indicates that 81% of Americans say their local journalists are doing at least fairly well. So trust may not be as low as it seems.
If the job of a journalist is to inform, that has to include fighting what is false in addition to promoting what is true. If a reporter interviewed a local politician who lied about the facts, they would be expected to do the legwork and dispute those claims, even if the correction had to come later, after extensive research. In a time when journalists are expected to have a heavy presence on social media, the same should be expected of them there.
And even if the public chooses not to trust journalists in general, their jobs would be on the line if they served as online fact-checkers. If journalists agreed to take on this crusade against misinformation, it would become the world’s most public tightrope walk. Anyone wishing to see a particular journalist or agency fall would relentlessly fact-check each claim made, hoping to find a mistake they could publicize. If that happened, the journalist’s or agency’s reputation would be instantly tarnished. The public and journalists would be holding one another accountable.
Not only would adding “misinformation fighter” to the journalist’s job description help slow the spread of fake news online; it could also be the answer to America’s crisis of public trust in journalists.