Can warnings from fact-checkers reduce sharing?
The misinformation expert Claire Wardle, writing in the current issue of Scientific American, poses (then expertly answers) a key question for people concerned about the current state of the online information ecosystem: Why do people share misinformation, conspiracies and other kinds of misleading content on social media?
(The article is part of a larger package dedicated to “Truth, Lies and Uncertainty.”)
Wardle, who is the U.S. director of First Draft, a nonprofit focused on ways to address misinformation, cites several reasons people create this kind of content, many of which will be familiar to readers of this newsletter. Some seek political influence, some are just causing trouble, and some do it for money.
As for sharing, one of Wardle’s points is that people’s willingness to “share without thinking” is precisely what the creators of disinformation want. “The goal is that users will use their own social capital to reinforce and give credibility to that original message,” she wrote.
So when it comes to sharing, what can be done to give people pause?
One answer came recently from Paul Mena, a professor of journalism and news writing at the University of California, Santa Barbara, whose new research offers some encouraging news for fact-checkers. Mena concluded that people were less likely to share Facebook posts carrying a fact-checking warning label than posts that were not flagged.
In the experimental design, some posts were labeled as “disputed,” similar to the way Facebook used to tag posts rated false by fact-checkers.
Read more here: https://www.poynter.org/fact-checking/2019/warnings-from-fact-checkers-could-discourage-people-from-sharing-false-facebook-posts-study-says/