Facebook announced on Thursday that it would expand a fact-checking program to its Instagram image-sharing service. Instagram users in the US can now report content they believe is false, but it’s not clear that the system, which is already overwhelmed, can handle more suspect information.
“Facebook did not ever scale the fact-checking program on Facebook to be able to reach all users and all information on Facebook,” says Robyn Caplan, a media and information policy scholar at Rutgers who studies social media governance. “I’m not quite certain how they’re going to scale to Instagram effectively.”
Instagram was once the land of golden filters, where positivity reigned supreme. More recently, though, the platform has fallen victim to the same hate speech, bullying, and misinformation that plague just about every social media site. Systems that can respect free speech and sensitively address complicated, culturally inflected conversations at Instagram's monstrous and growing scale have proved elusive.
Facebook began its fact-checking initiative in the wake of the 2016 election. When users see content they think is suspicious or misleading, they can flag it. If posts are repeatedly flagged, Facebook sends them to fact checkers at organizations like PolitiFact, the Associated Press, and Factcheck.org. Those fact checkers aren't obligated to review every flagged post; they choose the ones they consider most important or impactful to evaluate. On Instagram, posts that are deemed false aren't taken down, but they are removed from the site's Explore and hashtag pages, which Stephanie Otway, a