Can AI be a “sword for truth”? That question was at the center of a debate during Voices: The European Festival for Journalism and Media Literacy.
One of the primary concerns about AI in journalism is its capacity to generate convincing but false information. Philosopher Giovanni Spitale shared research suggesting that AI-generated misinformation can be even more believable than misinformation written by humans. Deepfake technology has exacerbated this problem, making it easier to manipulate public perception. A recent example occurred during the Slovak elections, when deepfake audio surfaced falsely implicating a candidate in election rigging. Because this happened during a mandatory pre-election silence period, the candidate had no opportunity to refute the claims, highlighting AI’s potential to undermine democratic processes.
Beyond deepfakes, AI’s unpredictable nature adds to the complexity of regulating misinformation. Large language models operate with billions of parameters, often producing outputs that even their creators struggle to fully explain. While AI can mimic human intelligence, its responses remain largely unpredictable, making its impact on journalism difficult to control and regulate.
Despite these challenges, Spitale says AI also has the potential to be a powerful ally in the fight against misinformation. Automated fact-checking tools can assist journalists in identifying fake news, tracing sources, and verifying content in real time. AI-driven systems can analyze inconsistencies in social media posts, detect deepfake videos, and flag misleading information before it spreads.
Spitale also offered evidence that AI-generated dialogues can even help reduce conspiracy beliefs, indicating that AI has a role to play in mitigating misinformation. Furthermore, AI’s ability to process vast amounts of data enables it to uncover trends in disinformation, track coordinated inauthentic behavior, and expose propaganda efforts. These capabilities can help journalists safeguard the integrity of the news industry and strengthen media credibility.
The Ethics of AI in Journalism
Belen Lopez Garrido, Editorial Manager at the European Broadcasting Union, emphasized that trust is paramount in journalism. She advocated for legal frameworks to regulate AI usage in newsrooms, arguing that establishing clear guidelines would reduce the likelihood of AI-generated misinformation eroding public confidence.
To maintain credibility, the session’s panelists stated that journalists must develop AI literacy and critical thinking skills. In the Netherlands, student journalists are already being trained to analyze digital content, examining its origins and authenticity. Such initiatives could be expanded globally to equip the next generation of journalists with the tools needed to discern AI-generated misinformation from legitimate reporting.
Lopez Garrido said journalists must also be mindful of AI overuse. While AI can enhance efficiency, excessive reliance on it could diminish journalistic integrity. Newsrooms should prioritize human oversight, ensuring that AI-generated content meets ethical standards and aligns with journalistic values. Transparency about AI’s role in news production is crucial: audiences should know when and how AI has been used in reporting.
The debate’s key takeaway: as AI becomes a staple in journalism, its impact will depend on how the industry adapts. AI is neither inherently good nor bad. It is a tool, and its consequences rest on how it is wielded. Journalists must embrace AI with caution, responsibility, and an unwavering commitment to truth. By fostering transparency, developing AI literacy, and prioritizing ethical reporting, journalists can ensure that AI serves as an ally rather than a threat to democracy and media integrity.