Try this: Search your inbox for the word “landscape.” Unless you enjoy discussing your gardening with coworkers, you’ll likely find an assortment of AI-generated newsletters and emails stuffed with lazy language phrases like “AI landscape” or “ever-evolving landscape.” For reference, here’s my inbox:
But these clichés aren’t just in your inbox – they’re probably in your newsroom’s content, too. AI-generated clichés can appear everywhere from broadcast script summaries and social posts to expert quotes in news stories. As productivity tools powered by large language models become more common in newsrooms, AI’s unfortunate fondness for clichés is coming along for the ride. Here’s why that matters and what news organizations can do about it.
How Clichés Can Undermine Newsroom Credibility
For news organizations, clichés can be credibility killers. When readers encounter phrases like “buckle up” or “breakneck speed” in news content, it can trigger skepticism because these expressions sound out of place in serious journalism.
Research from Stanford University found that readers could identify AI-generated content with high accuracy, often citing “formulaic language” and “overused phrases” such as clichés as reasons to question authenticity. When audiences suspect content is AI-written, trust in a news brand can suffer.
Even if reporters avoid using AI tools themselves, news subjects and experts may be using AI to generate their quotes. When an expert uses a tool like ChatGPT to craft a statement, it becomes harder for newsrooms and audiences alike to distinguish real expertise from AI-generated commentary. Worse, AI-generated quotes can contribute to misinformation and reputation sabotage. Recent scandals