Artificial intelligence is gradually reshaping how local television stations operate—but many newsroom leaders say the technology’s limitations and ethical risks mean it must be handled with care.
According to the latest RTDNA/Newhouse School at Syracuse University Survey, 32.6% of TV news directors say they’re doing something with AI, a modest increase from 26.6% last year. But what “doing something” means varies widely—ranging from basic transcription to exploring tools for content translation and SEO.
Ty Takahashi, news director at WJTV in Jackson, Mississippi, summed up a sentiment about AI products that many broadcasters share: “They’re not real people. They’re aggregators, basically formulating answers and performances based on what human beings have collectively thrown out there… so you always have to double-check the work.”
He recounted an experiment of his own with AI-generated content for his LinkedIn profile. “I picked up at least four or five lies on that,” he said.
The RTDNA/Newhouse survey noted that AI use is more common in larger markets, with 42.9% of stations in the biggest markets reporting some level of adoption, while just 22.9% of stations in the smallest markets are using or testing AI tools.
Tiffany Shannon of WAPT in Jackson, Mississippi, raised the issue of “AI hallucinations”—when the technology outputs believable but incorrect information. “It’s trying to make you happy… but credibility is hard to earn back once you’ve reported something that isn’t true.”
General Manager Chris Larum of WGBC in Meridian, Mississippi, reinforced the idea that AI should support—not supplant—human creativity. “We’ve said that AI can be a great tool, but it should be a tool, not a crutch.”
That aligns with national patterns. The RTDNA/Newhouse data show that many stations are still in an exploratory phase, assessing AI’s potential rather than fully deploying it. Some even reported outright bans, often imposed at the corporate level.
Mallory Bickel, digital media director at Mississippi Radio Group, emphasized that AI’s creative functions—like headline or phrase suggestions—shouldn’t be confused with journalistic rigor. “It’s great to be used for entertaining or creativity… but not to be used for actual fact or information,” she said.
Even among those using AI, the application is often behind the scenes. The survey lists common newsroom uses such as closed captioning, headline generation, transcription, proofreading, and even Spanish audio dubbing for bilingual broadcasts.
Shannon and Larum both shared how AI is gaining traction on the sales side. “We use it to come up with ideas—like how to get through the door with a potential client,” Shannon said. “But you have to put the human element on it. If you don’t, it’s not going to help you at all.”
Larum praised the AI video platform Waymark for quickly producing spec spots. “You can get a spot made for you in less than five minutes,” he said, adding that while useful, “you lose that personality if you rely on a computer for everything.”
Shannon also highlighted time-saving tools in audio production. “We’ve had voiceovers mispronounce words—we can ask AI to mimic the voice and re-say it… it sounds so seamless you’d never know there was an edit.”
Still, across markets and station sizes, caution remains the rule. Many AI workflows require human oversight, and newsroom leaders warn against letting automation override journalistic judgment.
“It’s a valuable tool. It’s meant to make our lives easier,” Takahashi said. “But it’s not meant to replace us.”