Recently, leaders at Gray Television conducted a training session to help employees better understand the applications — and pitfalls — of generative artificial intelligence, or “gen AI.” With many of her colleagues watching, Claire Ferguson, the company’s assistant general counsel, put herself in the crosshairs of a gen AI platform.
“I wanted to sort of scare folks, so I asked AI to write a biography about me,” she says. “I gave it a ton of information about me, which I probably shouldn’t have done, but I wanted it to find my digital footprint.”
Ferguson ultimately asked the gen AI platform to craft a bio about her three times. On each occasion she fed it her full name, her birth year and other personal details. The results?
“In two of the three times I asked it to do it, it said I had died in a car accident the year before,” Ferguson says. Though the facts within the text were “just wholesale made up,” she says the copy read with an authoritative, convincing tone.
Such horror stories illustrate why gen AI cannot completely replace reporters who work for reputable news organizations — at least for now. But that hasn't stopped the technology from infiltrating newsrooms to smooth out workflows for its human overseers. (This very article was made possible with help from gen AI, which was used to transcribe interviews.)
To address the many worms