Artificial intelligence (AI) may be a current hot topic, but it has been around for some time. Think of spell check, autocorrect, dictated text messages, or voice assistants such as Amazon’s Alexa and Apple’s Siri. In recent years, media companies have been using AI applications to: create written transcripts of audio and video stories; produce closed captioning and subtitles; generate metadata for digital assets; write routine stories, such as earnings reports for small companies or coverage of local sporting events; and even power chatbots that answer routine customer questions or automate rote accounting department functions.
It’s the evolution to generative AI that is the real cause for concern, or at least discussion: programs that can spit out seemingly informed answers to almost any question.
In February, not long after the free test version of ChatGPT was released, I wrote a column about these chatbots and their potential downsides for media. It turns out that I barely scratched the surface. As I wrote then, the key thing to know about these programs is that they’ve been trained by being fed hundreds of billions of words and images. The companies behind the biggest models, including OpenAI, the maker of ChatGPT, have been very secretive about the specific data their programs ingest.
What I now understand is that the real secret to using generative AI programs is in writing the query, the request for output, often called a prompt. The more specific the query, the more useful the response.
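To make that point concrete, here is a minimal sketch of what a vague query versus a specific one might look like when sent to a model programmatically. It assumes the OpenAI Python SDK and an API key in the environment; the model name and both prompts are purely illustrative, not taken from the column.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY set in the environment. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# A vague query leaves the model to guess at length, audience, and angle.
vague = "Write about local sports."

# A specific query pins down the topic, format, length, and tone.
specific = (
    "Write a 150-word recap of a high-school baseball game for a local "
    "news site: final score 5-3, the winning pitcher threw a complete "
    "game, neutral newspaper tone, no invented quotes."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Run side by side, the second prompt reliably produces copy that is closer to publishable, which is the whole point: the specificity lives in the request, not the model.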