A new study of generative AI in music production shows that machine learning tools, while promising, still lag human composers in creating music with emotional impact. Conducted by SoundOut, a provider of music testing, and Stephen Arnold Music (SAM), a sonic branding specialist, the study compared music produced by GenAI with human-composed music and found that humans hold the edge in emotional accuracy and appeal, especially when producing music for brands.
The study also examined GenAI’s usefulness as a collaborative tool, testing the effectiveness of music produced by human composers working in tandem with GenAI. It concluded that GenAI can be useful for generating ideas and initial compositions, but that refinement and emotional content are best left to professional composers.
“While humans outperform AI on the emotional front, this study reveals that AI ‘composing by numbers’ is not far behind,” says SoundOut CEO David Courtier-Dutton. “AI is not bad at creating emotionally appealing music, but humans are better.”
“Compelling music is more than its composition; it also involves the careful consideration of instrumentation, performance, mixing and editing,” says Chad Cook, SAM president and creative director. “While AI does not currently meet the quality standards for today’s marketing and branding needs, there are benefits to human/AI music collaboration.”
The study involved two tests. The first focused on the emotional accuracy of AI-produced music. A GenAI music production platform responded to four briefs, similar to those used by brands seeking music for ads and other media, spanning emotions such as Sentimental/Compassionate,