I’ve become increasingly concerned by how readily my fellow humans have adopted and accepted artificial intelligence programs that emulate human creativity and output. It’s here, they say collectively. There’s no stopping it, so we might as well play around with the technology and have fun. We now have programs that can write lyrics, poems and essays, churn out songs, emulate famous singing voices and create photography and artwork that so closely resemble manmade works that many people can’t tell the authentic pieces from the rendered ones.
Indeed, German artist Boris Eldagsen fooled judges when he submitted an AI-generated image to the Sony World Photography Awards and later admitted the picture was not a real photograph.
And a band named AISIS recently released a record’s worth of songs in the manner of the real British rock band Oasis, using a computer-generated voice of singer Liam Gallagher. Having been an Oasis fan since the early 1990s, I could definitely tell the difference between the computer voice and Gallagher’s, but the singer himself said the project was “mad as fuck” (whatever that means) and that he sounded “mega” on the recording. I take that to mean “good.”
While AI-generated artwork, poetry and music are in their infancy, the music industry has been using computers to “fix” defects in live vocal and instrumental performances for the last two decades, starting with the advent of Auto-Tune in 1997, first made famous by Cher’s 1998 song, “Believe.” Since the early 2000s, music producers have used a tool called quantization to “line up” drum hits and musical notes along a grid so that the instrumentation perfectly matches the beat. Used too heavily, Auto-Tune can make vocal performances sound robotic or otherworldly; even used conservatively, it gives voices a bizarre-sounding “sheen” that does not exist naturally. Likewise, quantization strips the nuance from live instrumentation. When used together, as is almost always the case in studio recordings in this day and age, the music comes out sounding too perfect, too sterile, too sanitized.
Modern music production tools used in the last couple of decades aren’t exactly AI, but they prefigured what we are seeing today: human creativity and achievement either being improved or replaced by AI. ChatGPT can generate high-school-level essays and poems on nearly any topic imaginable. Programs like Midjourney and others can render extremely detailed and fantastical landscapes or “portraits” of celebrities. And elsewhere in the AI-sphere, pop songs imitating the voices of Drake and The Weeknd can be fashioned out of nothing more than prompts and code. One of the songs in question, “Heart On My Sleeve” — one struggles to imagine a less imaginative song title — fooled millions of listeners and was eventually removed from all streaming services by Universal Music Group when word spread that it was a fake.
For now, humans are still behind the wheel of all this faux-creativity, but in the future, given the rather loaded implications of artificial intelligence, this will surely not always be the case.
As a musician, songwriter and fan since before digital music production, when every vocal performance heard on the radio came from a natural recording and vocalists simply stood in the booth and sang their parts until they got them right, I am particularly interested in the use of computers in music. It’s my contention that even before AI veered us closer to the precipice, something valuable had already been lost.
The mainstream public often can’t tell when a song is excessively Auto-Tuned, thanks to more than two decades of conditioning, or else listeners simply don’t care. In general, so long as there is a beat — apparently any beat, no matter how many other songs have used the same one — an ultra-repetitive melody and vapid lyrics, the public will happily consume it. And now it is nearly impossible to find a studio recording, in any genre, that isn’t quantized to the hilt and soaked in Auto-Tune.
Further, because many, if not most, mainstream pop songs use very simple, repetitive melodies and beats, people can’t tell the difference between manmade and computer-made songs either.
We teeter at the brink of a fully deceptive world, where truth, creativity and authenticity crumble and we can no longer trust our senses.
“In the age of AI Oasis, there’s no point being ordinary,” NME
This quote was a rare moment of self-awareness in an article I otherwise found severely short-sighted in its view that, while AI may be able to make pop music at least as good as its human counterparts and may even take over the streaming industry, there will always be space for manmade musical innovation.
Writer Mark Beaumont imagined a few pathways toward human flourishing in this area. Volume-based streaming services would become a very large collection of bland human- and computer-generated pop, catering to people who don’t care which is which, while the “real” songwriters would be free to rise above it and make better music:
The established platforms, then, could shrug, tacitly embrace the fact that their sites have become a hyper-speed circle-jerk of robots making music for robots to listen to and eye up their fifth superyacht. If most humans decide they’re just as happy listening to AI music as human music then the streaming dream will have fulfilled its foundational purpose to provide a truly limitless source of cheap, characterless background muzak ringing out across every night bus in the land.
Another potential scenario in this new landscape, according to Beaumont, is that listeners might grow weary of AI content; but if users already can’t tell the difference between computer-generated music and human-created works, I find this implausible. Alternatively, record labels might eventually give “preferential treatment” to real artists. I would hope so; otherwise the music industry as we know it would cease to exist.
Beaumont’s rosy grand finale:
In either scenario, one thing actually rises in value: human creativity, and all the inventiveness, imagination, unpredictability and star power it entails. …
If Spotify goes full-on AI, alternative platforms will spring up championing nothing but human music, where the most innovative artists reimagining what music can be will flourish above more formulaic fare that computers are doing better elsewhere. …
Only the most visionary will survive. Music is about to enter a magnificent new phase of man versus machine – it’s time to blow their hive-minds.
While admirable, the optimism here is misplaced and premature.
Judging by how accepting, acquiescent and complacent everyone seems to be about AI, in a man versus machine scenario, the machines — and the machine — will most likely win, and there isn’t a scenario, financially or creatively, in which humans come out on top.
Creativity wont pay in an ai world, if it can be knocked out in cheap mass production line fashion by (effectively) robots. As time moves on the human input level required to create these things will get less and less too. It will be pushed by the execs at top as it will mean less outlaying on labour an maximising profits, which is basically all ai will ultimately benefit… top end profit!
Thomas Hodge, Facebook comment on the NME article
And as for creativity itself: if AI is currently able to pull off assembly-line pop music as well as or better than the actual human creators of said pop, who’s to say it won’t eventually be able to replicate music on the level of “Dark Side of the Moon,” “Mellon Collie and the Infinite Sadness,” “Are You Experienced” or Bach’s Brandenburg Concertos?
How does human creativity rise in value if AI becomes capable — and it will — of being just as innovative and inventive as we are? The Beatles, fully human as they were, created new genres of music. Who’s to say AI won’t also fashion new genres of music and push the boundaries harder and faster than humans, in all of our tinkering slowness, ever could?
I worry for our creative future, especially when so few people, hardly anyone as far as I can tell, are voicing the kinds of concerns I’m raising here. It is true that AI currently needs human beings to input prompts and tell it what to do, but this will surely not always be the case. And what then? Self-sustaining AI uploading its own music to the streaming services, or its own rendered artwork or photography to galleries? Picasso V6.1 Build 10.4.874040a becoming the first AI program to get a plaque in the Louvre or MoMA alongside the greatest human pieces of all time? It’s all light, fun and games now, but this slope is slippery and steep, and it’s probably already too late to pull back the reins. I have a grim feeling that AI will win, and in our acquiescence, we’ll let it.