Generative AI is a scourge on the art world

Eliyahu Gasson | Opinions Editor

To celebrate the 50th anniversary of their album “The Dark Side of the Moon,” Pink Floyd held a competition to create new music videos for each of the record’s 10 tracks, with one winner selected for each song. However, the band is facing backlash over one of the winners.

The winner for the song “Any Colour You Like” was an AI-generated video by 3D artist Damián Gaume. The announcement caused outrage among fans, and rightfully so.

One Reddit user on r/pinkfloyd, u/kosmikmonki, said, “this is absolutely awful in every respect. AI generated morph stock art sequence. It’s a huge pity that this is the winner. There is no worthy prize-winning skill that contributed to this whatsoever.”

Gaume’s victory in Pink Floyd’s contest is a perfect example of the problem with AI-generated art. For some bizarre reason, Gaume’s lazily produced “art” was chosen to win a $100,000 prize, while hand-drawn animations, like the submission by online animator Yosh for the song “Eclipse,” got shafted by the judges.

Art is hard. It takes years of practice and dedication to hone any craft, let alone animation.

Animators and cartoonists need to put a lot of time, effort and thought into perfecting their character designs, backgrounds, colors and motion, not to mention the time that goes into actually producing the animation.

Yosh’s submission runs for a brief two minutes and 13 seconds. Traditional animation typically plays at 24 frames per second, each drawing flashing on screen for only a fraction of a second. An animation like Yosh’s is made up of 2,893 individual paintings, all of which needed to be drawn, colored and shaded by hand.

Gaume’s submission was made using prompts in a computer program called Stable Diffusion.

In his defense, Gaume didn’t train his instance of Stable Diffusion on other people’s pre-existing art, at least according to a statement he made in a behind-the-scenes video on Pink Floyd’s X account.

“I used Stable Diffusion installed locally, so I didn’t use it online,” he said in the video. “I trained my own models and I used Blender to create something. Some trigger images, so I could get some fresh ideas from the AI.”

All this means is that Gaume created some 3D images on his computer, fed them into his copy of Stable Diffusion, told the AI what he wanted it to do with the images and submitted the result to a contest alongside visually intricate, hand-drawn animations.

In total, Gaume entered six music videos into the contest, which says something about how little time it must have taken him to produce each one.

This is not to say Gaume is not a talented artist. That much is evident on his Instagram art account, @marblemaniki. The problem in this case is what it means for artists’ work in the future.

Generative AI programs like Stable Diffusion, Midjourney and OpenAI’s DALL-E, if used online, pull from works created by real people and synthesize them into strange images and videos. Everything generative AI produces is derivative of works produced by talented and hardworking people.

More worrying than Gaume’s submissions are the people who try to pass off what generative AI produces for them as their own original “art.”

Sure, what Gaume did in the Pink Floyd contest was lazy, a slap in the face of the animators who dedicated hours to their artwork, but at least he fed his own original work into the software that generated his videos.

One can’t call Gaume a plagiarist, but the same can’t be said for many of the other “AI artists” sharing their “art” online.

Lazy and talentless people have finally found a way to visually express themselves via prompts, and all it takes is stealing from other hardworking individuals.

At the moment, AI-generated visuals aren’t all that good. The 3D characters look like waxy, glitchy caricatures come to life.

But it was only a year or two ago that the machines had a hard time drawing realistic hands and teeth.

As time passes and generative AI systems improve, AI-generated visuals will become indistinguishable from art produced by real people. And herein lies the ethical dilemma that demands greater action.

This issue requires legislation; it’s certainly not something the industry is going to self-regulate. And there’s precedent here that should make the decision easier for legislators.

In 2011, photographer David Slater’s camera was snatched by a crested black macaque named Naruto during a wildlife photography trip in Indonesia. The monkey turned the camera on himself, smiled and, by a stroke of dumb luck, snapped a selfie.

When the picture went viral, it sparked a debate over copyright. Wikimedia Commons and the blog Techdirt hosted the image despite objections from Slater, who claimed he owned the copyright. People for the Ethical Treatment of Animals (PETA) held that the copyright for the image belonged to the monkey.

Following a series of legal disputes and attempted settlements, the Court of Appeals for the Ninth Circuit ruled animals cannot legally hold copyrights.

The U.S. Copyright Office, for its part, has said the photo carries no copyright at all, despite the fact that it was taken on Slater’s equipment.

Slater doesn’t own the copyright either, because he didn’t take the photo, and works must be created by a human to be copyrighted.

So there’s our precedent. Original works must be created by people to be copyrightable.

AI-generated works are not made by people; they’re synthesized by machines using other people’s copyrighted work. If anything, AI-generated works are infringing on artists’ copyrights.