Duke Nukem publisher shows remorse after fuss about AI images in promo video

Duke Nukem is a franchise that was already in decline when I started gaming twenty years ago. Yet every now and then a new release comes out bearing the name of the world’s oldest boomer-shooter franchise. Usually nobody pays it much attention anymore. This time was different: a brief announcement that some old Duke Nukem games were being re-released for an Evercade handheld system was quickly pilloried.

J’Accuse…!

As it turned out, the footage used in the promo was clearly AI-generated. The images, since deleted, were painfully generic, though admittedly that suits Duke. Partly for that reason, attentive viewers immediately sounded the alarm. Add to that the amorphous, repetitive structures, the bizarre anatomy, and a random patch of gray between Duke’s right arm and the firearm he holds in that hand, and you wonder how the image was ever published.

“Hybrid artist” Oskar Manuel soon turned out to be responsible for this crime against humanity. His portfolio consists almost entirely of “AI”-generated art, so as a company you cannot claim to have been ignorant of the nature of the “artwork” you purchased. Whoever was ultimately responsible for the project probably only looked at the price tag.

Still, Evercade’s CEO made an effort to justify the choice of this “very talented” and “award-winning” artist, while admitting that the commissioned work did not meet the expectations and standards that fans demand. Whatever that may mean.

But why does it matter?

When a company says “we made this” when all it has really done is feed a prompt to an “AI”, it makes a bad impression. In the future, people may simply assume that “AI” played a role in producing all the media they consume. But that day has not yet come, so you can expect a company to be transparent when a product has not been created in the usual way.

But of course it goes further than that. The so-called artificial intelligence models used to produce visual material are usually referred to as “generative AI models” because, in a sense, they “generate” visual material. However, they do so in much the same way that I produced the answers to many a test question during my high school career: by looking at someone else’s work. Which raises the question: what about copyright and “AI”?

Artists are suing

So-called “generative AI models” do nothing more than consume works of art and shuffle pixels based on that input until the desired output appears. It gets better still: companies such as Stability AI and Midjourney used artworks without the permission of the artists in question to “train” their computational models.

Some companies have been smart enough to think about the consequences of haphazardly gathering a dataset. But more often than not, companies simply looted the catalogs of individual artists or of Getty Images. Not too smart. The result: many lawsuits. A number of proactive artists have gone to court to file a so-called class action, a case in which the plaintiffs represent a group and in which, if they win, everyone in that group can potentially share in the damages.

Not the first

This is not about a few ailing art school dropouts. Kelly McKernan is an artist with a distinctive style who makes a living from her own work, selling pieces through commissions and her own website. Prompts along the lines of “generate x in the style of Kelly McKernan” have been entered into “AI” art generators over twelve thousand times. Not every one of those prompts represents a missed sale, of course, but it should be clear that this amounts to a potentially significant loss of income.

The potential loss goes beyond reduced revenue from indirect competition. Nvidia’s recent showcase suggests that a lot of this work may be done by “AI” in the future, and a logical consequence is that there will be far less room for flesh-and-blood artists in the media world. It is not for nothing that members of the writers’ guild in the US are striking to secure a promise that they will not all be replaced by software in short order.

Hopefully the entertainment world is paying attention. The use of “AI” in content production is immediately noticeable, and the average consumer does not appreciate it. It is certain that jobs will disappear, but one can hope that by the next attempt there will at least be rules about the copyright status of training data and of the generated end result.
