‘We don’t know what’s real anymore’
Back in the distant days of mid-to-late 2022, it was still easy to recognize images generated by “AI”: just count the fingers! But development is moving fast, and you have to be increasingly careful not to be fooled by “AI”. Often, the product of a so-called artificial intelligence is clearly labelled as such, but that is obviously not a watertight or universally applied standard. It is not for nothing that companies are working hard to develop software that can recognize “AI”-generated media.
“What we have seen is just the tip of the iceberg.”
Besides the visual oddities, there are still plenty of ways in which “AI” gives itself away for now. “AI” does not do what it says on the tin: it does not think. It simply calculates the most likely next step based on its training data. The fully “AI”-generated Seinfeld clone on Twitch is a good example. Both the plot and the dialogue are complete non sequiturs. In one respect the series did resemble the real world: it was immensely popular, until “AI” Jerry Seinfeld turned out to be transphobic.
A recent image showing a burning Pentagon, or at least a fire near the Pentagon, was also quickly exposed as fake, despite being widely shared by blue-checkmark accounts. But according to Truepic CEO Jeffrey McGregor, this is only the beginning. He expects the share of “AI”-generated content on social media to increase sharply, something he says we are simply not prepared for.
McGregor’s company wants to tackle this problem. With its Truepic Lens, Truepic claims to be able to label media at the moment it is produced. Data such as the date, time, location and the device used is recorded in the media file along with a kind of digital signature. That record can be read out at any time and will indicate, on request, whether the file was created organically or with the help of “AI”.
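Truepic has not published the internals of that scheme here, but the general idea of binding capture metadata to a file with a signature can be sketched roughly as follows. This is a minimal illustration in Python, not Truepic’s actual format; the field names and the use of an Ed25519 key pair are assumptions made for the example.

```python
# Minimal sketch of signed capture metadata (illustrative only; not Truepic's
# actual format). Uses an Ed25519 key pair from the "cryptography" package.
import json, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_capture(image_bytes: bytes, metadata: dict, private_key) -> dict:
    """Bind capture metadata to the image by signing both together."""
    payload = json.dumps({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(payload)}

def verify_capture(image_bytes: bytes, record: dict, public_key) -> bool:
    """Check the signature and that the image still matches the signed hash."""
    try:
        public_key.verify(record["signature"], record["payload"])
    except Exception:
        return False
    claimed = json.loads(record["payload"])
    return claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest()

# Example usage with a freshly generated key pair
key = ed25519.Ed25519PrivateKey.generate()
record = sign_capture(b"...raw image data...",
                      {"device": "camera-x", "time": "2023-05-22T10:14:00Z",
                       "location": "38.87,-77.05"}, key)
print(verify_capture(b"...raw image data...", record, key.public_key()))  # True
print(verify_capture(b"tampered bytes", record, key.public_key()))        # False
```

The point of signing the image hash together with the metadata is that any later change to either the pixels or the claimed date, time or device invalidates the signature.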
“When everything can be faked…”
“Then everything you see can also be fake.” It’s a simple truth, but McGregor has a point in raising it. Human society runs on trust. Fortunately, for now we mainly live in the real world (assuming the Matrix is not a documentary), where your eyes are not so easily deceived. But the current proliferation of “AI” content will inevitably breed distrust as it becomes more prevalent. People who genuinely have something to answer for can dismiss incriminating material as fake; try proving otherwise. Conversely, innocent people can be harmed by false claims backed up by “AI”-generated material.
According to McGregor, countering online disinformation is therefore explicitly Truepic’s mission. By its own account, the company receives a lot of attention from NGOs, media companies and insurance companies, all of which have an obvious interest in finding out the truth behind a claim.
Prevention or cure?
A real arms race is in the making, one that could have serious consequences in the near future. Companies like Truepic are committed to stopping disinformation at the source. If it were up to them, all media would carry a label clearly indicating whether it was made with “AI” or not, making online abuse of “AI” content impossible. Provided, of course, that everyone plays along.
The Coalition for Content Provenance and Authenticity (C2PA) is committed to exactly that. Since its foundation in 2021, through the merger of two similar projects from Adobe and Microsoft, the coalition has drawn up guidelines and developed specialist tools. These tools allow content creators to watermark their creations, so that consumers of media carrying such a watermark know where they stand. However, this requires the cooperation of content creators.
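To make that limitation concrete, here is a rough sketch, purely illustrative and not based on any real C2PA tooling, of what a consumer-side check of such opt-in labels can and cannot conclude; the manifest structure and field names are invented for the example.

```python
# Purely illustrative sketch (not real C2PA tooling): what an opt-in provenance
# label can and cannot tell a consumer. The manifest fields are invented here.
from typing import Optional

def describe_provenance(manifest: Optional[dict]) -> str:
    """`manifest` stands in for an embedded, already signature-verified
    provenance record; None means the file carries no such record."""
    if manifest is None:
        # The fake-Pentagon-image case: with nothing to verify, the checker
        # can only report "unknown", never "proven fake".
        return "no provenance data: authenticity unknown"
    issuer = manifest.get("issuer", "unknown issuer")
    if manifest.get("ai_generated"):
        return f"labelled as AI-generated by {issuer}"
    return f"labelled as organically captured by {issuer}"

print(describe_provenance({"issuer": "ExampleCam", "ai_generated": False}))
print(describe_provenance({"issuer": "ExampleImageGen", "ai_generated": True}))
print(describe_provenance(None))
```

An unsigned file simply comes back as “unknown”: the label scheme can vouch for cooperative creators, but it can never prove that an unlabelled image is fake.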
And so such an approach is insufficient on its own. As the Pentagon image made clear, there are people who, at best, do not care whether an “AI”-generated image causes unrest. It will be essential in the future to be able to test any piece of media at any time. Companies already offer such detection software, but it is not 100% accurate.
Not watertight for now
Given the rapidly growing number of companies that use “AI” technology in one way or another, keeping up with the latest developments is difficult. “This is about impact mitigation, not elimination,” Hany Farid, a digital forensics expert at the University of California, Berkeley, told CNN.
OpenAI, developer of DALL-E and ChatGPT, also admitted earlier this year that its own efforts to develop technology that can recognize “AI” content are far from perfect. Several companies and governments indicate that steps must be taken quickly to stay ahead of developments in the “AI” field. So work is certainly being done, but is it enough? Despite the recent letter, developments around “AI” only seem to be accelerating. Until a solid way to recognize “AI”-generated content is on the table, it is probably wise to take content shared on social media with a grain of salt. Insofar as we weren’t doing so already.