
Overwhelmed by AI? We have a solution ready

When it comes to delivering articles or guest contributions, I am the kind of person who likes to take his time and often pushes deadlines to the limit. So it feels all the more unusual that, while writing this column, I have the impression that reality will overtake me within the next few hours. And it seems I am not the only one who feels this way. But let’s start at the beginning.

The starting point of this column is the set of political guidelines currently being discussed in various places and due to be finalized in the coming weeks. On the one hand, there is the federal government’s digital strategy, which, as we have known since the day-long meeting of the coalition committee, is not really one at all. The ten billion euros in funding originally earmarked for young companies by the end of the legislative period have shrunk to a share apparently not even worth mentioning in the coalition compromise.

I can report from experience that funding programs often tie up resources rather than freeing them up; one gets the impression that you have to be able to afford funding in the first place. For all my optimism, I do not expect this to work any better at the federal level in the short term.

Critical voices on the AI Act

On the other hand, the AI Act is being brought over the finish line, accompanied by critical voices that I have occasionally joined. The reason: the broad debate, which rightly takes people’s fears and reservations about the use of AI technologies into account, currently fails to differentiate between the sectors in which AI is to be used. How quickly one can be overtaken by reality in these debates (recall the concern I expressed at the beginning) is shown by the recently published 300-page opinion of the Ethics Council.

That paper took more than two years to prepare. Among other things, the Ethics Council recommends further developing the rules for online platforms in the area of public communication and opinion-forming, particularly with regard to the selection and moderation of content, personalized advertising, and data trading. It also considers oversight and accountability to be possible even without complete transparency, namely when responsibility for the use of such systems is assigned to manufacturers and users, who, in case of doubt, would have to justify why a lack of transparency is acceptable.

You do not have to look at ChatGPT or Midjourney and the examples currently circulating to ask yourself what happens when the “wrong side” gets hold of AI. That is also what the AI Act at EU level should be about: identifying those responsible, liability issues, and the justiciability of “AI damage”. And yes, these are important questions when it comes to building trust in meaningful applications. Not least, Europol also comes into play here, warning against the misuse of AI-based text generation for phishing and disinformation campaigns.

All of which makes it clear that rules are needed. How that can work, however, when one feels overtaken by reality almost every week, remains an open question. We all know how slowly the mills grind at the European level. And speaking as an AI innovator, I can say that the concern that everything is being lumped together remains.

Perhaps it is therefore actually right simply to pause for a moment, as Musk, Wozniak, Harari, and several hundred other signatories demand with their call for a six-month moratorium – that is, a development stop. But how is China to be held to it? Of all countries, the one that has put AI development on its own political (communist) agenda and that – in contrast to almost every other area of life – has granted developers maximum freedom here? There will be no answer to that either.

Transparency is the solution

We are at a crossroads. As individuals, as businesses, as a global society, we have to decide what we want, and where and how AI should help us overcome the big problems and small obstacles of everyday life. In contrast to the Ethics Council, I am of the opinion that transparency will – as so often – be what helps.

The question “Cui bono?” – who benefits? – is a very important factor. The more transparently it can be answered, the easier it becomes to decide whether AI is actually a useful aid. If even a not insignificant share of those supposedly driving AI forward are now in danger of losing the overview, then it is indeed time for a set of rules that curbs abuse without impeding progress. It remains to be hoped that the discussion and adoption of the AI Act will not itself be overtaken by reality left and right.
