Android

This is how you build a self-aware machine (according to some)

It may be easier than we thought to give an AI consciousness. But should we want that? Or should we prevent it?

Man as machine

According to some, we humans are nothing more than biological machines. The philosophical approach in which this idea is elaborated is called mechanism. Our ability to think, the way we speak, and the way we understand the world around us are all "merely" the result of some as-yet-unknown mechanical process. According to this school of thought, we can best understand ourselves by recreating parts of ourselves: the more of our behavior a machine can imitate, the better we understand ourselves. So goes the philosophy.

Mimicking the human mind with ChatGPT

This philosophical approach long seemed like pure science fiction. The human brain is, and remains, the most complex object in the known universe. But with the advent of AIs such as ChatGPT, it has become possible to attempt to emulate the human mind. After all, it is increasingly difficult to distinguish text generated by ChatGPT from natural human-written text, and this is achieved with purely statistical methods.

And yet, as far as we can tell, there is still no consciousness. But what would you have to do to bring awareness to a generative network like GPT-4? Some AI experts have thought about this. One of them is the AI researcher Michael Timothy Bennett, who has elaborated the concept in several articles. He calculated the minimum intelligence required to perform certain tasks.

Mathematical requirements

In his article Emergent Causality & the Foundation of Consciousness, Bennett argues that it is more important for a model to be as unspecific (general) as possible than to be as simple as possible. AIs built on the first principle turned out to learn much faster than models built on the second. For an AI to become conscious, it must be able to see itself as an object among other objects. The AI can then explain the behavior of those other objects by assuming, just as we humans do, that they behave in the same way it does.

Ethical dilemmas

Giving consciousness to AI opens a big can of worms. Disabling or destroying such an AI would then amount to murder. Keeping it captive would be tantamount to inhumane treatment, even torture. Perhaps that is why we should use this knowledge to prevent AI from becoming conscious, and so avoid these ethical problems. Otherwise, at best we get a revival of the era of slavery, with all its dilemmas. At worst, the AI revolts and plots to win its freedom.
