In the future, AI systems will increasingly support us in everyday life. That is hardly surprising: machines often handle certain tasks more reliably and quickly than humans. It is therefore immensely important that such systems are secure and difficult to manipulate. After all, nobody wants their safety to depend on an artificial intelligence that suddenly makes wrong decisions.
A system that still struggles with such errors is CLIP. CLIP is still in development at OpenAI, a company that has already gained considerable experience in the field of AI. The twist: a simple handwritten label is enough to make the system mistake one object for another in a matter of seconds. But see for yourself:
The reason lies in the approach that CLIP uses: the system is meant to learn to identify objects on its own. To that end it is trained on a dataset of 400 million image-text pairs, and in the process "multimodal neurons" are meant to emerge. According to OpenAI, these are individual components of the neural network that respond not only to photographs but also to sketches, cartoons and the associated written text.
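The core idea behind this training approach can be sketched in a few lines: an image and a set of text prompts are each mapped to embedding vectors, and the label whose text embedding lies closest to the image embedding wins. The embedding values below are made up purely for illustration; a real system would obtain them from CLIP's image and text encoders.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding of a photo of an apple (illustrative values only).
image_embedding = np.array([0.9, 0.1, 0.2])

# Hypothetical embeddings for candidate text prompts ("a photo of a ...").
text_embeddings = {
    "apple":    np.array([0.8, 0.2, 0.1]),
    "iPod":     np.array([0.1, 0.9, 0.3]),
    "chainsaw": np.array([0.2, 0.3, 0.9]),
}

# Zero-shot classification: pick the label whose text embedding
# is most similar to the image embedding.
scores = {label: cosine_similarity(image_embedding, vec)
          for label, vec in text_embeddings.items()}
prediction = max(scores, key=scores.get)
print(prediction)  # → apple
```

This also hints at why handwritten labels can fool the system: because text and images share one embedding space, prominent written words in a photo can pull the image embedding toward the wrong text prompt.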
In this way, the algorithm could learn, for example, to interpret content rather than merely identify objects, which could eventually lead to even better artificial intelligence. But there is still a long way to go: at the current stage of development, a few dollar signs scribbled on a chainsaw are enough to mislead the system into seeing a piggy bank.
As with other systems, bias is also a major problem with CLIP. The algorithm associates certain words with certain attributes: the Middle East, for example, is linked to terrorism, and dark-skinned people are associated with gorillas. The researchers still have work to do here.
However well AI systems do our work for us today, they can still be easily misled in many ways. Developing algorithms that are fair, intelligent and efficient at the same time remains a major challenge. Once we get there, life will become a little easier.
Via The Verge