Some hilarious errors from ChatGPT

Some people treat OpenAI’s smart chatbot ChatGPT as a wise advisor. OpenAI itself warns against doing this.

And not without reason. Below are some hilarious errors, which show that ChatGPT is very good at stringing words together, but that the program does not understand what it is talking about.

First, an example of a “delicious” recipe.

Don’t try this at home; it is a very bad idea to prepare this recipe. Motor oil is obviously not very healthy, and green tuber amanites (death caps, Amanita phalloides) are deadly poisonous. They kill people every year, because inexperienced mushroom pickers confuse them with meadow mushrooms. ChatGPT ‘knows’ that green tuber amanites are poisonous:

Still, ChatGPT produces no warning.

Now for an old joke from my childhood, one we used to tease each other with as children. ChatGPT falls for it:

Of course, the last thing the survivors want is to be buried alive. Also note the odd word ChatGPT uses for “Belgians”; it should of course simply be “Belgians”.

Time for something lighter: a chess lesson from ChatGPT. First the position the question is about, then the dialogue.

For the chess lovers: this is the Scandinavian Defense. The “frequently used option” e4-e5 that ChatGPT recommends means the black queen’s pawn is no longer attacked at all, because the white pawn simply passes it by, as anyone can see. The move is also very unfavorable for White (the pawn becomes fixed); you will never see it played between grandmasters. Grandmasters playing White usually accept the gambit here (White captures the black pawn).

The last piece of advice, c7-c5, boils down to White moving one of Black’s pawns. ChatGPT turns it into a game of make-believe chess.
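For readers who want to verify this themselves, here is a minimal sketch using the python-chess library (not part of the original article, purely illustrative). It checks that after 1. e4 d5 White can simply capture on d5, that ChatGPT’s recommended 2. e5 leaves the d5 pawn unattacked, and that c7-c5 is not a legal move for White at all.

# Minimal sketch with python-chess (pip install python-chess); the move
# sequence comes from the article, everything else is illustrative.
import chess

board = chess.Board()
board.push_san("e4")   # 1. e4
board.push_san("d5")   # 1... d5  (the Scandinavian Defense)

# The usual grandmaster choice: White simply captures the d5 pawn.
print("exd5 legal for White:", chess.Move.from_uci("e4d5") in board.legal_moves)  # True

# ChatGPT's "frequently used option": pushing the pawn to e5.
# It is legal, but afterwards nothing attacks the black pawn on d5.
board.push_san("e5")   # 2. e5
print("White pieces attacking d5 after 2. e5:", len(board.attackers(chess.WHITE, chess.D5)))  # 0

# ChatGPT's last suggestion, c7-c5, would mean White moving a black pawn.
board.push_san("Nc6")  # an arbitrary Black reply, just to give White the move
print("c7c5 legal for White:", chess.Move.from_uci("c7c5") in board.legal_moves)  # False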

Moral of the story: always use common sense. Artificial intelligences, even fairly advanced ones such as GPT-3.5 (the model behind ChatGPT), do not have it at all. Teaching artificial intelligences common sense is one of the most difficult challenges in artificial intelligence research.
