These areas are particularly affected
After the initial hype on the internet, language models like ChatGPT are finding their way into everyday work. Tasks previously done by humans are being handed over to AI applications – and a US survey shows how companies are cutting jobs and saving money as a result.
The job platform ResumeBuilder surveyed a total of 1,000 managers about their use of ChatGPT. 49 percent are already using the chatbot in their company, and another 30 percent are at least planning to use it.
99 percent of the companies already using ChatGPT report significant savings through the tool. 48 percent claim to have saved more than 50,000 dollars, and eleven percent even more than 100,000 dollars. Views of the language model are positive: 55 percent describe the performance of ChatGPT as “excellent”.
But what tasks does ChatGPT actually take on in the company? 66 percent of respondents who use the language model use it to write code, 58 percent to compose text content. 57 percent use AI assistance in customer support and 52 percent to take meeting minutes.
All of these are tasks that were previously done by people – and so almost half of the companies (48 percent) that rely on ChatGPT have already replaced employees with the chatbot. 32 percent of those surveyed assume that ChatGPT will “definitely” lead to layoffs in the next five years, and 31 percent consider it “likely”.
What the study did not ask about, but what could be relevant in the future, are the jobs newly created for the use of AI. While the tasks mentioned above can indeed be taken over by language models like ChatGPT, someone is still needed to manage the systems – and to keep an eye on their output. ChatGPT, Bard, and Co. are currently demonstrating that they are quite error-prone.
ChatGPT is “incredibly limited”
As OpenAI CEO Sam Altman said in December 2022, “relying on it for something important” would be a mistake.
And although Microsoft has since suggested using the language model in robotics in the future, it strongly advises human oversight: “Given the tendency of large language models (LLMs) to sometimes generate incorrect answers, it is very important […] to ensure the safety of the code under human supervision before running it on a robot.”