ChatGPT-4: new version has old problems

OpenAI announced an update to ChatGPT this week. Subscribers to the tool can now use GPT-4, which brings a series of new capabilities, including image input and a larger knowledge base. However, despite the impressive results, OpenAI stated that GPT-4 can still be flawed and can still "hallucinate" when handling complex concepts. The company's recommendation is not to use the platform in high-risk contexts.

In a statement, the company acknowledged that GPT-4 is not completely reliable. According to OpenAI itself, GPT-4 suffers from reasoning problems, is still quite naive, and fails on difficult problems. This reinforces the need for humans to treat the machine's output with attention and caution.


"As AI systems become more prevalent, achieving high degrees of reliability in these interventions will become increasingly critical. For now, it is essential to complement these limitations with deployment-time safety techniques, such as monitoring for abuse," the company said.

GPT-4 exam results (image: OpenAI, translated with Google Translate)

GPT-4 performs well, but problems persist

Despite acknowledging the system's internal problems, the company emphasized that the tool performs well. In addition, OpenAI hired a group of professionals to test the AI critically and send feedback for system improvements.


According to internal tests, GPT-4 is 40% more likely to produce factual responses than the previous version. OpenAI also said the model is 82% less likely than GPT-3.5 to respond to requests for disallowed content. For sensitive requests, such as medical advice, self-harm, or confidential topics, the tool's compliance with the company's policies improved by 29%. Together, these changes reduce the chances of ChatGPT being used in harmful ways.

