ChatGPT does well in the humanities but slips in the exact sciences; understand the test carried out with the Enem exam

According to a test carried out by DeltaFolha, OpenAI's ChatGPT did well on the Enem exam, slipping only in the exact sciences. The AI obtained an average score of 612.3 on the Enem objective tests, surpassing 98.9% of students in human sciences and 95.3% in languages and codes.

The analysis considered the score for each subject area and showed that the artificial intelligence would perform well in the Enem, although its performance in mathematics was low: an average of 443.1 points, below the 527.1 average of human candidates. In human sciences, curiously, the technology came out ahead: in the simulation, the AI averaged 725.3 points, higher than the 523.3 points of real competitors.

Methodology used Enem tests from the last five years

The assessment of ChatGPT was based on the AI's responses to the tests from the last five years, totaling 1,290 questions. The methodology used was Item Response Theory, the mathematical model adopted by the Enem, in which items are calibrated according to parameters of discrimination, difficulty, and the probability of a random correct guess, as reported by DeltaFolha.
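
Under Item Response Theory, the grade is not a simple percentage of correct answers: each item's contribution depends on its calibrated parameters. As a minimal sketch of the idea, the snippet below implements the standard three-parameter logistic (3PL) model that uses exactly the three parameters the article names; the numeric values are illustrative, not Inep's actual calibration.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float) -> float:
    """Three-parameter logistic (3PL) IRT model: probability that a
    candidate with ability theta answers an item correctly, given its
    discrimination (a), difficulty (b), and random-hit floor (c)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: moderate discrimination, average difficulty,
# five alternatives, so the chance of a random hit is 0.2.
print(p_correct(theta=1.0, a=1.2, b=0.0, c=0.2))  # ~0.81
```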

Folha calculated ChatGPT's final grade using Inep's standard analysis: the machine answered each question only once, without prior examples, indicating the alternative it considered correct.
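
The article does not detail how the questions were submitted to the model. Purely as an illustration of a one-attempt, no-examples query, here is a sketch assuming the current openai Python SDK; the model name, prompt wording, and helper function are hypothetical, not DeltaFolha's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_once(question: str, alternatives: dict[str, str]) -> str:
    """Ask the model a multiple-choice question exactly once, with no
    prior examples, and return the letter of the chosen alternative."""
    options = "\n".join(f"{k}) {v}" for k, v in alternatives.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the article does not name the version
        messages=[{
            "role": "user",
            "content": f"{question}\n{options}\nAnswer with a single letter.",
        }],
        temperature=0,  # one question, one deterministic-leaning answer
    )
    return response.choices[0].message.content.strip()[0]
```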

Although it excelled in human sciences and in languages and codes, ChatGPT performed poorly in mathematics, which could be an obstacle to admission to sought-after courses at the country's main federal universities. Even so, with the essay grade added, the AI obtained an average score of 608.7 on the Enem, better than that obtained by 79% of students that year.
