GPT-3, the popular AI-powered tool, can reason about as well as college undergraduate students, scientists have found.

The artificial intelligence large language model (LLM) was asked to solve reasoning problems typical of intelligence tests and of standardised tests such as the SAT, which colleges and universities in the US and other countries use to make admissions decisions.

The researchers from the University of California, Los Angeles (UCLA), asked GPT-3 to predict the next shape in a complicated arrangement of shapes. They also asked the AI to answer SAT analogy questions, while ensuring it had never encountered these questions before.

They also asked 40 UCLA undergraduate students to solve the same problems.

In the shape prediction test, GPT-3 solved 80 per cent of the problems correctly, above the humans' average score of just below 60 per cent and within the range of their highest scores.

"Surprisingly, not only did GPT-3 do about as well as humans but it made similar mistakes as well," said UCLA psychology professor Hongjing Lu, senior author of the study, published in the journal Nature Human Behaviour.

In solving SAT analogies, the AI tool performed better than the humans' average score. Analogical reasoning is the ability to solve never-encountered problems by comparing them to familiar ones and extending those solutions to the new ones.

The questions asked test-takers to select pairs of words that share the same type of relationship. For example, in the problem "'Love' is to 'hate' as 'rich' is to which word?", the solution would be "poor".

However, in solving analogies based on short stories, the AI did less well than the students. These problems involved reading one passage and then identifying a different story that conveyed the same meaning.

"Language learning models are just trying to do word prediction so we're surprised they can do reasoning," Lu said. "Over the past two years, the technology has taken a big jump from its previous incarnations."

Without access to GPT-3's inner workings, which are guarded by its creator, OpenAI, the researchers said they could not be sure how its reasoning abilities work, or whether LLMs are actually beginning to "think" like humans or are doing something entirely different that merely mimics human thought.

This, they said, is what they hope to explore.

"GPT-3 might be kind of thinking like a human. But on the other hand, people did not learn by ingesting the entire internet, so the training method is completely different.

"We'd like to know if it's really doing it the way people do, or if it's something brand new - a real artificial intelligence - which would be amazing in its own right," said UCLA psychology professor Keith Holyoak, a co-author of the study.
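For readers curious what posing such an analogy question to GPT-3 might look like in practice, here is a minimal sketch using OpenAI's legacy completions API (the interface available for GPT-3-family models in the openai Python library before version 1.0). The prompt wording, model name, and settings are illustrative assumptions, not the study's actual protocol, which the article does not detail.

```python
# Illustrative only: posing an SAT-style analogy question to a
# GPT-3-family model via OpenAI's legacy completions API (openai<1.0).
# The prompt and parameters are assumptions, not the researchers' setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

prompt = (
    "Complete the analogy with a single word.\n"
    "'Love' is to 'hate' as 'rich' is to:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt=prompt,
    max_tokens=5,
    temperature=0,  # deterministic output, suitable for a test-like setting
)

print(response.choices[0].text.strip())  # expected answer: "poor"
```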