UCSD’s Latest Achievement: OpenAI GPT-4 Successfully Passes Turing Test, With 54% Likelihood of Being Identified as Human

GPT-4 passed the Turing test! Through empirical research, the UCSD research team found that humans cannot distinguish GPT-4 from humans. And, 54% of the time, it was judged to be human.

Can GPT-4 pass the Turing test? When a powerful enough model is born, people often use the Turing test to measure the intelligence of this LLM. Recently, researchers from the Department of Cognitive Science at UCSD discovered:

Advertisement

In the Turing test, people can't tell the difference between GPT-4 and humans!

Paper address: pdf/2405.08007

In the Turing test, GPT-4 was judged to be human 54% of the time.

The experimental results show that this is the first time that a system has been empirically passed the test in the “interactive” two-person Turing test.

Advertisement

ResearcherCameron R.Jones 500 volunteers were recruited, and they were divided into 5 roles: 4 evaluators, namely GPT-4, GPT-3.5, ELIZA and humans. The other role “played” the humans themselves, hidden on the other side of the screen. Awaiting the evaluator's findings.

The following is an excerpt from the game. Can you tell which dialog box is human?

Figure 1: Part of a conversation between a human interrogator (green) and a witness (grey)

In fact, among these four conversations, one was with a human witness, and the rest were with artificial intelligence.

The controlled “Turing test” is launched for the first time

There have been many attempts at a Turing test over the past 74 years, but few controlled experiments.

The famous Loebner Award is an annual competition, but from its first launch in 1990 until 2020, no system has passed the test.

A recent large-scale study found that humans can identify a series of LLMs with 60% accuracy during a 2-minute online conversation.

To date, there has been no “controlled” experimental demonstration of any machine that has been put to the test.

To test whether it is possible for people to see through the deception of current AI systems, the researchers conducted a randomized controlled two-person Turing test using GPT-4.

The test requirements are simple, that is, human participants start a 5-minute conversation with humans and artificial intelligence, and determine whether the interlocutor is a human.

As mentioned at the beginning, the researchers evaluated a total of 3 models – GPT-4, GPT-3.5, and ELIZA.

For the first two LLMs, the researchers adapted the best-performing prompts from GPT-4 and GPT-3.5 exploratory studies to instruct the large model how to respond to messages.

Researchers say a core challenge in interpreting the Turing test is that a system's superior performance may arise from two reasons:

– One is to imitate humans brilliantly

– The second is that the judge is too gullible

The latter situation is often called the ELIZA effect, where participants humanize even simple systems.

The ELIZA system is familiar to everyone.

It was a simple rules-based chatbot from the 1960s, but users found themselves attributing human characteristics to it as well.

Therefore, in the experiment, the researchers used ELIZA as a baseline, on the one hand to empirically quantify the ELIZA effect, and on the other hand to perform operational checks in the experiment.

So, what are the specific results of this Turing test?

GPT-4 pass rate 54%

First, let’s look at an important indicator: pass rate.

The researchers defined the pass rate as the proportion of objects in the game that were judged to be human by the evaluators.

The pass rate of GPT-4 is 54%, which exceeds GPT-3.5 (50%) and ELIZA baseline (22%), but is lower than human subjects (67%).

Evaluator passing rate (left) and evaluator trust (right).Each point in the picture on the right represents a game

This result is actually not surprising and is basically consistent with the assumptions at the initial stage of the research.

Before the experiment, the researchers made several assumptions: First, it was assumed that humans' ability to detect GPT-4 would not exceed chance (50%).

To test this, the researchers fit a logistic regression model to predict intercept-only pass rates.

There is no significant difference between the intercept and 0 (z=0.14, p=0.482), which indicates that the pass rate of GPT-4 can be approximately equal to 50%. The Bayesian model predicts a 95% confidence interval for the GPT-4 intercept of -0.24-0.55 (equivalent to a pass rate of 44%-63%).

This means that when humans face GPT-4, the probability of successful discrimination is about the same as tossing a coin!

The results also confirmed the researchers' second and third hypotheses: the human pass rate was significantly higher than the GPT-4 pass rate (z=2.42, p=0.017), and the GPT-4 pass rate was higher than the ELIZA pass rate rate (z=4.11, p<0.001).

However, what is inconsistent with the researchers' experimental assumptions is that the performance of GPT-4 and GPT-3.5 is almost the same (z=0.497, p=0.619).

In the area of ​​”pretending to be human”, it's a tie.

Also unexpected was that participants who reported a greater understanding of how LLMs work were not more successful in identifying LLMs (z = -0.023, p = 0.869).

In an exploratory analysis, the researchers found that age had a negative impact on interrogator accuracy (z = -2.24, p = 0.025), but other demographic variables including gender, education level, and frequency of interaction with the chatbot There is no significant impact.

Why is this the result? Let us analyze the reasons behind it.

To gain insight into what exactly influences evaluators’ decisions, we categorized the strategies evaluators used and the reasons they provided for their decisions.

36% of evaluators engage in small talk with the person being evaluated, asking about personal details or daily activities.

The most commonly used strategies in the second and third categories were social and emotional (25%) – for example, asking about the assessee’s opinions and experiences and observing whether they have a sense of humor.

Others focus on knowledge and reasoning (13%), including questions about mathematics, logic and general knowledge.

Asking policy (left) and average accuracy (right)

To compare the effectiveness of each strategy, the researchers calculated the average accuracy using these strategies.

Evaluators are generally closer to the “correct answer” when they ask about human experience (75%), logic and mathematics (78%), or directly “accuse” the other person of artificial intelligence (80%).

In games, artificial intelligence often “shows off” because evaluators feel that they seem to be forcing themselves into roles, either too casually or because they lack personality.

ELIZA, on the other hand, is easily “exposed”, mainly due to unnatural reactions and lack of contextual awareness.

It can be seen that both in terms of the strategies used and the reasons for decision-making by the volunteers who participated in the game, they focused more on language style and socio-emotional factors rather than more traditional intellectual categories (knowledge and reasoning).

This seems to reflect an underlying assumption among the volunteers—that social intelligence is the most inimitable human characteristic for AI.

Turing test passed

How to tell when a system has passed the Turing test?

Turing predicted in his 1950 paper that by the end of the century, AI systems would be able to play the imitation game so well that examiners would have no more than a 70 percent chance of making a correct identification after a five-minute exchange.

This conclusion has led to whether the pass rate of AI can reach 30%, which has become the discussion goal of many researchers.

However, this benchmark seems arbitrary, and it's unclear whether Turing intended it as a definition of success.

In fact, the 50% baseline is more reasonable because it better proves that human reviewers are no better than chance at identifying AI.

This definition is particularly relevant to the goal of discovering whether users can reliably identify other humans during online interactions.

This issue was addressed by using the ELIZA baseline as a performance check in the investigators' pre-disclosure analysis.

Only when the analysis shows that the pass rate of ELIZA is lower than the probability and the pass rate of GPT-4 is not lower than the probability, the model can be judged to have passed the test.

According to this definition, GPT-4 clearly passed this version of the Turing test.

Furthermore, participants’ confidence scores and the basis for their decision-making indicate that they were not just random guesses: GPT-4 has an average human confidence level of 73% (see Figure 2).

Netizen discussion

Some netizens said that if we want to equate the Turing test with intelligence, the Turing test is not a good test. However, the fact that AI can almost always “fool” humans is a bit concerning.

Others said they were skeptical of the test. Because GPT-4 will outperform most humans, it will be easy to tell who is human and who is artificial intelligence.

Researchers said this is indeed a problem we have encountered. For example, GPT-4’s knowledge reserve is “too rich” or it masters too many languages. We explicitly prompt the model to avoid this situation, which is effective to a certain extent.

References:

  • https://x.com/camrobjones/status/1790766472458903926

  • https://x.com/emollick/status/1790877242525942156

Advertisement