News
The latest research on the Turing Test from scholars at the University of California at San Diego shows that OpenAI's latest large language model, GPT-4.5, can fool humans into thinking that the ...
These are areas the Turing Test does not assess. Lack of self-awareness: Even if GPT‑4.5 fools 73% of interrogators, it remains an algorithmic aggregator of tokens with no subjective experience ...
Cameron Jones and Benjamin Bergen from the University of California, San Diego, have gathered for the first time empirical evidence that OpenAI’s GPT-4.5, a sophisticated large language model (LLM), ...
The results of this updated Turing test were pretty telling, too. When GPT-4.5 was instructed to adopt a persona, like a pop-culture-savvy young adult, it fooled participants 73 percent of the time.
GPT-4.5 is the frontrunner in this study, but Meta's LLaMa-3.1 was also judged to be human by test participants 56% of the time, which still beats Turing’s forecast that "an average interrogator ...
Popular AI tools such as GPT-4 generate fluent ... s included as a baseline in the experiment), GPT-3.5, and GPT-4 in a controlled Turing Test. Participants had a five-minute conversation with ...
The Turing test, which was first proposed by Alan Turing ... and then the final two respondents were powered by GPT-3.5 and GPT-4. Each conversation lasted a total of five minutes.
Some results have been hidden because they may be inaccessible to you
Show inaccessible results