Stanford and Google DeepMind researchers have created AI that can replicate human personalities with uncanny accuracy after just a two-hour conversation.
By interviewing 1,052 people from diverse backgrounds, they built what they call “simulation agents” – digital copies that were eerily effective at predicting their human counterparts’ beliefs, attitudes, and behaviors.
To create the digital copies, the team uses data from an “AI interviewer” designed to engage participants in natural conversation.
The AI interviewer asks questions and generates personalized follow-up questions – an average of 82 per session – exploring everything from childhood memories to political views.
Across these two-hour sessions, each participant generated a detailed transcript averaging 6,500 words.

For example, when a participant mentions their childhood hometown, the AI might probe deeper, asking about specific memories or experiences. By simulating a natural flow of conversation, the system captures nuanced personal information that standard surveys tend to skim over.
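To make the mechanics concrete, here is a minimal sketch of how such an adaptive interviewer loop might look. The `ask_llm` helper, the prompt wording, and the canned participant are illustrative assumptions, not the study’s actual implementation.

```python
# Minimal sketch of an adaptive interviewer loop. `ask_llm` stands in for
# a real chat-completion API call; here it returns a canned question.

def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM provider.
    return "Can you tell me more about a specific memory from that time?"

def interview(seed_questions, get_answer):
    """Ask each seed question, then one LLM-generated follow-up."""
    transcript = []
    for question in seed_questions:
        answer = get_answer(question)
        transcript.append((question, answer))
        follow_up = ask_llm(
            "You are a friendly interviewer. Given this exchange, ask one "
            f"specific follow-up question.\nQ: {question}\nA: {answer}"
        )
        transcript.append((follow_up, get_answer(follow_up)))
    return transcript

# Example with a canned participant:
print(interview(["Where did you grow up?"], lambda q: "A small coastal town."))
```

In practice the follow-up generator would condition on the full transcript so far, but the question–answer–follow-up rhythm is the core idea.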
Behind the scenes, the study documents what the researchers call “expert reflection” – prompting large language models (LLMs) to analyze each conversation from four distinct professional viewpoints (a minimal sketch follows the list below):
- As a psychologist, it identifies specific personality traits and emotional patterns – for instance, noting how someone values independence based on their descriptions of family relationships.
- Through a behavioral economist’s lens, it extracts insights about financial decision-making and risk tolerance, such as how they approach savings or career choices.
- The political scientist perspective maps ideological leanings and policy preferences across various issues.
- A demographic analysis captures socioeconomic factors and life circumstances.
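As a rough illustration of that multi-perspective pass, the sketch below prompts a model once per viewpoint; the role descriptions and the `ask_llm` parameter are assumptions, not the paper’s exact prompts.

```python
# Sketch of "expert reflection": the same transcript is analyzed once per
# professional viewpoint. Role wording is an assumption for illustration.

EXPERT_ROLES = {
    "psychologist": "Identify personality traits and emotional patterns.",
    "behavioral economist": "Assess financial decision-making and risk tolerance.",
    "political scientist": "Map ideological leanings and policy preferences.",
    "demographer": "Summarize socioeconomic factors and life circumstances.",
}

def reflect(transcript: str, ask_llm) -> dict[str, str]:
    """Return one expert summary of the transcript per viewpoint."""
    return {
        role: ask_llm(f"You are a {role}. {task}\n\nTranscript:\n{transcript}")
        for role, task in EXPERT_ROLES.items()
    }

# Example with a dummy model that returns a fixed summary:
summaries = reflect("Q: ... A: ...", lambda prompt: "(expert summary)")
```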
The researchers concluded that this interview-based approach outperformed comparable methods – such as mining social media data – by a considerable margin.

Testing the digital copies
So how good were the AI copies? The researchers put them through a battery of tests to find out.
First, they used the General Social Survey – a measure of social attitudes that asks questions on everything from political views to religious beliefs. Here, the AI copies matched their human counterparts’ responses 85% of the time.
On the Big Five personality test, which measures traits like openness and conscientiousness through 44 questions, the AI predictions aligned with human responses about 80% of the time. The system was especially good at capturing traits like extraversion and neuroticism.
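To make those headline numbers concrete, raw agreement is just the fraction of items where the agent’s answer matches the human’s; a minimal sketch follows. (The study normalizes this against each participant’s own consistency when re-answering the same questions later, a step omitted here.)

```python
# Raw per-item agreement between human answers and agent predictions.
# The normalization by each person's own test-retest consistency used
# in the study is omitted in this sketch.

def agreement(human: list[str], agent: list[str]) -> float:
    matches = sum(h == a for h, a in zip(human, agent, strict=True))
    return matches / len(human)

print(agreement(["agree", "neutral", "disagree"],
                ["agree", "neutral", "agree"]))  # -> 0.666...
```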
Economic game testing revealed interesting limitations, however. In the “Dictator Game,” where participants decide how to split money with others, the AI struggled to fully predict human generosity.
In the “Trust Game,” which tests willingness to cooperate with others for mutual benefit, the digital copies only matched human choices about two-thirds of the time.
This suggests that while AI can grasp our stated values, it still can’t fully capture the nuances of human social decision-making (yet, of course).
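For decisions on a continuous scale, such as how much money to share, exact matching is too blunt; a natural simplified comparison is the correlation between human and agent allocations. The sketch below uses invented numbers and is an assumed stand-in for the paper’s evaluation, not its actual method.

```python
# Comparing human vs. agent splits in a dictator-game-style task with
# Pearson correlation. All values below are made up for illustration.
from statistics import correlation  # Python 3.10+

human_splits = [5.0, 2.0, 0.0, 3.5, 1.0]  # dollars humans chose to share
agent_splits = [4.0, 3.0, 1.0, 3.0, 0.5]  # dollars their agents predicted

print(round(correlation(human_splits, agent_splits), 2))
```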
Real-world experiments
Not stopping there, the researchers also subjected the copies to five classic social psychology experiments.
In one experiment testing how perceived intent affects blame, both humans and their AI copies showed similar patterns, assigning more blame when harmful actions appeared intentional.
Another experiment examined how fairness influences emotional responses, with AI copies accurately predicting human reactions to fair versus unfair treatment.
The AI replicas successfully reproduced human behavior in four out of five experiments, suggesting they can model not just individual topical responses but broad, complex behavioral patterns.
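One simplified way to frame “reproducing an experiment” is to check whether the simulated participants show a statistically significant effect in the same direction as the human participants. The criterion below (independent t-tests at alpha = 0.05) is an assumption for illustration, not the paper’s exact analysis.

```python
# Does the agent population replicate the direction and significance of
# an experimental effect seen in humans? Criterion is an assumption.
from scipy import stats

def replicates(h_treat, h_ctrl, a_treat, a_ctrl, alpha=0.05):
    """True if humans and agents both show a significant effect
    in the same direction."""
    h = stats.ttest_ind(h_treat, h_ctrl)
    a = stats.ttest_ind(a_treat, a_ctrl)
    same_direction = (h.statistic > 0) == (a.statistic > 0)
    return same_direction and h.pvalue < alpha and a.pvalue < alpha
```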
Simple AI clones: What are the implications?
AI systems that ‘clone’ human views and behaviors are big business, with Meta recently announcing plans to fill Facebook and Instagram with AI profiles that can create content and engage with users.
TikTok has also jumped into the fray with its new “Symphony” suite of AI-powered creative tools, which includes digital avatars that can be used by brands and creators to produce localized content at scale.
With Symphony Digital Avatars, TikTok enables eligible creators to build avatars that represent real people, complete with a range of gestures, expressions, ages, nationalities, and languages.
Stanford and DeepMind’s research suggests such digital replicas will become even more sophisticated – and easier to build and deploy at scale.
“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future,” lead researcher Joon Sung Park, a Stanford PhD student in computer science, told MIT Technology Review.
Park notes that there are upsides to such technology, as building accurate clones could support scientific research.
Instead of running expensive or ethically questionable experiments on real people, researchers could test how populations might respond to certain inputs. For example, it could help predict reactions to public health messages or study how communities adapt to major societal shifts.
Ultimately, though, the same features that make these AI replicas valuable for research also make them powerful tools for deception.
As digital copies become more convincing, distinguishing authentic human interaction from AI is getting tougher, as we’ve seen from an onslaught of deepfakes.
What if such technology were used to clone someone against their will? What are the implications of creating digital copies that are closely modeled on real people?
The Stanford and DeepMind research team acknowledges these risks. Their framework requires explicit consent from participants and allows them to withdraw their data, treating personality replication with the same privacy considerations as sensitive medical information.
That at least offers some theoretical protection against more malicious forms of misuse. But, in any case, we’re pushing deeper into the uncharted territory of human-machine interaction, and the long-term implications remain largely unknown.