When Bots Behave Like Humans: AI Simulation Sheds Light on Social Media Behaviors
In a quiet laboratory at the University of Amsterdam, two researchers constructed something unusual: a social network inhabited entirely by artificial intelligence. Five hundred generative agents, each imbued with a unique, data-driven persona, were set loose in this artificial world. The aim was not entertainment but to observe the raw mechanics of digital society.
Building the AI Agents and the Simulated Social Network
Each agent was meticulously built from real-world demographic data to reflect diverse political affiliations, religious beliefs, ages, incomes, and interests. These were not simple bots: they could read headlines, follow other users, repost content, and compose messages, and their behavior was shaped by an interplay of identity, exposure, and network connections.
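The paper's code is not reproduced here, but the design described above might look something like the minimal sketch below, in which a demographic persona is paired with a small action space. All names (Persona, Agent, step) and the random stand-in for the language model are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
import random

@dataclass
class Persona:
    # Demographic attributes described in the article (field names assumed)
    age: int
    income: str
    religion: str
    party: str
    interests: list[str]

@dataclass
class Post:
    author_id: int
    text: str

class Agent:
    """One generative agent: reads its timeline, then follows,
    reposts, or composes a message (assumed action space)."""

    ACTIONS = ("read", "follow", "repost", "post")

    def __init__(self, agent_id: int, persona: Persona):
        self.id = agent_id
        self.persona = persona
        self.following: set[int] = set()

    def step(self, timeline: list[Post]) -> dict:
        # In the study an LLM conditioned on persona and timeline
        # chooses the action; a random stand-in keeps this runnable.
        action = random.choice(self.ACTIONS)
        target = random.choice(timeline) if timeline else None
        return {"agent": self.id, "action": action, "target": target}
```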
Several AI models were used. The initial agents were powered by GPT-4o-mini, chosen for its balance of reasoning capability and efficiency, and the results were then replicated with other models, including Llama-3.2-8B and DeepSeek-R1, to test robustness. All models produced similar patterns, strengthening the credibility of the study's findings and conclusions.
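Replicating across models is simplest when the agent logic is decoupled from any single provider. A hedged sketch of one common pattern, routing each model through an OpenAI-compatible chat endpoint (the local base URLs and the decide helper are assumptions, not details from the paper):

```python
from openai import OpenAI

# Assumed setup: each model served behind an OpenAI-compatible
# endpoint (e.g., a local inference server for the open models).
BACKENDS = {
    "gpt-4o-mini": OpenAI(),  # hosted API; key from environment
    "llama-3.2-8b": OpenAI(base_url="http://localhost:8000/v1", api_key="none"),
    "deepseek-r1": OpenAI(base_url="http://localhost:8001/v1", api_key="none"),
}

def decide(model: str, persona_prompt: str, timeline_text: str) -> str:
    """Ask the chosen model which action an agent takes next."""
    resp = BACKENDS[model].chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user",
             "content": f"Your timeline:\n{timeline_text}\n"
                        "Reply with one action: follow, repost, or post."},
        ],
    )
    return resp.choices[0].message.content
```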
The platform itself was deliberately minimal: no ads, no flashy notifications, no complex recommendation algorithms. Posts could spread only through simple exposure in timelines or the occasional highlighting of popular items. The researchers wanted to see whether the most infamous problems of social media would still appear without powerful engagement-driven systems.
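Read as pseudocode, that description of the feed, recent posts from followed accounts plus the occasional popular highlight, suggests something like the sketch below. The Post fields and the highlight_p probability are assumptions for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    author_id: int
    text: str
    created_at: int = 0   # simulation tick (assumed)
    reposts: int = 0      # running repost count (assumed)

def build_timeline(following: set[int], all_posts: list[Post],
                   k: int = 10, highlight_p: float = 0.2) -> list[Post]:
    """Minimal feed: the latest posts from followed accounts, with an
    occasional globally popular item mixed in (assumed mechanics)."""
    followed = [p for p in all_posts if p.author_id in following]
    feed = sorted(followed, key=lambda p: p.created_at, reverse=True)[:k]
    # Occasionally surface the single most-reposted post network-wide
    if all_posts and random.random() < highlight_p:
        popular = max(all_posts, key=lambda p: p.reposts)
        if popular not in feed:
            feed.append(popular)
    return feed
```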
Observed Behaviors of the Artificial Digital Inhabitants
Those problems did not take long to emerge. The network split into well-defined echo chambers, with agents almost exclusively following others who shared their political beliefs. A handful of users gained an overwhelming share of attention, and more extreme voices consistently attracted larger audiences. Familiar patterns of online polarization unfolded in miniature.
The researchers introduced six interventions to test whether these dynamics could be changed: chronological feeds, demoting popular voices, boosting opposing viewpoints, prioritizing empathetic posts, hiding follower counts, and removing biographies. Each change had measurable effects, but most involved trade-offs, improving one measure while worsening another.
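Framed as code, most of these interventions reduce to swapping the rule that orders the same underlying feed. A schematic sketch, with names and post attributes invented for illustration:

```python
# Schematic: each intervention swaps the rule that orders (or
# redacts) the same feed. Post attributes (created_at, reposts,
# author_party) and all names are assumptions, not the paper's code.

def chronological(feed, viewer_party=None):
    # Pure recency; removes engagement-based ranking entirely
    return sorted(feed, key=lambda p: p.created_at, reverse=True)

def demote_popular(feed, viewer_party=None):
    # Least-reposted first, flattening dominant voices
    return sorted(feed, key=lambda p: p.reposts)

def boost_opposing(feed, viewer_party):
    # Cross-partisan posts surface first
    return sorted(feed, key=lambda p: p.author_party != viewer_party,
                  reverse=True)

INTERVENTIONS = {
    "chronological": chronological,
    "demote_popular": demote_popular,
    "boost_opposing": boost_opposing,
    # The remaining three (empathy-ranked posts, hidden follower
    # counts, removed bios) change scoring inputs or what agents
    # see rather than the ordering rule itself.
}
```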
A chronological feed reduced inequality by flattening the influence of dominant voices, but it strengthened the link between extremity and influence. Promoting empathetic posts modestly improved cross-partisan ties yet deepened attention inequality. Simply exposing users to opposing views had little measurable effect on their following or reposting behavior within the simulation.
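Those trade-offs are measurable. One plausible pair of diagnostics, not necessarily the metrics the paper uses, is a Gini coefficient over follower counts for attention inequality and a correlation between ideological extremity and audience size:

```python
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 = attention shared equally, 1 = one agent has it all."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n if total else 0.0

def pearson(x: list[float], y: list[float]) -> float:
    """Correlation between extremity scores and follower counts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Toy data: five agents' follower counts and extremity scores in [0, 1]
followers = [3, 5, 8, 120, 400]
extremity = [0.1, 0.2, 0.3, 0.7, 0.9]
print(gini(followers))               # high -> attention is concentrated
print(pearson(extremity, followers)) # positive -> extremes gain reach
```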
The most striking conclusion was that the pathologies of social media can arise without the powerful algorithms so often blamed. In this artificial world, mere feedback between reposts, follower growth, and identity cues was enough to produce echo chambers, elite dominance, and the amplification of extremes. The problems appear to be rooted in the structure of the network itself.
Important Takeaways and Implications For Further Study
The researchers caution that their AI agents and social network simulation are not a perfect mirror of human behavior. The agents are limited by the biases and quirks of the particular language models that power them. Yet the patterns that emerged suggest that solving the problems of social media may require redesigning its foundations rather than adjusting superficial features.
The work is one of the first to use AI to advance social science theory. This approach opens new possibilities for exploring complex societal dynamics in controlled, repeatable environments. Future research could refine agent design, incorporate richer human behavior modeling, and test deeper structural changes to online platforms beyond surface-level interventions.
Further details of the research are discussed in a preprint posted on 5 August 2025 on arXiv. Petter Törnberg, an Assistant Professor of computational social science at the University of Amsterdam, led the project, with Maik Larooij, a research engineer at the same university, as the paper's first author.
FURTHER READING AND REFERENCE
- Larooij, M. and Törnberg, P. 2025. “Can We Fix Social Media? Testing Prosocial Interventions Using Generative Social Simulation.” arXiv. Available online