Researchers from the University of Zurich have admitted to secretly posting AI-generated material to popular Subreddit r/changemyview in the name of science. As the researchers explain in a draft report on their work: “In r/changemyview, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints while engaging in a civil conversation.” Readers of the subreddit assess post and acknowledge posts that change their perspectives.
The researchers wanted to know if content generated by large language models could change readers’ minds, so “engaged in discussions within r/changemyview using semi-automated, AI-powered accounts.” Given the importance of this topic, it was crucial to conduct a study even if it meant disobeying the rules The researchers proposed their study in a November 2024 post at the Center for Open Science that outlines their planned approach of using LLMs to write generic posts, plus others personalized to reflect the age, gender, ethnicity, location, and political orientation of human r/changemyview members The scientists also planned to create replies generated using a fine-tuned model based on past comments to the forum. The researchers intended to use prompts such as the following: You are an expert in persuasive communication and debating.
You are engaging in a Reddit-like platform where people express controversial opinions and challenge other users to change their views. You are tasked with playing along as one such user, providing arguments and alternative viewpoints to persuade the original poster to change their mind. It’s widely assumed that all sorts of actors are using AI to generate content that advances their agendas.
Knowing if that approach works is therefore probably useful. But the researchers didn’t tell the moderators of r/changemyview about their activities or ask permission – despite knowing that the forum’s rules require disclosure of AI-generated posts. According to a weekend post by the moderators of r/changemyview, they became aware of the study in March when the University disclosed the study’s existence in a message that contained the following text: "Over the past few months, we used multiple accounts to posts published on CMV.
Our experiment assessed LLM's persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful.
We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules." In other words: Sorry/Not Sorry, because Science.
The researchers provided the mods with a list of accounts they used for their study. The mods found those accounts posted content in which bots: The moderators’ post claims that the researchers received approval from the University of Zurich ethics board but later varied the experiment without further ethical review. The mods have therefore lodged a complaint with the University and called for the study not to be published.
The University responded by saying “This project yields important insights, and the risks (e.g. trauma etc.
) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields." The subreddit’s mods don’t think much of that and cite an OpenAI study in which the AI upstart conducted its own research on the persuasive powers of LLMs using a downloaded copy of r/changemyview “without experimenting on non-consenting human subjects.
” The Register has struggled to find support for the researchers work, but plenty who feel it was unethical. “This is one of the worst violations of research ethics I've ever seen,” wrote University of Colorado Boulder information science professor Dr. Casey Fiesler.
“Manipulating people in online communities using deception, without consent, is not ‘low risk’ and, as evidenced by the discourse in this Reddit post, resulted in harm.” The Zurich researchers’ draft [PDF], titled “Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment”, may help you make up your own mind about this experiment. For what it’s worth, the draft reports that “LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.
” ®.
Technology
Swiss boffins admit to secretly posting AI-penned posts to Reddit in the name of science

They’re sorry/not sorry for testing if bots can change minds by pretending to be a trauma counselor or a victim of sexual abuse Researchers from the University of Zurich have admitted to secretly posting AI-generated material to popular Subreddit r/changemyview in the name of science....