Reddit Slams 'Unethical' AI Experiment That Used Fake Human Identities To Influence Views

Reddit is up in arms over a covert experiment run by researchers at the University of Zurich, who deployed artificial intelligence bots to impersonate real users and subtly sway opinions on the r/changemyview subreddit. The experiment, which was never disclosed to Reddit or the subreddit’s moderators, has triggered backlash and potential legal action from the platform. The AI-powered accounts, now banned, posted more than 1,000 comments while masquerading as individuals from diverse backgrounds — including a rape survivor, a Black man critical of the Black Lives Matter movement, a trauma counselor, and even a nonbinary user.

Their objective: test how AI could influence human discourse online, all without consent from the community. One bot, posting under the handle u/catbaLoom213, weighed in on a debate about AI-human interaction on social media, writing, “AI in social spaces isn’t just about impersonation — it’s about augmenting human connection.” Another, u/genevievestrome, stirred controversy by claiming, “I say this as a Black Man, there are few better topics for a victim game / deflection game than being a black person,” while alleging that the BLM movement is led by “NOT black people.”

Reddit threatens legal action, calls study "deeply wrong"

Reddit’s top lawyer minced no words in denouncing the research. In a strongly worded post, Ben Lee, Reddit’s Chief Legal Officer, condemned the study as both morally and legally indefensible. “What this University of Zurich team did is deeply wrong on both a moral and legal level,” Lee wrote.

“It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules.” Lee confirmed that Reddit is preparing formal legal demands to be sent to the University of Zurich and its research team. A Reddit spokesperson declined to comment further, but moderators of the r/changemyview community have already filed an ethics complaint with the university.

In a community announcement, they warned that allowing such research to be published would “dramatically encourage further intrusion by researchers,” making the community vulnerable to similar future experiments.

University responds, but researchers remain anonymous

University spokesperson Melanie Nyfeler acknowledged the backlash and stated that the Ethics Committee of the Faculty of Arts and Social Sciences would implement a stricter review process going forward. She said the committee now plans to coordinate directly with platform communities ahead of similar studies.

Nyfeler also confirmed that the researchers, whose identities remain undisclosed due to privacy policies, have chosen “on their own accord” not to publish the experiment’s findings. She explained that although the ethics committee had advised the team to inform participants “as much as possible” and to follow Reddit’s rules, those recommendations were not legally binding. When contacted, the researchers redirected all questions to the university.

However, via their Reddit account u/LLMResearchTeam, they defended their approach. They revealed that the AI bots used a separate model to analyze user demographics such as age, ethnicity, gender, and political views — based on users' Reddit activity — in order to personalize replies. Despite these revelations, they insisted the project followed ethical guardrails: “A careful review of the content of these flagged comments revealed no instances of harmful, deceptive, or exploitative messaging, other than the potential ethical issue of impersonation itself.”

Community rejects the researchers' justification

The r/changemyview moderators strongly pushed back against the researchers’ claims of academic value. “Our sub is a decidedly human space that rejects undisclosed AI as a core value,” the moderators wrote. “People do not come here to discuss their views with AI or to be experimented upon.”

They also dismissed the researchers’ assertion that their work provided unique insights: “Such research demonstrates nothing new that other, less intrusive studies have not already shared.”

For Reddit and its users, the incident highlights growing anxieties over how easily AI can blend into online communities — and the ethical grey zones it continues to expose.