How A Fake News Study Tested Ethical Research Boundaries – Analysis
A controversial fake news study, carried out by Swiss-based researchers on the social media platform Reddit, has highlighted the ethical responsibilities and challenges of conducting research on human behaviour.
By Matthew Allen
The research team, which has been linked to the University of Zurich, covertly tested the ability of artificial intelligence (AI) to manipulate public opinion with misinformation on a subreddit group.
For several months, the researchers stretched the ethical boundaries of studying social behaviour beyond breaking point. They used large language models (LLMs) to generate opinions on a variety of subjects – from owning dangerous dogs to rising housing costs, the Middle East and diversity initiatives.
The AI bots posted under fictitious usernames as they churned out debating points in the subreddit r/changemyview. Members of the group then argued for or against the AI-composed opinions, unaware that they were part of a research project until the researchers came clean once it was completed.
The revelation provoked a storm of criticism from Reddit users, the research community and the international media.
At first, the researchers, who will not reveal their identities for fear of reprisals, defended their actions, because the “high societal importance of this topic” made it “crucial to conduct a study of this kind, even if it meant disobeying the rules” of the subreddit, which included a ban on AI bots.
They later issued a “full and deeply felt apology”, saying that “the community’s reactions of disappointment and frustration have made us regret the discomfort that the study may have caused.”
‘Bad science is bad ethics’
“The issue here is not only about conducting research involving deception,” said Professor Dominique Sprumont, president of the Vaud Cantonal Research Ethics Commission in Switzerland. “It is about willfully breaking the rules of a community of human beings who build trust in their network based on those rules.”
“Furthermore, the scientific quality of the project is more than dubious. Bad science is bad ethics.”
The Swiss team ran into a problem familiar to many researchers: how much information to withhold from participants to make the study realistic.
Previous fake news research has faced the same conundrum, according to Gillian Murphy and Ciara M. Greene of University College Cork and University College Dublin in Ireland, who have conducted their own misinformation research and examined the results of other studies.
Researchers sometimes disguise the exact purpose of a study at the outset and only inform participants once it has been completed, they wrote in a 2023 article published on ScienceDirect. For example, they may state at the outset that the research is about news consumption in general rather than specifically about fake news.
Limits to deception
“For some misinformation research, it would be impossible to study how participants naturally respond to misinformation without employing this kind of deception, as participants’ suspicions, motivations and behaviours may change when they know the information they will be shown might be misleading,” wrote the authors.
But the authors also note that there are limits to deception. Researchers have a moral duty to respect the human rights and privacy of participants, inform them at the outset that they are taking part in research, gain explicit consent to use their data and take steps to avoid harming people.
In 2014, Facebook was criticised by academics, lawyers and politicians for covertly manipulating hundreds of thousands of users’ newsfeeds to test how their moods were affected by negative or positive posts from their friends.
The social media platform said the experiment was important to test the emotional impact of its newsfeed service on users but later admitted it had gone about the study in the wrong way.
Responsibility unclear
The Swiss fake news study on Reddit has likewise been condemned for failing to inform people in advance that they were participating in a research project.
It has also stirred up confusion as to who is responsible. The study was conceived by a researcher employed at the University of Zurich and presented to the Ethics Committee of the university’s Faculty of Arts and Social Sciences in April of last year as one of four tests – and the only one of the four that involved AI bots.
At the time, the ethics body red-flagged the Reddit study as “exceptionally challenging”, according to the university. It recommended that researchers should inform participants “as much as possible” and fully comply with the rules of Reddit.
But the lead researcher left the university in September and only started the study after leaving, the university says, adding that responsibility for the project and publication therefore lies with the researchers and not the university.
“There were no [Zurich university] researchers or students engaged in the Reddit project at the time it was carried out.”
The research team’s preliminary findings were initially published online but were later taken down.
Stricter review process
Zurich Data Protection Commissioner Dominika Blonski has not yet opened a formal probe into the matter, but her office is aware of the controversy. “We do not know whether the research was conducted by the University of Zurich or its faculty, or by individual researchers on their own initiative,” she told SWI swissinfo.ch.
Blonski must first determine whether to investigate the university or individual researchers. But she is concerned by evidence in media reports pointing to potential violations of data protection laws, particularly relating to the apparent profiling of some Reddit users.
The university must also contend with unspecified “formal legal demands” from Reddit and is investigating the incident internally.
“In light of these events, the Ethics Committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies,” said a spokeswoman.