The growing use of AI-generated scientific and science-related texts, particularly on social media, is a source of concern: such texts can contain false yet highly persuasive information that users cannot easily detect, and they can influence how people think and make decisions.
Various jurisdictions and platforms are moving towards requiring explicit disclosure of AI-generated or AI-synthesised content in order to protect the public. However, according to a recent study published in JCOM, such labels risk backfiring: they can reduce the credibility of legitimate scientific information while boosting that of misinformation.
The Dangers of AI-Generated Scientific Content
AI-generated content can be deceptive on at least two grounds. First, language models can hallucinate, producing statements that sound plausible but are factually incorrect. Second, users can deliberately prompt AI systems to produce false yet convincing messages. For this reason, several countries have introduced transparency requirements under which online content created or synthesised by AI must be clearly labelled.
In their new study, Teng Lin, a PhD student at the School of Journalism and Communication of the University of Chinese Academy of Social Sciences (UCASS) in Beijing, and Yiqing Zhang, a Master's student at the same school, tested whether these disclosure labels do what they are meant to do: protect the public against misinformation.
Experimental Study
According to Teng, they focused on science-related news posted on social media.
The experiment involved 433 participants recruited online via the Credamo platform between March and May 2024. The authors created four categories of social media posts: accurate information with or without an AI label, and misinformation with or without an AI label. Starting from items published by the Science Rumour Debunking Platform in China, the researchers used GPT-4 to adapt the texts into accurate and misleading Weibo-style posts, which they then vetted themselves. Participants were asked to rate the perceived credibility of each post on a scale from 1 to 5. The researchers also measured participants' negative attitudes toward AI and their level of engagement with the topic.

A Paradoxical Effect
The findings revealed a counter-intuitive pattern. Teng says the most significant result is what he calls a "truth-falsity crossover effect": the same AI label shifts credibility in opposite directions depending on whether a message is true or false, lowering the credibility of true messages and raising the credibility of false ones. He adds that this does not necessarily mean the effect would be identical across all platforms or formats, but in their experiment the pattern was clear.
In other words, AI disclosure does not help people distinguish real from fake information. Instead, it appears to redistribute credibility in a counter-intuitive way.
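To make the crossover concrete, here is a minimal sketch in Python with hypothetical mean credibility ratings for the study's four conditions. The numbers are illustrative only and are not taken from the study; they simply show the shape of the interaction the authors describe, with the same label shifting credibility in opposite directions for true and false posts.

```python
# Illustrative sketch of the 2x2 design (veracity x AI label).
# The mean credibility values below are hypothetical, not the study's data;
# they only show what a "truth-falsity crossover" looks like on a 1-5 scale.

mean_credibility = {
    ("true",  "no label"): 3.8,   # hypothetical mean rating
    ("true",  "AI label"): 3.3,   # label lowers credibility of true posts
    ("false", "no label"): 2.4,
    ("false", "AI label"): 2.9,   # label raises credibility of false posts
}

# Effect of adding the AI label within each veracity level
label_effect_true = mean_credibility[("true", "AI label")] - mean_credibility[("true", "no label")]
label_effect_false = mean_credibility[("false", "AI label")] - mean_credibility[("false", "no label")]

# Opposite signs are the crossover: the same label pushes credibility
# down for accurate posts and up for misinformation.
print(f"Label effect on true posts:  {label_effect_true:+.1f}")
print(f"Label effect on false posts: {label_effect_false:+.1f}")
```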
Teng and Zhang also found that personal attitudes towards AI play a role. Participants with more negative attitudes towards AI penalised accurate information even more heavily when it was labelled as AI-generated. However, the credibility boost observed for misinformation did not disappear among these participants; it was merely attenuated, and only for specific topics, rather than being eliminated across the board.
This suggests that so-called algorithm aversion does not lead to a uniform rejection of AI-generated content, but rather to a more nuanced and asymmetrical response.
The Need for Careful Policy Design
Studies like this one highlight the importance of thoroughly testing regulatory interventions before they are implemented, since well-intentioned transparency initiatives can have unintended effects.
Teng says, “We provide some recommendations in our paper, but they still need to be confirmed before they can be considered valid.” One suggestion is a dual-labelling protocol: rather than simply stating that the material was produced by AI, a label could also include a disclaimer that the information has not been independently verified, or carry a risk warning. In short, it may not be enough to tell audiences that a text was created by AI.
Another suggestion Teng makes is to use a graded or categorical labelling system. Different kinds of scientific information carry different risks: for example, a warning could be stronger for medical or health-related information and milder for information about new technologies. “Accordingly, we would propose different degrees of disclosure, based on the nature and the risk of the content.”
