TikTok’s mental health content a ‘minefield’ of unreliable and misleading advice: Study

Social media platforms have become a go-to source for mental health information, especially among young users. But a new study suggests that much of what people encounter online, particularly on TikTok, may be unreliable or even misleading.

Researchers from the University of East Anglia (UEA) found that a significant share of posts related to conditions such as ADHD and autism contain inaccuracies or lack proper evidence. Their analysis, which covered multiple platforms including YouTube, Facebook, Instagram and X, points to a broader issue: misinformation around mental health is widespread, and in some areas alarmingly prevalent.

The study reviewed more than 5,000 posts across a range of mental health topics, from anxiety and depression to schizophrenia and eating disorders. It found that misleading content could make up as much as 56 per cent of posts in certain areas, highlighting how easily unverified claims can spread in highly engaging formats like short videos.

Among all platforms, TikTok stood out for having the highest levels of questionable content. According to the researchers, over half of the ADHD-related videos analysed, around 52 per cent, were found to be inaccurate. For autism-related content, the figure stood at 41 per cent. By comparison, misinformation rates were lower on YouTube, averaging about 22 per cent, and even lower on Facebook at under 15 per cent.

Experts say this matters because social media is increasingly shaping how young people understand their mental health. Many turn to these platforms to interpret symptoms or self-diagnose conditions. While this can sometimes prompt useful self-reflection, it also carries risks when the information is incomplete or incorrect.

Misleading content can blur the line between normal behaviour and clinical conditions, potentially leading people to wrongly believe they have a disorder, or, conversely, delay seeking help when they actually need it. It may also reinforce stigma, create unnecessary fear, or promote treatments that lack scientific backing.

The study also highlights a stark divide between who is creating content and how reliable it is. Posts made by healthcare professionals were consistently more accurate, but they represent only a small fraction of what users see. For instance, just 3 per cent of ADHD-related videos by professionals contained misinformation, compared to 55 per cent among non-professional creators.


At the same time, the researchers acknowledge that personal stories and lived experiences shared by individuals can play a valuable role in raising awareness and helping others feel understood. The challenge lies in ensuring that such content is complemented by clear, evidence-based guidance from qualified experts.

Another key factor driving misinformation is the way platforms like TikTok operate. Their algorithms tend to prioritise content that is engaging and widely shared, regardless of its accuracy. Once users show interest in a topic, they are often fed a steady stream of similar videos, creating echo chambers where misleading ideas can quickly gain traction.

There are, however, some exceptions. The study found that YouTube Kids performed notably better, with no misinformation detected in content related to anxiety and depression, and relatively low levels for ADHD, at around 8.9 per cent. Researchers attribute this to stricter moderation and content controls.

Overall, the findings point to a growing need for stronger safeguards. The authors call for better moderation systems, clearer standards for identifying misinformation, and more active participation from clinicians and health organisations in creating accessible, trustworthy content.

As social media continues to shape public understanding of mental health, the study makes one thing clear: while these platforms can be powerful tools for awareness, without reliable information, they can just as easily become a source of confusion.

Ministry of I&B blocks 45 YouTube videos from 10 YouTube channels

Under the IT Rules, 2021, 45 YouTube videos from 10 YouTube channels were blocked by the Ministry of Information & Broadcasting.

  • Videos containing hate speech against religious communities and spreading communal disharmony were blocked.
  • Morphed images and videos were being used to harm India’s national security, foreign relations and public order.

Based on inputs from intelligence agencies, the Ministry of Information & Broadcasting has directed YouTube to block 45 YouTube videos from 10 YouTube channels. Orders to block the concerned videos were issued on 23.09.2022 under the provisions of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The blocked videos had a cumulative viewership of over 1 crore 30 lakh (13 million) views.

The content included fake news and morphed videos spread with the intent to incite hatred among religious communities. Examples include false claims that the Government had taken away the religious rights of certain communities, violent threats against religious communities, and declarations of civil war in India. Such videos were found to have the potential to cause communal disharmony and disrupt public order in the country.

Some of the videos blocked by the Ministry were being used to spread disinformation on issues related to Agnipath scheme, Indian Armed Forces, India’s national security apparatus, Kashmir, etc. The content was observed to be false and sensitive from the perspective of national security and India’s friendly relations with foreign States.

Certain videos depicted erroneous external boundary of India with parts of J&K and Ladakh outside the Indian territory. Such cartographic misrepresentation was found to be detrimental to the sovereignty and territorial integrity of India.

The content blocked by the Ministry was found to be detrimental to sovereignty and integrity of India, security of the State, India’s friendly relations with foreign States, and public order in the country. Accordingly, the content was covered within the ambit of section 69A of the Information Technology Act, 2000.

The Government of India remains committed to thwart any attempts at undermining India’s sovereignty and integrity, national security, foreign relations, and public order.

Social media culture can encourage risky and inappropriate posting behavior

The use of social media is pervasive among young adults, but not all posted content is appropriate.

Now a new study by the University of Plymouth investigates why young adults might post content on social media that contains sexual or offensive material.

Led by Dr Claire White from the University’s School of Psychology, the study suggests that such risky social media posts are not just due to impulsivity, but might be a deliberate strategy to fit in with the wider social media culture that makes people believe ‘it’s the right thing to do’.

Existing studies show that impulsiveness is predictive of online risk-taking behaviours, but this additional research with British and Italian young adults highlighted that high self-monitoring, or adapting behaviour in line with perceived social norms, was equally predictive of posting risky content. Dr White says this could mean young people think it is the best way to behave.

To measure risky online self-presentation the research team, which also included PhD student Clara Cutello, Dr Michaela Gummerum and Professor Yaniv Hanoch from the School of Psychology, designed a risk exposure scale relating to potentially inappropriate images or texts, such as drug and alcohol use, sexual content, personal information, and offensive material. They also evaluated people’s level of self-monitoring and impulsivity.

Dr White said: “It’s counterintuitive really because it would be easy to assume that a high self-monitor would question their actions and adapt accordingly.

“But the results show that high self-monitors are just as likely to post risky content as those in the study who are more impulsive, which suggests they think it’s not only OK to be risky – and potentially offensive – but that it’s actually the right thing to do.

“The only notable difference between the nationalities was that British students were more likely to post comments and images related to their alcohol and drug use on social media, whereas their Italian counterparts were more likely to post offensive content and personal information.

“This difference shows that culture as a whole seems to play a part in what type of content is shared.

“But the fact that the behaviours predicting risky online choices are the same for both nationalities suggests there’s a wider social media culture that encourages this type of risk-taking behaviour.”