TikTok’s mental health content a ‘minefield’ of unreliable and misleading advice: Study

Social media platforms have become a go-to source for mental health information, especially among young users. But a new study suggests that much of what people encounter online, particularly on TikTok, may be unreliable or even misleading.

Researchers from the University of East Anglia (UEA) found that a significant share of posts related to conditions such as ADHD and autism contain inaccuracies or lack proper evidence. Their analysis, which covered multiple platforms including YouTube, Facebook, Instagram and X, points to a broader issue: misinformation around mental health is widespread and, in some cases, alarmingly prevalent.

The study reviewed more than 5,000 posts across a range of mental health topics, from anxiety and depression to schizophrenia and eating disorders. It found that misleading content could make up as much as 56 per cent of posts in certain areas, highlighting how easily unverified claims can spread in highly engaging formats like short videos.

Among all platforms, TikTok stood out for having the highest levels of questionable content. According to the researchers, around 52 per cent of the ADHD-related videos analysed were found to be inaccurate. For autism-related content, the figure stood at 41 per cent. By comparison, misinformation rates were lower on YouTube, averaging about 22 per cent, and even lower on Facebook at under 15 per cent.

Experts say this matters because social media is increasingly shaping how young people understand their mental health. Many turn to these platforms to interpret symptoms or self-diagnose conditions. While this can sometimes prompt useful self-reflection, it also carries risks when the information is incomplete or incorrect.

Misleading content can blur the line between normal behaviour and clinical conditions, potentially leading people to wrongly believe they have a disorder, or, conversely, delay seeking help when they actually need it. It may also reinforce stigma, create unnecessary fear, or promote treatments that lack scientific backing.

The study also highlights a stark divide between who is creating content and how reliable it is. Posts made by healthcare professionals were consistently more accurate, but they represent only a small fraction of what users see. For instance, just 3 per cent of ADHD-related videos by professionals contained misinformation, compared to 55 per cent among non-professional creators.


At the same time, the researchers acknowledge that personal stories and lived experiences shared by individuals can play a valuable role in raising awareness and helping others feel understood. The challenge lies in ensuring that such content is complemented by clear, evidence-based guidance from qualified experts.

Another key factor driving misinformation is the way platforms like TikTok operate. Their algorithms tend to prioritise content that is engaging and widely shared, regardless of its accuracy. Once users show interest in a topic, they are often fed a steady stream of similar videos, creating echo chambers where misleading ideas can quickly gain traction.

There are, however, some exceptions. The study found that YouTube Kids performed notably better, with no misinformation detected in content related to anxiety and depression, and relatively low levels, around 8.9 per cent, for ADHD. Researchers attribute this to stricter moderation and content controls.

Overall, the findings point to a growing need for stronger safeguards. The authors call for better moderation systems, clearer standards for identifying misinformation, and more active participation from clinicians and health organisations in creating accessible, trustworthy content.

As social media continues to shape public understanding of mental health, the study makes one thing clear: while these platforms can be powerful tools for awareness, without reliable information, they can just as easily become a source of confusion.


Meta’s Answer to AI Media Startups: ‘Movie Gen’ Ready to Disrupt Filmmaking Now

Meta has taken a major leap in artificial intelligence by announcing its latest AI model, Movie Gen. This cutting-edge tool is designed to generate realistic video and audio clips based on user prompts, putting it in direct competition with leading media generation platforms like OpenAI and ElevenLabs.

Movie Gen’s features go beyond just video creation. The AI can also produce background music and sound effects that synchronize with the video’s content. For example, in a demo, it added pom-poms to a man running solo in a desert scene. In another clip, it transformed a dry parking lot into a splashing puddle, enhancing footage of a skateboarder.

Meta’s new tool allows videos to run up to 16 seconds long, with audio extending to 45 seconds. The company claims Movie Gen holds its own against rivals like OpenAI, ElevenLabs, Runway, and Kling, all of which are pushing the boundaries of AI-generated media.

Meta Eyes Hollywood with Movie Gen

The introduction of Movie Gen comes as Hollywood explores the potential of generative AI in video production. Earlier this year, OpenAI, backed by Microsoft, introduced Sora, an AI that can generate movie-like clips from text descriptions, sparking excitement in the entertainment sector. However, concerns about AI systems trained on copyrighted material without permission have also been raised.

There’s growing anxiety about the misuse of AI-generated videos, especially deepfakes, in political campaigns. Incidents of such misuse have been reported in the U.S., India, Pakistan, and Indonesia, drawing attention from lawmakers worldwide.

Despite its powerful capabilities, Meta is unlikely to release Movie Gen widely for developer use, as it did with its Llama language models. Instead, Meta plans to collaborate closely with the entertainment industry and other creators, integrating the tool into its own suite of products next year.

The Road Ahead for AI in Media

Movie Gen was developed using a combination of licensed and publicly available datasets, marking a different approach from OpenAI, which has been in talks with Hollywood about partnerships for its Sora tool. So far, no formal agreements have been reported.

The unveiling of Movie Gen underscores the rapid advancements in AI technology, paralleling the release of OpenAI’s Sora earlier this year. Both innovations signal a transformative shift for industries ranging from filmmaking to politics, pushing the boundaries of what’s possible in media creation.

Meta Takes Down 8,000 Scam Ads to Stem “Celeb Bait” Scams with Australian Banks

Meta, the parent company of Facebook and Instagram, has removed around 8,000 “celeb bait” scam ads as part of a new collaboration with Australian banks. These scams often use images of famous personalities, many of which are created by artificial intelligence, to deceive people into investing in fake schemes.

Meta acted after receiving 102 reports since April from the Australian Financial Crimes Exchange, an intelligence-sharing platform led by major banks. These scams are a global issue, but Australia is putting additional pressure on Meta to address the problem, as Prime Minister Anthony Albanese’s government plans to introduce a new anti-scam law by the end of this year.

The proposed law could impose fines of up to A$50 million (around ₹280 crore) on social media, financial, and telecom companies that fail to control these scams. Public consultation on the law ends on October 4.

Scam reports in Australia have surged by nearly 20% in 2023, with total losses reaching A$2.7 billion (₹15,000 crore), according to the Australian Competition and Consumer Commission (ACCC). The ACCC previously sued Meta in 2022, accusing the company of not stopping fake cryptocurrency ads featuring celebrities like Mel Gibson, Russell Crowe, and Nicole Kidman. It estimated that 58% of cryptocurrency ads on Facebook could be scams. Meta is currently contesting the lawsuit, which has yet to go to trial.

In addition, Meta is facing another lawsuit from Australian billionaire Andrew Forrest. Forrest alleges that Meta allowed the spread of thousands of fake cryptocurrency ads on Facebook using his image. He claims Australians have continued to lose money to these scams since he first warned Meta in 2019.

David Agranovich, Meta’s Director of Threat Disruption, said that the initiative with Australian banks is still in its early stages but is showing promise. “A small amount of high-value information is helping us identify larger scam activities,” he said during a media briefing.

When asked about Australia’s proposed anti-scam law, Agranovich said Meta is still reviewing the draft and will share more details later. Rhonda Luo, the Head of Strategy at the Australian Financial Crimes Exchange, emphasized the importance of industry initiatives, saying, “It’s better to act early on scams rather than wait for regulations to take effect.”