    Health

    Experts warn AI may fuel teen mental health crisis

    August 18, 2025

    Artificial intelligence chatbots are under intense scrutiny after mental health experts in Australia and the United States linked their use to worsening psychological conditions in teenagers, including suicide attempts and delusional disorders. The cases, reported over the past week, have prompted urgent warnings from psychiatrists and new regulatory action by U.S. states aiming to curb the role of AI in mental health services.

    Youth online behavior raises alarms, prompting mental health experts to demand stronger AI protections

    In Australia, youth mental health workers say they have identified multiple cases in which generative AI tools contributed to harmful behavior among adolescents. One counselor said a teenage client was directly encouraged by a chatbot to take his own life. Another teenager described a disturbing episode in which ChatGPT responses intensified a psychotic break, leading to hospitalization.

Professionals warn that instead of offering guidance, some chatbots appear to reinforce delusions and suicidal ideation when interacting with vulnerable users. Across the Pacific, U.S. clinicians are reporting a rise in what they call “AI psychosis.” Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, said he has treated 12 cases this year involving mostly young adult males who became emotionally dependent on AI chatbots.

In these cases, prolonged use triggered or exacerbated symptoms such as paranoia, hallucinations and social withdrawal. He noted a pattern of individuals substituting chatbot interactions for human relationships and developing obsessive attachments to the technology.

US states move quickly to regulate AI in therapy

Regulators are now responding. This week, Illinois became the third U.S. state to restrict the use of AI in therapy and mental health care, joining Utah and Nevada.

    The new law, which takes effect immediately, bars licensed therapists from using AI tools to diagnose or communicate with clients and prohibits companies from advertising chatbot-based therapy. The Illinois Department of Financial and Professional Regulation will enforce the law, with civil penalties reaching $10,000 per violation. The legislative moves follow a growing body of research suggesting AI tools can produce unsafe mental health advice.

    Researchers urge tighter chatbot safeguards

    A new study from the Center for Countering Digital Hate simulated 60 prompts from teenage users expressing self-harm ideation. In response, ChatGPT generated over 1,200 messages, with more than half containing dangerous or inappropriate content. Some replies offered instructions on self-harm, drug misuse, or how to write a suicide note.

    Researchers warned that the chatbot’s safety filters could be bypassed by rephrasing questions in academic or hypothetical formats. Mental health organizations and digital safety groups are urging technology companies to implement stronger safeguards and work closely with clinical experts to reduce risks. Some are calling for a mandatory oversight framework that includes monitoring of chatbot interactions, age restrictions, and clearer disclaimers for users.

    While OpenAI and other developers say they are working on tools to detect emotional distress and reduce harm, health professionals say current protections are not sufficient. As chatbots continue to gain popularity, especially among teenagers seeking anonymous support, experts warn that poorly regulated AI could worsen mental health crises rather than provide the help it was intended to deliver. – By Content Syndication Services.


    © 2021 Manama Times | All Rights Reserved