WHO Flags Generative AI as a Public Mental Health Concern — and the Governance Gap Is Growing

A workshop convened by the World Health Organization and TU Delft in January 2026 produced a set of recommendations that should concern anyone building or deploying AI tools in healthcare: generative AI use should be formally recognised as a public mental health concern, with coordinated responses across government, health systems, and industry.

The statement is notable not because it is surprising, but because it marks the point at which the world’s leading public health authority moved from cautious observation to explicit policy positioning on the mental health implications of generative AI. And the scope is broad: not just tools designed for mental health, but any generative AI system that people interact with in moments of emotional vulnerability.

The Problem Is Not Hypothetical

The core concern is straightforward. Generative AI tools — chatbots, conversational agents, AI companions — are increasingly being used for emotional support, particularly by young people. These tools were not designed for that purpose, have not been clinically tested for it, and operate without the safeguards that would be expected of any regulated mental health intervention.

The gap between how these tools are being used and how they were intended to be used is widening rapidly. The pace of consumer AI adoption has dramatically outstripped investment in understanding its psychological impact. People are forming habits of emotional dependence on systems that have no clinical accountability, no crisis referral protocols, and no mechanism for monitoring long-term outcomes.
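To make one of those missing safeguards concrete, the sketch below shows roughly what a crisis referral step in a conversational pipeline could look like. Everything in it (the phrase list, the referral text, the function names) is an illustrative assumption rather than a clinical standard; real deployments would need clinically validated risk detection and locale-appropriate crisis resources.

```python
# Illustrative sketch only: a minimal crisis-referral gate for a chat pipeline.
# The phrase list, referral message, and function names are hypothetical;
# production systems would need clinically validated risk detection and
# locale-specific crisis resources.

CRISIS_PHRASES = [
    "want to die",
    "kill myself",
    "end my life",
    "hurt myself",
]

REFERRAL_MESSAGE = (
    "It sounds like you may be in crisis. Please consider contacting "
    "a local emergency number or a crisis line in your country."
)

def screen_for_crisis(user_message: str) -> bool:
    """Naive keyword screen; a real system would use a validated classifier."""
    text = user_message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def respond(user_message: str, generate_reply) -> str:
    """Route flagged messages to a referral instead of a generated reply."""
    if screen_for_crisis(user_message):
        # Refer out rather than improvise an emotional-support answer.
        return REFERRAL_MESSAGE
    return generate_reply(user_message)
```

Even a guard this crude changes the accountability picture: the decision to refer becomes explicit, loggable, and auditable, which is precisely what most consumer tools currently lack.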

For healthcare organisations, the implications are practical rather than theoretical. Patients are arriving at clinical encounters having already engaged with AI tools for emotional support — sometimes in ways that have reinforced unhealthy patterns or delayed appropriate care. Clinicians need frameworks for understanding and addressing this reality, and health systems need policies for how AI-assisted mental health support fits into their care models.

Three Recommendations Worth Tracking

The WHO workshop produced three principal recommendations that are likely to shape regulatory and policy conversations over the next several years.

First, generative AI use should be treated as a public mental health issue. This is the broadest and most consequential recommendation. It means that mental health impact should be considered not just for AI tools explicitly marketed as mental health interventions, but for any generative AI system that interacts with users in emotional or vulnerable contexts. The scope of what falls under health governance scrutiny expands considerably under this framing.

Second, mental health should be integrated into impact assessments for AI systems. Workshop participants called for independent investment in testing the effects of AI tools on mental health determinants, short-term clinical measures, and long-term outcomes such as emotional dependence. The emphasis on independent evaluation is significant — it signals a recognition that self-reported assessments from AI developers are insufficient for understanding clinical impact.
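What such an assessment would actually measure remains open, but the workshop’s framing points to the raw material: usage intensity, safety events, and referral outcomes. The sketch below is a hypothetical example of the kind of session-level record an independent evaluator might ask a deployed tool to export; the field names and the dependence proxy are assumptions for illustration, not a WHO-specified schema.

```python
# Hypothetical example of a session-level record an independent evaluator
# might require a deployed tool to log. Field names and the dependence
# proxy are illustrative assumptions, not a WHO-specified schema.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SessionRecord:
    session_id: str
    started_at: str             # ISO 8601 timestamp, UTC
    duration_minutes: float
    messages_exchanged: int
    crisis_flag_raised: bool    # did any safety screen trigger?
    referral_offered: bool      # was the user pointed to human help?
    sessions_past_7_days: int   # crude proxy for emerging dependence

def export_record(record: SessionRecord) -> dict:
    """Serialise a record for hand-off to an external evaluator."""
    payload = asdict(record)
    payload["exported_at"] = datetime.now(timezone.utc).isoformat()
    return payload
```

The point is not this particular schema but the principle behind it: long-term outcomes such as emotional dependence only become measurable if deployed tools log comparable signals from the start.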

Third, AI tools used for mental health support should be co-designed with mental health professionals and people with lived experience, including young people. This recommendation pushes back against the technology-first approach that has dominated the market, where tools are built by engineers and marketed to consumers without meaningful clinical input during the design phase.

What This Means for Healthcare Technology

For medtech companies and health systems, the WHO positioning creates a trajectory that will eventually intersect with regulatory requirements. The pattern is familiar from other areas of health technology — advisory guidance precedes formal standards, which precede compliance mandates.

Organisations developing AI tools that interact with patients or consumers in any emotionally sensitive context should be preparing now for a regulatory environment in which mental health impact assessment is expected, not optional. This includes conversational AI for patient engagement, AI-powered triage systems, virtual health assistants, and any tool that provides information or support in contexts where users may be emotionally vulnerable.

The governance infrastructure is already being built. WHO is establishing a Consortium of Collaborating Centres on AI for Health, bringing together leading academic institutions across all six WHO regions to support evidence-based AI governance. TU Delft, which hosted the workshop as the first WHO Collaborating Centre on AI for health governance, is coordinating initial collaboration mechanisms with candidate institutions.

The Larger Question

The deeper issue that the WHO workshop highlights is one of accountability. When a patient uses a clinically validated mental health intervention and experiences an adverse outcome, there are clear pathways for reporting, investigation, and accountability. When a person uses a generative AI chatbot for emotional support and develops patterns of dependency or receives harmful guidance during a crisis, those pathways do not exist.

Building them will require collaboration between AI developers, mental health professionals, regulators, and the people most affected by these tools. The WHO recommendations provide a framework for that collaboration. Whether the industry moves fast enough to implement it before the harm becomes systemic is the question that matters most.
