Release time: 2026-03-01 14:50
Our call for clearer AI regulation in mental health
News from BACP
We highlighted the need for transparency, accountability, and public protection in our response to the MHRA consultation.
We’ve called for significant reform of the UK’s regulatory framework for artificial intelligence (AI) in healthcare, warning that current arrangements do not provide sufficient clarity or safeguards, particularly in mental health.
The call comes in our response to the Medicines and Healthcare products Regulatory Agency (MHRA) Consultation – Regulation of AI in Healthcare, in which we emphasised that while AI has the potential to support innovation, trust, safety, and professional accountability must come first.
We said:
“AI is being introduced into healthcare at pace, but regulation has not yet caught up with the complexity of how these tools are being used. Clear and credible regulation is essential to protect both service users and professionals.”
Defining AI is a top priority
We also highlighted that the consultation did not provide a clear definition of AI, making it difficult to determine which technologies fall under regulation.
We believe it’s vital to be clear about what constitutes AI in healthcare, differentiating between tools used in clinical care, administrative tasks, and those used by the public outside formal healthcare.
Without this clarity, commissioners and professionals cannot be confident about which tools are safe or regulated.
Rising concerns among therapists
Our Mindometer survey’s recent findings illustrate why reform and clarity are urgently needed:
64% of therapists reported a decline in public mental health over the past year, with 43% citing AI technologies as a contributing factor.
28% noticed clients receiving unhelpful advice from AI tools such as ChatGPT.
Among therapists working with children and young people, 38% observed a rise in children seeking mental health guidance from AI chatbots, while 19% reported cases where children received harmful advice.
These findings support our call for clear, accessible information for both professionals and the public about which AI mental health tools are safe, evidence-based, and regulated.
Martin Bell, our Head of Policy and Public Affairs said: "It’s understandable that people are increasingly turning to AI for therapy because it’s available 24/7, feels non-judgemental, and offers a sense of privacy. However, AI isn’t bound by ethical or confidentiality standards and, at the moment, lacks meaningful regulation and accountability.
"While such tools can offer support, they can never replicate or replace the human touch of therapy. Human connection, authenticity, empathy, and compassion are at the heart of successful therapy — qualities only a trained therapist can provide.”
Our key recommendations
Transparency over which AI tools are regulated and how they make decisions.
Shared accountability and liability, clearly defining responsibilities across developers, healthcare organisations, and clinicians.
Ongoing training and capability building for healthcare professionals to understand AI’s strengths, limitations, and risks.
Robust post-market surveillance, including monitoring adverse incidents and adapting regulation as AI technologies evolve.
