MHRA Opens Consultation on Future Regulation of AI in Healthcare
The call for evidence runs from 18 December 2025 to 2 February 2026 and is open to submissions from the public, patients, healthcare professionals, technology companies and healthcare providers.
The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has launched a public call for evidence on how artificial intelligence in healthcare should be regulated, seeking views from patients, clinicians, industry, healthcare providers and the wider public to inform future policy.
The initiative has been launched to support the work of the newly established National Commission into the Regulation of AI in Healthcare, which advises the MHRA on the long-term direction of health AI regulation.
The consultation aims to ensure that AI technologies used across the NHS and the wider healthcare system are safe, effective, proportionate to risk and capable of supporting innovation.
The MHRA has said the exercise is open to anyone, regardless of their familiarity with AI in healthcare, and is intended to capture a broad range of perspectives on what rules, safeguards, and responsibilities should govern the use of AI-driven tools.
The information gathered will feed into the Commission’s recommendations to the MHRA in 2026.
Artificial intelligence is already being deployed across healthcare, from diagnostics and screening to workflow optimisation and patient-facing tools. As these technologies become more advanced and adaptive, regulators are under pressure to ensure existing frameworks remain fit for purpose while not stifling innovation.
The MHRA’s call for evidence focuses on whether current regulatory rules need updating, how emerging risks can be identified and addressed quickly, and how accountability should be shared between regulators, developers, healthcare organisations and users.
Lawrence Tallon, Chief Executive of the MHRA, said, “AI is already revolutionising our lives, both its possibilities and its capabilities are ever-expanding, and as we continue into this new world, we must ensure that its use in healthcare is safe, risk-proportionate and engenders public trust and confidence. We want everyone to have the chance to help shape the safest and most advanced AI-enabled healthcare system in the world at this truly pivotal moment.”
The Commission is chaired by Professor Alastair Denniston, head of the UK’s Centre of Excellence in Regulatory Science in AI and Digital Health (CERSI-AI), who highlighted the need to look beyond technical performance and focus on real-world use.
He said, “We are starting to see how AI health technologies could benefit patients, the wider NHS and the country as a whole. But we are also needing to rethink our safeguards. This is not just about the technology ‘in the box’, it is about how the technology works in the real world.
It is about how AI is used by health professionals or directly by patients, and how it is regulated and used safely by a complex healthcare system such as the NHS.”
Patient safety has been positioned as a central theme of the consultation.
Professor Henrietta Hughes, Patient Safety Commissioner for England and deputy chair of the Commission, stressed the importance of public input, saying, “Patients bear the direct consequences of AI healthcare decisions, from diagnostic accuracy to privacy and treatment access. The lived experience and views of patients and the public are vital in identifying potential risks and opportunities that technologists and clinicians may miss.”
Stay tuned for more updates on Digital Health News.