Responsible AI in Healthcare: Why Governance, Not Intent, Defines Clinical Safety
By Dr. Sharad Maheshwari, Consultant Radiologist (Abdominal Imaging), Kokilaben Dhirubhai Ambani Hospital, Mumbai
Artificial intelligence is already being used in hospitals worldwide. For example, it helps decide who gets seen first in the emergency room, flags abnormal lab results, and can even suggest treatments for various diseases.
As these systems grow more powerful, the conversation about safety remains stuck in vague phrases like "ethical AI" and "trustworthy AI." That gap is not academic. In medicine, gaps cost lives. This is where the current definition of "Responsible AI" falls short. Responsibility in healthcare is not about intent or design principles; it is about control at the point of care. We call that governance.
The Real Problem: Probability vs. Responsibility
Here is the core issue: medicine demands actionable decisions, whereas AI produces probabilities. A doctor diagnoses pneumonia, documents it, and is legally responsible for the outcome.
An AI system does something fundamentally different. It estimates likelihood: a 91% chance of infection, an 82% probability of cancer. In that moment, responsibility does not sit with the algorithm. It sits with the doctor. The machine influences the decision, the human owns the consequences, and in healthcare, authority cannot be probabilistic if responsibility is deterministic.
The Failures You Don't See
Healthcare AI does not usually fail dramatically. It fails quietly, the way a car's brakes wear down gradually rather than snapping all at once. An algorithm trained in 2024 may degrade by 2026 as patient populations shift or hospital workflows change. No alarms go off.
The system continues to run, just less accurately, until harm accumulates. Then there is a more subtle threat: Invisible AI. These are systems that reorder patient queues, prioritise alerts, and auto-summarise clinical notes. They don't formally "diagnose." They influence, but in healthcare, influence is enough.
A queue algorithm that consistently delays atypical heart attack presentations has effectively made a clinical decision, without a single person legally accountable for the outcome.
What Safety Actually Requires
Safe AI in healthcare is not about good intentions baked in at design. It is about control at runtime, while the model is actually in use. A clinician must be able to override the system instantly and without friction. Every AI-influenced decision must be permanently traceable and auditable. The system must detect its own degradation before harm occurs, not after. And legal responsibility must remain clearly human and pre-defined. The algorithm's complexity cannot function as a shield in court. These are not design preferences. They are clinical safety requirements. A system that cannot be overridden, audited, and stopped is not "high risk." It is ungovernable.
India's Opportunity & Its Risk
India is at a pivotal moment. The DPDP Act defines patient data rights with fines up to ₹250 crore for violations, and the IndiaAI Mission combined with the Ayushman Bharat Digital Mission is building one of the largest digital health ecosystems on earth, already linking health records for over 859 million people.
But data governance is not the same as decision governance. Linking that many health records is powerful. Deploying AI on top of them without control mechanisms is dangerous at a scale no healthcare system has attempted.
The model to watch is India's AIIMS Clinical Decision Support System: explicitly constrained, auditable, and clinically subordinate. If India standardises that principle across its entire health infrastructure, it does not just protect its own patients; it sets the global benchmark, especially for the Global South. If it doesn't, it scales risk at the population level.
The Question That Actually Matters
The debate around healthcare AI is framed incorrectly. The question is not whether an AI system is ethical. The real question is: can it be controlled when it is wrong? Because in medicine, accuracy is statistical. Even the best model will fail.
But safety is structural. And if no one can intervene, trace, and take responsibility when that failure happens, then the system was never safe to deploy in the first place.
Stay tuned for more such updates on Digital Health News