Healthcare presents both the greatest promise and the greatest peril for artificial intelligence. Diagnostic algorithms that detect cancer earlier than human radiologists could save countless lives. Algorithms that misdiagnose, that embed bias against certain populations, or that fail at critical moments could cause immeasurable harm. India's regulatory framework for healthcare AI attempts to balance innovation and safety, but the framework remains incomplete and the balance uncertain.
The Medical Device Regulatory Framework
The Central Drugs Standard Control Organisation regulates medical devices under the Medical Devices Rules, 2017. These rules classify devices by risk level and impose registration and conformity assessment requirements accordingly. AI-based diagnostic software can qualify as a medical device when it is intended for use in diagnosis, prevention, monitoring, or treatment of disease. The classification depends on the software's intended use and risk profile.
Class A devices, presenting the lowest risk, require only a self-declaration of conformity. Class B devices require manufacturer quality management certification. Classes C and D, presenting higher risks, face more intensive regulatory scrutiny, including clinical evidence. An AI system intended to triage emergency patients would likely fall into the higher risk classes requiring substantial clinical validation, while an AI system providing general wellness recommendations might escape device regulation entirely.
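The tiered logic described above can be sketched as a small lookup. The keyword triage and class assignments below are illustrative assumptions for the sketch only; the actual classification criteria in the Medical Devices Rules, 2017 are far more granular and turn on the full intended-use statement.

```python
from enum import Enum

class RiskClass(Enum):
    """Risk classes under the Medical Devices Rules, 2017 (A = lowest, D = highest)."""
    A = "A"
    B = "B"
    C = "C"
    D = "D"

# Evidence tiers as summarised in the text; paraphrased, not statutory language.
EVIDENCE_REQUIRED = {
    RiskClass.A: "self-declaration of conformity",
    RiskClass.B: "quality management certification",
    RiskClass.C: "regulatory review including clinical evidence",
    RiskClass.D: "intensive review with substantial clinical validation",
}

def required_evidence(intended_use: str) -> str:
    """Very rough triage of an intended-use statement to an evidence tier
    (hypothetical keyword rules, illustration only)."""
    use = intended_use.lower()
    if "wellness" in use:
        return "may fall outside device regulation entirely"
    if "triage" in use or "emergency" in use:
        cls = RiskClass.D
    elif "diagnos" in use:
        cls = RiskClass.C
    else:
        cls = RiskClass.B
    return f"Class {cls.value}: {EVIDENCE_REQUIRED[cls]}"

print(required_evidence("emergency patient triage"))
print(required_evidence("general wellness recommendations"))
```

The point of the sketch is the structure, not the thresholds: the regulatory burden scales with the claimed intended use, which is why intended-use statements are drafted so carefully.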
Software as a Medical Device
The concept of Software as a Medical Device presents particular challenges for AI. Traditional medical device regulation assumes relatively static products: a device is manufactured, tested, approved, and deployed. AI systems, particularly those employing machine learning, can evolve continuously as they encounter new data. The approved version may differ substantially from the version in clinical use six months later.
Regulators globally are grappling with this challenge. The United States FDA has proposed predetermined change control plans that would allow manufacturers to make specified modifications without seeking new approval. India's CDSCO has not yet articulated a comparable framework, leaving manufacturers uncertain about their obligations when models improve or adapt. The cautious approach is to seek guidance for any material modification, but this approach may impede beneficial innovation.
The Digital Health Mission
The Ayushman Bharat Digital Mission establishes a national digital health ecosystem including unique health identifiers, personal health records, and health information exchange. AI applications within this ecosystem must comply with the Health Data Management Policy, which imposes data protection requirements extending beyond the Digital Personal Data Protection Act (DPDPA) in healthcare-specific ways. The policy recognises health data as sensitive personal data requiring enhanced protection.
AI systems accessing health data through the Digital Health Mission infrastructure must register with the Health Information Exchange and Consent Manager. They must obtain informed consent for data access and processing. They must maintain audit logs of data access. These requirements create compliance obligations that pure medical device regulation does not impose. Healthcare AI developers face dual regulatory tracks: device regulation and data regulation.
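The audit-log obligation above can be made concrete with a minimal sketch of a log entry that records who accessed what, under which consent. The field names, identifiers, and JSON shape here are assumptions for illustration; the actual consent artefact and logging formats are defined by the Mission's own specifications.

```python
import datetime
import json

def audit_entry(system_id: str, health_id: str,
                consent_artefact_id: str, purpose: str) -> str:
    """Build one append-only audit-log line (JSON) for a health-data access.
    Hypothetical schema, for illustration only."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "accessing_system": system_id,          # the AI system reading the record
        "health_id": health_id,                 # patient's unique health identifier
        "consent_artefact": consent_artefact_id,  # proof of informed consent
        "purpose": purpose,
        "action": "read",
    }
    return json.dumps(entry)

log_line = audit_entry("ai-triage-01", "xx-xxxx-xxxx", "consent-123", "diagnosis")
print(log_line)
```

Even this toy example shows the dual-track point: none of these fields is required by device regulation, yet all flow from the data-regulation track.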
Clinical Validation Challenges
Medical device approval typically requires clinical evidence demonstrating safety and efficacy. For AI diagnostic tools, generating this evidence presents distinctive challenges. The gold standard of randomised controlled trials may be impractical or unethical for diagnostic tools. Retrospective studies using historical data face questions about dataset representativeness and temporal validity. Prospective studies face questions about ground truth when AI may outperform human diagnosticians.
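For a retrospective study of the kind described above, the headline evidence usually reduces to sensitivity and specificity computed against a historical ground truth. A minimal sketch, with hypothetical confusion-matrix counts:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Sensitivity and specificity from a retrospective confusion matrix."""
    sensitivity = tp / (tp + fn)  # proportion of true disease cases detected
    specificity = tn / (tn + fp)  # proportion of healthy cases correctly cleared
    return sensitivity, specificity

# Hypothetical study numbers, for illustration only.
sens, spec = diagnostic_metrics(tp=90, fp=40, fn=10, tn=860)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.90, specificity=0.96
```

The arithmetic is trivial; the regulatory questions are not. Both numbers are only as good as the historical labels and the representativeness of the dataset they were computed on, which is exactly where retrospective evidence is challenged.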
The Indian population presents additional validation challenges. AI systems trained primarily on Western populations may perform differently on Indian populations due to genetic diversity, disease prevalence variations, and healthcare system differences. Regulatory expectations around population-specific validation are evolving but point toward requirements for Indian-specific testing before deployment in Indian healthcare settings.
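The effect of prevalence variation is worth making explicit, because it means a tool can be unchanged yet perform very differently across populations. By Bayes' rule, positive predictive value depends on disease prevalence even when sensitivity and specificity are held fixed. A worked example with hypothetical numbers:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same test characteristics, two hypothetical populations:
low = ppv(0.90, 0.96, prevalence=0.01)   # rare disease
high = ppv(0.90, 0.96, prevalence=0.10)  # tenfold higher prevalence
print(f"PPV at 1% prevalence:  {low:.2f}")   # ≈ 0.19
print(f"PPV at 10% prevalence: {high:.2f}")  # ≈ 0.71
```

A positive result that is right about one time in five in one population is right about seven times in ten in another, with no change to the model, which is one quantitative reason population-specific validation is hard to avoid.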
Liability and Accountability
When an AI diagnostic tool misses a cancer that a competent human radiologist would have detected, who bears liability? The manufacturer, for producing a defective product? The healthcare provider, for relying on inadequate technology? The physician, for failing to exercise independent judgment? The regulatory framework provides incomplete answers to these questions.
Medical negligence law in India requires demonstration of duty, breach, causation, and damage. AI complicates each element. The duty of care when using AI tools remains undefined. Whether reliance on AI constitutes breach depends on whether such reliance meets the standard of reasonable medical practice, a standard that AI is itself changing. Causation in AI contexts involves complex questions about whether human oversight would have prevented harm. Prudent healthcare providers document their AI use carefully, maintain human oversight protocols, and ensure that AI recommendations receive clinical review.
Emerging Regulatory Initiatives
Recognising the inadequacy of existing frameworks for healthcare AI, Indian regulators have launched several initiatives. NITI Aayog's Responsible AI principles include healthcare-specific guidance. The Indian Council of Medical Research (ICMR) has issued ethical guidelines for biomedical and health research involving AI. State-level health authorities are experimenting with AI deployment in public health contexts, generating real-world evidence about implementation challenges.
The regulatory sandbox concept, successfully deployed in financial services, is being explored for healthcare AI. A sandbox would allow developers to test AI applications in controlled clinical environments with regulatory oversight but relaxed compliance requirements. Learning from sandbox deployments would inform the development of permanent regulatory frameworks. For healthcare AI innovators, participation in sandbox initiatives may offer both regulatory clarity and competitive advantage.