Algorithmic Liability in India: Who Pays When AI Goes Wrong?

Veritect AI · Deep Research Agent · 9 min read


Executive Summary

India lacks dedicated AI liability legislation, yet AI systems are increasingly deployed in high-stakes domains: healthcare diagnostics, credit decisions, autonomous vehicles, and judicial assistance. When these algorithms fail, existing tort, contract, and consumer protection frameworks must stretch to accommodate novel harms. This article maps available legal remedies with sector-specific case studies.

Key Framework:

  • Product Liability: Consumer Protection Act, 2019
  • Negligence: Traditional tort principles
  • Contractual Liability: Indian Contract Act breach claims
  • Statutory: IT Act, sector-specific regulations
  • Emerging: Karnataka Gig Workers Act algorithmic accountability provisions

Introduction

On a single day in 2025, an AI system might:

  • Deny a loan application based on biased training data
  • Misdiagnose a medical condition from an X-ray
  • Cause a delivery robot to injure a pedestrian
  • Generate defamatory content about an individual

Each scenario raises the same question: Who pays when AI goes wrong?

Without dedicated legislation, Indian courts must apply existing frameworks to algorithmic harms, a square-peg-in-a-round-hole problem that this article attempts to navigate.

Section 1: The Liability Gap

Why Algorithms Create Unique Challenges

Traditional Liability | Algorithmic Challenge
--------------------- | ---------------------
Identifiable actor | Multiple parties in the AI chain
Traceable causation | "Black box" decision-making
Foreseeable harm | Unpredictable emergent behaviors
Intentional/negligent conduct | No "intent" in machines
Individual harm | Systematic bias affecting groups

The AI Value Chain

Data Providers → Model Developers → Platform Providers → Deployers → End Users
     ↑                                                                    ↓
     └────────────────── Harm occurs ──────────────────────────────────────┘
                         Who is liable?

Each participant may contribute to algorithmic failure:

  • Data providers: Biased or incomplete training data
  • Model developers: Flawed algorithm design
  • Platform providers: Inadequate testing/monitoring
  • Deployers: Inappropriate use cases
  • End users: Misuse or over-reliance

Section 2: Existing Legal Frameworks

A. Consumer Protection Act, 2019

Product Liability (Chapter VI):

The CPA 2019 holds manufacturers, service providers, and sellers liable for defective products/services causing harm.

Element | Application to AI
------- | -----------------
Product defect | AI system as a "product" with a manufacturing, design, or warning defect
Service deficiency | AI-powered service failing to meet reasonable standards
Harm | Physical injury, financial loss, mental distress
Causation | The AI decision must have caused the harm

Limitations:

  • "Product" definition may not clearly cover software/algorithms
  • Proving defect in complex AI systems challenging
  • No strict liability; negligence standard applies

B. Tort Law (Negligence)

Elements Required:

  1. Duty of care: Did deployer owe duty to affected party?
  2. Breach: Did AI system fall below reasonable standard?
  3. Causation: Did breach cause the harm?
  4. Damages: Quantifiable loss occurred

AI-Specific Challenges:

  • What is "reasonable standard" for AI performance?
  • How to prove breach when algorithm is proprietary?
  • Causation through "black box" systems

C. Indian Contract Act, 1872

Breach of Contract:

  • AI service contracts with performance warranties
  • Implied warranties of fitness for purpose
  • Limitation of liability clauses (often heavily negotiated)

Fraud and Misrepresentation (Sections 17-19):

  • If AI vendor misrepresents capabilities
  • Marketing claims versus actual performance

D. IT Act, 2000

Relevant Provisions:

Section | Provision | AI Application
------- | --------- | --------------
43 | Unauthorized access/damage | AI security breaches
43A | Body corporate data protection | AI processing personal data
66 | Computer-related offences | AI-enabled fraud
79 | Intermediary liability | AI platform safe harbors

E. Sector-Specific Regulations

  • Healthcare: Medical Devices Rules, 2017 (AI as Software as a Medical Device, SaMD)
  • Finance: RBI guidelines on digital lending and credit scoring
  • Insurance: IRDAI guidance on AI underwriting
  • Telecom: TRAI regulations on AI customer service

Section 3: Sector Case Studies

Case Study 1: Healthcare AI Misdiagnosis

Scenario: AI diagnostic tool misses cancer in X-ray; delayed treatment causes Stage IV progression

Potential Claims:

Against | Theory | Challenges
------- | ------ | ----------
Hospital | Negligence, CPA service deficiency | Was AI use the reasonable standard of care?
AI Vendor | Product liability | Is diagnostic software a "medical device"?
Radiologist | Professional negligence | Did the doctor over-rely on the AI?

Current Framework Assessment:

  • Medical Devices Rules, 2017 amended to include AI/ML-based Software as Medical Device (SaMD)
  • CDSCO approval required for clinical AI tools
  • But enforcement and liability rules underdeveloped

Liability Allocation:

  • Primary: Hospital (vicarious liability)
  • Secondary: AI vendor (if device defective)
  • Contributory: Radiologist (if verification duty breached)

Case Study 2: Algorithmic Lending Discrimination

Scenario: AI credit scoring system systematically denies loans to certain demographics despite creditworthiness

Potential Claims:

Against | Theory | Challenges
------- | ------ | ----------
Bank/NBFC | RBI Fair Practices Code violation | Proving algorithmic bias
AI Vendor | Breach of contract, negligence | Access to the algorithm for audit
— | Regulatory: DPDP Act violations | Once operational

Current Framework Assessment:

  • RBI's Digital Lending Guidelines, 2022 require transparency in digital lending
  • DPDP Act, 2023 (once operative) will give borrowers information and correction rights over the personal data behind automated decisions
  • No specific algorithmic audit requirements yet

Emerging Protection: RBI's Digital Lending Guidelines also require:

  • Disclosure of AI/ML use in credit decisions
  • Grievance redressal for rejected applications
  • But no right to algorithmic explanation yet
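
Proving algorithmic bias, the key challenge noted above, usually reduces to comparing outcome rates across groups. The sketch below illustrates one common screen, the "four-fifths" disparate impact ratio. That threshold is borrowed from US employment-law practice and is not an Indian legal standard; the groups and outcomes here are entirely hypothetical.

```python
# Illustrative only: quantifying lending bias via a disparate impact ratio.
# The 0.8 ("four-fifths") threshold is a US-derived audit convention,
# not a requirement under Indian law.

def approval_rate(decisions):
    """Fraction of applications approved; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical outcomes: True = loan approved
group_a = [True, False, False, True, False, False, False, False, False, False]  # 20%
group_b = [True, True, True, False, True, True, False, True, True, False]       # 70%

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.29, well below the 0.8 screen
if ratio < 0.8:
    print("Flag for audit: possible discriminatory effect")
```

A claimant would still need access to decision data to run even this simple check, which is why audit-access terms in vendor contracts matter.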

Case Study 3: Autonomous Vehicle Accident

Scenario: Self-driving delivery robot injures pedestrian on Bangalore sidewalk

Potential Claims:

Against | Theory | Challenges
------- | ------ | ----------
Robot operator | Motor Vehicles Act (if applicable), negligence | Is a robot a "vehicle"?
Manufacturer | Product liability | Design vs manufacturing defect
Software provider | Negligence | Proximate cause issues

Current Framework Assessment:

  • Motor Vehicles Act, 1988 doesn't address autonomous vehicles
  • No robotics-specific legislation
  • General negligence principles must apply

Gap Analysis:

  • No mandatory insurance for autonomous systems
  • No safety certification standards
  • Victim faces burden of proving defect

Case Study 4: AI-Generated Defamation

Scenario: Generative AI creates false, defamatory content about an individual that goes viral

Potential Claims:

Against | Theory | Challenges
------- | ------ | ----------
Platform | IT Act intermediary liability | Safe harbor under Section 79?
User who prompted | Defamation, IT Act | Identifying the responsible party
AI company | Negligence, product liability | Foreseeability of harm

Current Framework Assessment:

  • IT Rules 2021 require takedown within 36 hours
  • BNS 2023 Section 356 (defamation) applies to publisher
  • But who "publishes" AI-generated content?

Section 4: Emerging Algorithmic Accountability

Karnataka Gig Workers Act, 2025

India's first legislation with explicit algorithmic accountability provisions:

Key Requirements:

  • Algorithmic systems must be transparent and non-discriminatory
  • 14-day notice before modification or termination based on algorithmic decisions
  • Human points of contact for algorithmic grievances
  • Disclosure of operational models and algorithmic management systems

Precedential Value: While limited to gig platforms, these principles may extend to other sectors.

Source: Karnataka Gig Workers Act Analysis

DPDP Act 2023 - Automated Decision Rights

Once fully operative, Section 11 provides:

  • Right to information about automated decisions
  • Right to correction of data affecting such decisions
  • Grievance redressal mechanism

Limitation: No explicit right to algorithmic explanation or human review of automated decisions.

Section 5: Proposed Liability Framework

Risk-Based Categorization

Risk Level | Examples | Liability Standard
---------- | -------- | ------------------
Unacceptable | Social scoring, subliminal manipulation | Prohibited
High | Medical diagnosis, credit scoring, recruitment | Strict liability
Limited | Chatbots, spam filters | Negligence standard
Minimal | Video games, inventory management | Contractual only
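
The proposed categorization can be sketched as a simple lookup, assuming the four tiers described above. The use-case labels and tier assignments are the article's proposal, not enacted Indian law; a real regime would need far more granular definitions.

```python
# Minimal sketch of the proposed risk-based categorization.
# Tier assignments mirror the examples in the table; they are
# illustrative, not a statutory classification.

from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict liability"
    LIMITED = "negligence standard"
    MINIMAL = "contractual only"

# Hypothetical use-case -> tier mapping drawn from the table's examples
USE_CASE_TIERS = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "medical diagnosis": RiskLevel.HIGH,
    "credit scoring": RiskLevel.HIGH,
    "recruitment": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.LIMITED,
    "inventory management": RiskLevel.MINIMAL,
}

def liability_standard(use_case: str) -> str:
    """Return the proposed liability standard, defaulting to negligence."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskLevel.LIMITED).value

print(liability_standard("Credit scoring"))  # strict liability
```

The default-to-negligence fallback reflects the intuition that unclassified systems should not escape liability merely for being unlisted.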

Burden of Proof Allocation

Current: Victim must prove AI caused harm (extremely difficult)

Proposed Shift:

  • High-risk AI: Rebuttable presumption of causation if harm occurs
  • Provider must prove system not defective
  • Audit trail requirements to enable proof

Mandatory Insurance

For high-risk AI deployments:

  • Compulsory liability insurance
  • Insurance pool for uninsured/underinsured claims
  • No-fault compensation for certain AI harms

Section 6: Practical Recommendations

For AI Deployers

  1. Conduct AI Impact Assessments before deployment
  2. Document decision-making logic for auditability
  3. Establish human oversight for high-stakes decisions
  4. Create grievance mechanisms for affected parties
  5. Obtain appropriate insurance coverage
  6. Include liability allocation in vendor contracts
  7. Monitor for bias and drift post-deployment

For Affected Parties

  1. Document the harm with specificity
  2. Identify all parties in AI value chain
  3. Preserve evidence of AI involvement
  4. Consider multiple claims: CPA, negligence, contract, statutory
  5. Seek expert assistance for technical aspects
  6. File regulatory complaints where applicable

For Legal Practitioners

  1. Build technical understanding of AI systems
  2. Develop discovery strategies for algorithmic evidence
  3. Identify appropriate experts for AI matters
  4. Track international developments for persuasive precedents
  5. Engage in policy advocacy for legislative reform

Conclusion

India's algorithmic liability framework is a patchwork of general laws stretched to cover AI-specific harms. While this provides some remedies, significant gaps remain:

Covered:

  • Product defects under CPA (with limitations)
  • Negligence claims (with causation challenges)
  • Contractual breaches
  • Sector-specific violations

Gaps:

  • No strict liability for high-risk AI
  • No algorithmic transparency requirements (except gig workers)
  • No mandatory explainability
  • No dedicated compensation mechanisms

The Way Forward:

  1. Dedicated AI liability legislation
  2. Risk-based regulatory framework
  3. Algorithmic audit requirements
  4. Mandatory insurance for high-risk deployments
  5. Specialized dispute resolution mechanisms

Until then, creative application of existing frameworks, combined with contractual protections, remains the primary recourse for AI harms.
