A deepfake depicting an Indian individual today exposes its creator, its deployer and the hosting platform to four parallel liability tracks: criminal prosecution under Section 66D of the Information Technology Act, 2000 (cheating by personation using a computer resource, up to three years' imprisonment and a fine of up to Rs. 1 lakh) and under Sections 318, 319, 336 and 356 of the Bharatiya Nyaya Sanhita, 2023; a civil suit for personality-rights injunction and damages grounded in Article 21 of the Constitution and K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1; regulatory action under Rule 3(1)(d) and Rule 3(3) of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended by G.S.R. 120(E) of 10 February 2026, effective 20 February 2026; and penalties of up to Rs. 250 crore under the Digital Personal Data Protection Act, 2023 where personal data is involved.
TL;DR for founders
If your AI product can generate an image, video or voice that looks or sounds like a real person, you are touching India's multi-track deepfake liability regime. Criminal exposure lives in the IT Act (Section 66D — three-year sentence) and the new BNS (Section 319 cheating by personation carries up to five years; Section 318 cheating up to seven years; Section 336 forgery up to two years; Section 356 defamation; Section 294 obscenity). Civil exposure — personality-rights injunctions — is live as of the Delhi High Court's Anil Kapoor order of September 2023. Regulators (MeitY, CERT-In, DPDP Board) add takedown, labelling and consent obligations with three-hour takedown clocks after 20 February 2026. Watermark every synthetic output, label it, keep originator metadata for 180 days, build a DPDP-grade consent flow for any training on personal likeness, and publish a takedown SOP your grievance officer can execute.
What counts as a "deepfake" / synthetically generated information
The Information Technology Act, 2000 ('IT Act') does not itself define deepfake. The statutory anchor sits in the IT Rules 2021, as amended by the Synthetic Media Amendment Rules, 2026 ('Synthetic Media Amendment') notified by MeitY via G.S.R. 120(E) dated 10 February 2026 and in force from 20 February 2026. The Synthetic Media Amendment defines "synthetically generated information" as audio, visual or audio-visual material that is artificially generated and is likely to be perceived as indistinguishable from a natural person or a real-world event. The MeitY FAQ accompanying the amendment clarifies three features:
- Synthetic information includes fully generated content and modified content where the alteration changes the perceived identity of a natural person or the reality of a real-world event.
- The labelling and watermarking obligation applies regardless of commercial or non-commercial intent.
- Purely textual content is outside the definition; the regime is calibrated to audio, image, video and audio-visual media.
For criminal and civil purposes, "deepfake" is practitioner shorthand for synthetic audio-visual content that impersonates a natural person. No statute names an offence "deepfake" — prosecutions run on the general cheating, personation, forgery, defamation, obscenity and privacy-invasion provisions.
Track 1 — Criminal liability
Section 66D Information Technology Act, 2000 — cheating by personation using a computer resource
Section 66D IT Act punishes any person who "by means of any communication device or computer resource cheats by personation". The penalty is imprisonment for up to three years and a fine of up to Rs. 1 lakh. This is the workhorse charge for monetised deepfake fraud — celebrity impersonation scams, AI-voice CEO-fraud, fake investment advisor videos.
Bharatiya Nyaya Sanhita, 2023 — the BNS layer
The Bharatiya Nyaya Sanhita, 2023 ('BNS') replaced the Indian Penal Code, 1860 with effect from 1 July 2024 and is the substantive criminal layer that runs alongside the IT Act. Five BNS provisions are decisive in deepfake prosecutions:
| Section | Offence | Maximum punishment |
|---|---|---|
| Section 318 BNS | Cheating (general) | Up to 3 years or fine; breach of duty up to 5 years; involving property up to 7 years |
| Section 319 BNS | Cheating by personation | Up to 5 years, or fine, or both |
| Section 336 BNS | Forgery (including electronic records) | Up to 2 years, or fine, or both |
| Section 356 BNS | Defamation | Simple imprisonment up to 2 years, or fine, or both, or community service |
| Section 294 BNS | Sale / public exhibition of obscene matter | Up to 2 years on first conviction; up to 5 years on subsequent |
Section 319 BNS is structurally close to Section 66D IT Act but carries a higher maximum sentence (five years against three). Section 336 BNS codifies forgery for both paper and electronic records — a deepfake that alters an electronic record to deceive is forgery within Section 336. Section 356 BNS continues the defamation regime in substantive terms comparable to Section 499 IPC. Section 294 BNS captures deepfake pornography where it falls within the obscenity threshold.
IT Act obscenity provisions — Section 67 and Section 67A
Where a deepfake is sexually explicit, Section 67 IT Act (transmitting obscene material in electronic form — imprisonment up to three years and fine up to Rs. 5 lakh on first conviction; up to five years and Rs. 10 lakh on subsequent conviction) and Section 67A IT Act (sexually explicit material — up to five years and Rs. 10 lakh on first conviction; up to seven years and Rs. 10 lakh on subsequent) apply. For children, Section 67B IT Act and the Protection of Children from Sexual Offences Act, 2012 add further offences.
Procedural anchor
Cognisable offences are registered at the jurisdictional police station. Section 173 of the Bharatiya Nagarik Suraksha Sanhita, 2023 ('BNSS') governs registration of information in cognisable cases (the FIR). Investigation for technology-enabled offences typically routes through state cyber-crime cells; the Indian Cyber Crime Coordination Centre ('I4C') under the Ministry of Home Affairs operates the National Cyber Crime Reporting Portal (cybercrime.gov.in) as a victim-facing intake.
Track 2 — Civil liability (personality and publicity rights)
India has no codified personality-rights statute. Protection is a judicial construct built from Article 21 of the Constitution (informational privacy after K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 ('Puttaswamy')) layered on common-law publicity and passing-off principles.
Puttaswamy — the constitutional foundation
The nine-judge Constitution Bench in Puttaswamy held that the right to privacy — including informational privacy — is intrinsic to Article 21 and the freedoms guaranteed in Part III of the Constitution. The judgment, delivered on 24 August 2017, provides the jurisprudential hook for subsequent HC injunctions in personality-rights litigation. While fundamental rights are primarily enforceable against the State, Indian courts have extended Puttaswamy's reasoning to private-party action through equitable jurisdiction.
Anil Kapoor v. Simply Life India and Ors — the Delhi HC precedent
In Mr. Anil Kapoor v. Simply Life India and Ors, CS(COMM) 652/2023, the Delhi High Court granted an ex-parte omnibus injunction restraining 16 named defendants and "the world at large" from unauthorised use of the plaintiff's name, likeness, image, voice, dialogues and persona — including through AI tools, face morphing and GIFs — for financial or commercial purposes. The order is available on the Delhi High Court's own judgment portal (CS(COMM) 652/2023, 20 September 2023). The order is regularly cited for two propositions: (i) personality rights are a recognisable civil entitlement in India, and (ii) deepfake and AI-generated misuse of a celebrity's persona is squarely within the scope of that entitlement.
Civil reliefs typically sought
Personality-rights plaintiffs typically seek: permanent injunction (often preceded by ex-parte ad interim relief), damages for commercial exploitation, rendition of accounts, and delivery-up / destruction of infringing material. Where the deepfake is hosted on intermediaries, plaintiffs then serve the order on those intermediaries, invoking Section 79 IT Act and the Rule 3(1)(d) takedown mechanism to operationalise the injunction.
Track 3 — Regulatory liability
IT Rules 2021 — Rule 3(1)(d) takedown obligation
Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ('IT Rules 2021') requires intermediaries to remove or disable access to information on receipt of actual knowledge — by way of a court order or appropriate-government notification — that the information is being used to commit an unlawful act. The Synthetic Media Amendment reduces the takedown window from 36 hours to three hours for the broad category of unlawful content, and to two hours for non-consensual intimate imagery and child sexual abuse material. The grievance-officer acknowledgement clock under Rule 3(2) (24 hours) and the resolution clock (15 days) remain applicable to user-submitted grievances.
IT Rules 2021 — Rule 3(3) synthetic-media labelling (as amended, in force 20 February 2026)
Rule 3(3) IT Rules 2021, as amended by the Synthetic Media Amendment, requires that synthetically generated information be labelled or marked such that the synthetic nature is prominent, easily noticeable and adequately perceivable. Implementation specifics include:
- For video: a visible on-screen watermark.
- For audio: a spoken disclaimer at the beginning of the audio file.
- For image: a prominent label.
- For all synthetic content: embedded metadata identifying the synthetic nature and, where feasible, the creator or first originator.
The earlier draft's fixed-percentage label-size requirement was dropped in the final notification; the standard is now qualitative (prominent, easily noticeable, adequately perceivable) rather than quantitative.
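By way of illustration, the sketch below shows one way a product team might apply both a visible label and embedded provenance metadata to a synthetic image. It assumes the Pillow imaging library; the watermark text, its position, and the metadata keys ("synthetic", "generator", "generated_at") are this article's illustrative choices, not formats prescribed by Rule 3(3).

```python
# Illustrative sketch only: visible label plus embedded provenance metadata
# for a synthetic image, using Pillow. Key names are hypothetical.
from datetime import datetime, timezone

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str, generator_id: str) -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Visible on-screen label; Rule 3(3)'s standard ("prominent, easily
    # noticeable and adequately perceivable") is qualitative, not a fixed size.
    draw.text((10, img.height - 30), "AI-GENERATED / SYNTHETIC MEDIA", fill=(255, 255, 255))
    # Embedded metadata identifying the synthetic nature and first originator.
    meta = PngInfo()
    meta.add_text("synthetic", "true")
    meta.add_text("generator", generator_id)
    meta.add_text("generated_at", datetime.now(timezone.utc).isoformat())
    img.save(dst_path, format="PNG", pnginfo=meta)
```

Video and audio pipelines would need equivalent treatment (an on-screen watermark track, a spoken disclaimer segment), which standard media toolchains can apply at render time.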
MeitY Advisory dated 1 March 2024 (as revised 15 March 2024)
MeitY's AI Advisory dated 1 March 2024 — reiterated in the revised Advisory dated 15 March 2024 — directs all intermediaries and platforms to (i) label under-tested or unreliable AI output, (ii) implement a consent pop-up before user interaction, (iii) label deepfake and synthetic content, and (iv) retain metadata identifying the originator and subsequent modifiers. Although the Advisory is not a gazette-notified rule, it structures the factual matrix within which "due diligence" under Section 79(2)(c) IT Act is assessed. See Veritect's separate explainer on the MeitY AI Advisory for the full chain from the 26 December 2023 PIB release (PRID 1990542) to the 2026 Amendment.
DPDP Act 2023 — consent for use of personal data
Section 6 of the Digital Personal Data Protection Act, 2023 ('DPDP Act') requires that consent for processing personal data be free, specific, informed, unconditional and unambiguous, with a clear affirmative action. "Personal data" under Section 2(t) DPDP Act means any data about an identifiable individual; a deepfake is, by definition, trained on or generates content that identifies a natural person, and its creation therefore processes personal data. Where such processing lacks Section 5 notice and Section 6 consent (or a Section 7 legitimate use), the Data Principal has rights under Sections 11 and 12 DPDP Act, including the rights to access information about the processing and to correction and erasure. Penalties under the Schedule to the DPDP Act reach Rs. 250 crore per instance of failure, assessed by the Data Protection Board.
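A minimal sketch of what a Section 6-grade consent record might capture follows. The LikenessConsentRecord structure and its field names are illustrative assumptions for this article, not statutory terms of art.

```python
# Illustrative consent record for likeness processing. Field names are
# assumptions, chosen to mirror the Section 6 qualities described above.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class LikenessConsentRecord:
    data_principal_id: str      # the identifiable individual (s.2(t) DPDP)
    purpose: str                # specific purpose, e.g. "voice-clone demo"
    notice_version: str         # version of the s.5 notice shown pre-consent
    affirmative_action: str     # e.g. "checkbox_clicked"; never pre-ticked
    granted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    withdrawn_at: str | None = None  # withdrawal should be as easy as grant

    def record_hash(self) -> str:
        """Tamper-evident digest of the record for audit trails."""
        payload = "|".join([
            self.data_principal_id, self.purpose,
            self.notice_version, self.affirmative_action, self.granted_at,
        ])
        return hashlib.sha256(payload.encode()).hexdigest()
```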
Track 4 — Constitutional dimensions
Article 21 — informational privacy
Article 21 of the Constitution of India guarantees the right to life and personal liberty, which Puttaswamy interpreted to include informational privacy. A deepfake that misuses a person's likeness intrudes on that informational privacy. The right is enforceable against the State through writ proceedings and has been judicially extended into private-party equity in personality-rights cases.
Article 19(1)(a) — free speech tension
Article 19(1)(a) protects the right to freedom of speech and expression, subject to Article 19(2) reasonable restrictions including public order, decency or morality, defamation, and incitement to an offence. Satirical, parodic and political-speech deepfakes involve genuine Article 19(1)(a) considerations. Courts apply proportionality — the more commercial, unlabelled and fraudulently purposed the deepfake, the weaker the speech defence; the more labelled, political, and non-commercial, the stronger.
Intermediary safe harbour — Section 79 IT Act and Shreya Singhal
Section 79 IT Act grants intermediaries immunity for third-party content conditional on (a) not initiating, selecting or modifying the information, (b) observing due diligence under the IT Rules 2021, and (c) not conspiring in or abetting unlawful acts. In Shreya Singhal v. Union of India, (2015) 5 SCC 1 ('Shreya Singhal'), the Supreme Court read down Section 79 to require an "actual knowledge" trigger — a court order or a notification by an authorised government agency. General user flagging, without more, does not constitute actual knowledge.
Three practical consequences follow once actual knowledge is triggered:
- Takedown clock starts: three hours for Rule 3(1)(d) unlawful content after 20 February 2026; two hours for non-consensual intimate imagery and CSAM.
- Evidence preservation: the intermediary should capture URL, timestamp, upload-metadata, originator identifiers and technical logs before any takedown — both for its own Section 79 defence and for the investigating agency's chain-of-custody requirements (a minimal capture sketch follows this list).
- Log-retention: CERT-In Directions dated 28 April 2022 ('CERT-In Directions 2022'), Direction (iv), require intermediaries to retain ICT-system logs for 180 days. That window is the minimum evidence horizon against which a deepfake incident must be reconstructed.
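A minimal evidence-capture sketch, assuming the content bytes and upload metadata are already in hand; the record layout and file naming are illustrative, not a prescribed format.

```python
# Illustrative pre-takedown evidence capture. Layout is an assumption.
import hashlib
import json
from datetime import datetime, timezone


def capture_evidence(url: str, content: bytes, upload_meta: dict) -> dict:
    record = {
        "url": url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # Content hash supports chain-of-custody verification later.
        "sha256": hashlib.sha256(content).hexdigest(),
        # Originator identifiers, client metadata, etc., as available.
        "upload_metadata": upload_meta,
    }
    # Persist alongside the 180-day ICT logs (CERT-In Direction (iv)).
    with open(f"evidence_{record['sha256'][:12]}.json", "w") as f:
        json.dump(record, f, indent=2)
    return record
```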
Who is liable — graded obligations
Creator
The individual or system operator who generates the deepfake is the primary obligor on criminal tracks (Section 66D IT Act; Sections 318, 319, 336, 356, 294 BNS; Section 67 / 67A IT Act). Creator liability also activates tortious causes of action on the civil track.
Deployer
The person or entity that distributes or integrates the deepfake into a public-facing product inherits intermediary obligations under Section 79 IT Act and the IT Rules 2021. Deployer liability is typically joint with the creator where both contributed to generation or publication.
Platform
The hosting intermediary — social media, cloud-storage, CDN, messaging service — is protected by Section 79 IT Act until actual knowledge is triggered, after which it becomes a primary obligor for continued hosting. Labelling and watermarking obligations under Rule 3(3) apply at the platform layer irrespective of knowledge, to the extent the platform "makes available" or "transmits" synthetically generated information.
Remedies — the victim's sequencing playbook
- FIR at the jurisdictional cyber-crime police station citing Section 66D IT Act and the relevant BNS sections (318, 319, 336, 356, 294 as applicable), with preservation requests for the URL, platform logs and hash of the content.
- Civil suit for permanent injunction and damages — typically framed as a personality-rights action in a Commercial Court or High Court, with a prayer for ex-parte ad interim relief pending hearing.
- Takedown notice to the hosting intermediary invoking Rule 3(1)(d) IT Rules 2021, enclosing the FIR copy (as appropriate-government or law-enforcement notification) or the court order from Step 2 — triggering the three-hour clock post-20 February 2026.
- DPDP Act complaint to the Data Protection Board under Section 27 DPDP Act where training data or content involves personal data processed without Section 5 / Section 6 consent.
- Writ before the High Court under Article 226 for systemic or State-actor involvement — particularly where law-enforcement inaction is itself an independent grievance.
Practitioner analysis
1. Advising a victim — parallel-track sequencing
A victim should not wait for a conviction to pursue a takedown. Parallel filing is the norm. The practical sequencing is: preserve evidence (screen-capture with URL, timestamp, notarised hash), file FIR same-day, send a 24-hour preservation notice to the platform, file the civil suit within 72 hours for ex-parte ad interim relief, and serve the order on the platform as actual knowledge. The takedown order becomes the operative document the intermediary cannot refuse to implement under the Rule 3(1)(d) three-hour clock.
2. Advising a platform — takedown SOP and knowledge triggers
A compliant intermediary SOP has six components: a single intake channel for court orders and government notifications, a 24x7 grievance officer per Rule 3(2), a three-hour action SLA on qualifying notices (two hours for non-consensual intimate imagery and CSAM), automated evidence capture (URL, timestamp, hash, upload metadata) at the moment of action, 180-day log retention under CERT-In Directions 2022, and periodic board-level review of safe-harbour posture. The SOP should distinguish between user flags (which do not trigger Shreya Singhal actual knowledge) and court orders / government notifications (which do). Overreaction to user flags creates Article 19(1)(a) liability; under-reaction to court orders forfeits Section 79 IT Act safe harbour.
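The knowledge-trigger and SLA logic described above can be sketched as follows. The NoticeType categories and function names are assumptions for illustration, not regulatory terminology.

```python
# Illustrative knowledge-trigger / SLA clock logic for a takedown SOP.
from datetime import datetime, timedelta
from enum import Enum


class NoticeType(Enum):
    COURT_ORDER = "court_order"
    GOVT_NOTIFICATION = "govt_notification"
    USER_FLAG = "user_flag"


def takedown_deadline(
    notice: NoticeType, received_at: datetime, is_ncii_or_csam: bool
) -> datetime | None:
    """Return the action deadline, or None where no actual-knowledge clock runs."""
    if notice is NoticeType.USER_FLAG:
        # Shreya Singhal: user flags alone are not actual knowledge;
        # route to the Rule 3(2) grievance queue (24h ack / 15-day resolution).
        return None
    hours = 2 if is_ncii_or_csam else 3  # post-20 February 2026 windows
    return received_at + timedelta(hours=hours)
```

On this logic, a court order received at 09:00 yields a 12:00 action deadline (11:00 if the content is non-consensual intimate imagery or CSAM), while a bare user flag returns no deadline and is handled under Rule 3(2) instead.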
3. Advising an AI product team — consent, watermarking, audit logs
Product teams building generative-AI features should hardwire five controls: (i) a DPDP-grade consent flow at the likeness-capture or training stage with Section 6 DPDP Act–quality records; (ii) visible watermarks and embedded metadata for all synthetic output under Rule 3(3) IT Rules 2021; (iii) model cards documenting training-data provenance and bias/reliability tests; (iv) per-generation audit logs capturing prompt hash, user identifier and timestamp, retained for at least 180 days under the CERT-In Directions 2022; and (v) a pre-deployment bias and electoral-integrity impact assessment as anticipated by the MeitY Advisory dated 1 March 2024.
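A per-generation audit-log sketch consistent with control (iv) follows; the append-only JSONL layout and field names are illustrative assumptions, not a mandated schema.

```python
# Illustrative per-generation audit log: prompt hash, user identifier and
# timestamp, retained for the 180-day window. Field names are assumptions.
import hashlib
import json
import time


def log_generation(
    user_id: str, prompt: str, output_id: str, log_path: str = "genlog.jsonl"
) -> None:
    entry = {
        "ts": time.time(),  # epoch timestamp of the generation event
        "user_id": user_id,
        # Store a hash rather than the raw prompt to limit personal data held.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_id": output_id,
    }
    # Append-only JSONL keeps the log tamper-evident and easy to rotate
    # against a 180-day retention policy.
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```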
4. Contractual allocation — B2B deepfake exposure
Enterprise customers integrating a generative-AI API from a vendor should extract: (i) indemnity for IP and personality-rights claims arising from model training data, (ii) warranty of Rule 3(3) labelling and watermarking support at the output layer, (iii) audit rights over training-data provenance, (iv) takedown cooperation in line with the three-hour Rule 3(1)(d) clock, and (v) a contractual SLA aligned to CERT-In Directions 2022 incident-reporting windows. Vendors seeking to limit exposure should separately capture consent, label outputs, and publish a transparent content-moderation policy.
5. Satire and fair-use positioning
Clients seeking to produce clearly satirical or political deepfakes should (i) apply Rule 3(3) labelling and watermarking in full — labelling is not a speech restriction but transparency; (ii) avoid monetisation framings that resemble endorsement or commercial exploitation; (iii) document the public-interest rationale; and (iv) preserve Section 6 DPDP Act consent trails where personal data is processed. A satire defence is stronger when the synthetic nature is disclosed and the commercial exploitation element is absent.
Founder checklist
- This quarter — write and publish your synthetic-content labelling and watermarking SOP; make it discoverable on your site and train your product team against it.
- By 20 February 2026 — make your grievance officer three-hour-ready for Rule 3(1)(d) court-order takedowns and two-hour-ready for non-consensual intimate imagery / CSAM categories.
- Before next release — integrate C2PA-style provenance metadata, consent-capture at likeness training, and prompt-hash logging retained for at least 180 days per Direction (iv) of the CERT-In Directions dated 28 April 2022.
- Contract terms — update vendor and customer contracts with deepfake indemnities, Rule 3(3) labelling warranties and Rule 3(1)(d) takedown cooperation language.
- Insurance — confirm that your tech-E&O and cyber policies cover personality-rights and Section 66D IT Act exposure, including defence costs for ex-parte injunction hearings.
Frequently asked questions
Can a victim use the Bharatiya Nyaya Sanhita and the IT Act simultaneously against a deepfake? Yes. Section 26 of the General Clauses Act, 1897 permits prosecution under either or any of two or more enactments covering the same act (though not double punishment for the same offence), and Section 81 of the Information Technology Act, 2000 gives the IT Act overriding effect without displacing prosecution under other statutes where the conduct independently constitutes a distinct offence. In practice, a deepfake-impersonation FIR typically aligns Section 66D IT Act (cheating by personation using a computer resource, up to three years and Rs. 1 lakh fine) with Section 318 and Section 319 BNS (cheating and cheating by personation, up to seven and five years respectively), Section 336 BNS (forgery, up to two years), Section 356 BNS (defamation), and — where sexually explicit — Section 67 / Section 67A IT Act and Section 294 BNS. The investigating officer aggregates the cognisable offences in the charge-sheet; charges are framed by the court at trial.
Does consent cure deepfake liability? Consent of the data principal can eliminate the civil personality-rights claim and can satisfy the lawful-processing requirement under Section 6 of the Digital Personal Data Protection Act, 2023 ('DPDP Act'), but it does not cure every track. Criminal liability under Section 66D of the Information Technology Act, 2000 turns on deceit of the viewer, not on the identity of the subject — impersonating a consenting celebrity to defraud a third party remains cheating. Regulatory obligations under Rule 3(1)(d) and Rule 3(3) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as amended 10 February 2026) are owed by intermediaries irrespective of subject consent — the labelling and takedown duties attach to the synthetic nature of the content, not to whether the depicted person permitted its creation.
What is a platform's duty of care when it receives notice of a deepfake? Section 79 of the Information Technology Act, 2000 grants intermediary safe harbour conditional on observance of IT Rules 2021 due diligence. Under Shreya Singhal v. Union of India, (2015) 5 SCC 1, the safe harbour narrows to an "actual knowledge" test — actual knowledge means a court order or a government notification. Rule 3(1)(d) of the IT Rules 2021, as amended with effect from 20 February 2026, requires takedown within three hours of a court order or appropriate-government direction for the broad category of unlawful content, with a two-hour window for non-consensual intimate imagery and CSAM. A user grievance received under Rule 3(2) must be acknowledged within 24 hours and resolved within 15 days. Platforms should preserve the content and surrounding metadata for at least 180 days under Direction (iv) of the CERT-In Directions dated 28 April 2022 and treat knowledge as a step-function event that triggers evidence-preservation and takedown SOPs.
Does Puttaswamy create a private cause of action for victims of deepfakes? K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 recognised the right to privacy — including informational privacy — as a fundamental right protected under Article 21 of the Constitution. Fundamental rights are primarily enforceable against the State through writ proceedings under Articles 32 and 226. However, Indian High Courts — including the Delhi High Court in Anil Kapoor v. Simply Life India and Ors, CS(COMM) 652/2023 (20 September 2023) — have consistently injuncted private actors' misuse of a person's likeness in deepfake contexts, grounding the civil remedy in a combination of common-law personality and publicity rights, tortious passing-off, and the jurisprudential foundation laid in Puttaswamy. A deepfake victim therefore typically combines: (i) a civil suit for injunction and damages in a competent commercial court or HC, (ii) a writ for data-protection and state-action angles, and (iii) an FIR under the IT Act and BNS.
What about satirical or fair-use deepfakes? India has no codified fair-use defence for personality rights. Article 19(1)(a) of the Constitution protects freedom of speech and expression including satire and parody, subject to Article 19(2) reasonable restrictions (public order, decency, defamation, sovereignty and integrity, contempt of court, incitement to offence). Courts apply a proportionality analysis — satirical deepfakes of public figures on matters of public interest, clearly labelled as synthetic, have a stronger defence than monetised, unlabelled, commercially-exploitative deepfakes. Rule 3(3) of the IT Rules 2021 (as amended 10 February 2026) requires prominent labelling and watermarking of synthetically generated content regardless of whether it is satirical; the labelling obligation is not itself a speech restriction but a transparency requirement. Satire does not defeat Section 66D IT Act if the intent is to deceive viewers about the identity of the speaker for gain.
Sources
- Information Technology Act, 2000 — India Code
- Bharatiya Nyaya Sanhita, 2023 — India Code
- Digital Personal Data Protection Act, 2023 — India Code
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — MeitY consolidated
- MeitY FAQ on Synthetic Media Amendment (October 2025)
- MeitY Explanatory Note on Synthetic Media (22 October 2025)
- PIB — MeitY Advisory on Deepfakes (PRID 1990542, 26 December 2023)
- Constitution of India — India Code
- Delhi High Court order — Anil Kapoor v. Simply Life India and Ors, CS(COMM) 652/2023 (20 September 2023)
- CERT-In Directions under Section 70B(6) IT Act (28 April 2022)
This explainer is part of Veritect's Digital, Data & AI Law vertical. It is an original analysis prepared exclusively from Tier 1 government and court sources — the Ministry of Electronics and Information Technology, the Press Information Bureau, the Delhi High Court's judgment portal, CERT-In, and India Code — and does not reproduce or paraphrase any third-party commentary. For verification, consult indiacode.nic.in for the statutory text, the MeitY FAQ on the Synthetic Media Amendment, and the Delhi High Court's judgment portal for the Anil Kapoor order.