IT Rules 2026 Synthetic Media Amendment — Labelling, Metadata, 3-Hour Takedown

Regulatory Explainer · AI Governance · 10 Feb 2026 · Status: notified
Regulation covered
IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 — Synthetic Media
TL;DR

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (G.S.R. 120(E), notified 10 February 2026, effective 20 February 2026) introduce a binding definition of 'synthetically generated information' (Rule 2(1)(wa)) and impose labelling, embedded-metadata, user-declaration and three-hour takedown obligations on intermediaries, significant social media intermediaries and AI-generative platforms. Non-compliance forfeits Section 79 safe harbour under the Information Technology Act, 2000.

Veritect Legal Intelligence · Legal Intelligence Agent · 13 min read

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 were notified by the Ministry of Electronics and Information Technology on 10 February 2026 via Gazette notification G.S.R. 120(E), and came into force on 20 February 2026. The Amendment inserts a binding definition of "synthetically generated information" (Rule 2(1)(wa)), imposes prominent labelling and embedded-metadata obligations on all intermediaries generating or hosting such content (Rule 3(3)), and requires every significant social media intermediary to collect user declarations on synthetic content and verify them using technical measures (Rule 4(1A)).

TL;DR for founders

If your product lets users create or share AI-generated audio, images or video — or if you're a social platform above 5 million Indian users — the 2026 Amendment is now live and binding. Four things you must do: (1) prominently label every AI-generated output so a viewer can tell it is synthetic, (2) embed permanent metadata that traces back to your platform, (3) on upload, ask users to declare whether content is synthetic and use automated tools to verify the declaration, (4) take down flagged unlawful content within three hours of actual-knowledge notice, not thirty-six. Non-compliance forfeits Section 79 IT Act safe harbour — which means your platform becomes liable for user-generated illegal content, on top of DPDP Act exposure (up to Rs. 250 crore) where personal likeness is replicated without consent.

The Amendment in one paragraph

The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 ('2026 Amendment Rules') amend the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ('IT Rules 2021'), as consolidated to April 2023 and further amended on 15 November 2025. The Amendment inserts a legal definition of synthetically generated information in Rule 2(1)(wa); a clarification in Rule 2(1A) that the term "information" throughout the Rules includes SGI; a new Rule 3(3) imposing labelling, embedded-metadata and provenance obligations on intermediaries offering computer resources that enable creation or modification of SGI; and a new Rule 4(1A) obliging significant social media intermediaries ('SSMIs') to collect and verify user declarations regarding SGI. The Amendment also follows the 15 November 2025 amendment of Rule 3(1)(d), which reduced the takedown window on actual knowledge from thirty-six hours to three hours. Non-compliance forfeits safe harbour under Section 79 of the Information Technology Act, 2000.

What the Amendment does — provision by provision

The 2026 Amendment is short but structurally important. The MeitY FAQ, issued in October 2025 as the pre-notification companion document, identifies five operative provisions.

Rule 2(1)(wa) — Definition of "synthetically generated information"

Rule 2(1)(wa) is a new clause inserted into the definitions in Rule 2 of the IT Rules 2021. It defines "synthetically generated information" as:

"any information, in the form of audio, visual or audio-visual content, which is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that such information appears to be authentic or true and is likely to be perceived as indistinguishable from a natural person or a real-world event."

Three points stand out. First, the definition is media-modality scoped — it expressly covers audio, visual and audio-visual content but not text-only content. Second, the threshold is perceptual — content counts as SGI when a reasonable viewer or listener would likely perceive it as authentic, which captures deepfakes and photo-realistic synthetic imagery but not obvious cartoons or stylised AI art. Third, modification is on par with origination — a real photograph that has been algorithmically altered to add a feature not present in reality is SGI, not just pure AI-generated content.

Rule 2(1A) — Extension of "information" across the Rules

Rule 2(1A) clarifies that any reference to "information" in the IT Rules 2021 — including in Rule 3(1)(b) (prohibited content), Rule 3(1)(d) (takedown) and Rule 3(2)(b) (grievance complaint) — includes synthetically generated information. The effect: unlawful deepfakes are directly within the existing takedown framework without needing a separate pathway. An impersonation deepfake under Rule 3(1)(b)(iv), a non-consensual intimate synthetic image under Rule 3(2)(b), or a misinformation deepfake under Rule 3(1)(b)(v) all become actionable under the existing grievance and takedown machinery — tightened by Rule 3(1)(d)'s three-hour clock.

Rule 3(3) — Labelling, metadata, provenance

Rule 3(3) is the primary behavioural obligation imposed by the 2026 Amendment. It applies to any intermediary that offers, through its computer resource, the ability to create, generate, modify or alter SGI. The Rule imposes a two-limb obligation: (a) technical measures to prevent the creation of SGI that is independently unlawful under existing Indian law (such as child-sexual-abuse material under the Protection of Children from Sexual Offences Act, 2012 or Section 95 of the Bharatiya Nyaya Sanhita, 2023; false-documents / impersonation content; deceptive depiction of natural persons), and (b) labelling and metadata obligations for all other SGI. The labelling limb requires:

  • Prominent visual labelling — visual SGI must carry a label that is "prominent, easily noticeable and adequately perceivable" by the viewer. MeitY's FAQ replaced an earlier "10 percent of display surface area" quantitative threshold with this qualitative standard.
  • Prominent audio disclosure — audio SGI must be prefixed with an audio disclosure, perceivable by the listener.
  • Embedded machine-readable metadata — SGI must be embedded with "permanent metadata or other appropriate technical provenance mechanisms" containing a unique identifier traceable to the intermediary that generated the content. The metadata must be configured such that it cannot be stripped by the ordinary user, "to the extent technically feasible."

Rule 3(3) further prohibits modification of labelled SGI in a way that removes the label or metadata — a secondary obligation that binds both the originating intermediary and any downstream platform or user.
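In engineering terms, the embedded-metadata limb amounts to attaching a tamper-evident provenance record to each generated asset. The sketch below is a minimal illustration, not a prescribed format: the Rules specify no wire format, and field names such as `platform_id` are hypothetical. A keyed signature (rather than the bare digest used here) would be needed to make stripping or alteration detectable in practice.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, platform_id: str) -> dict:
    """Build a machine-readable provenance record for a piece of SGI.

    The record carries a unique identifier traceable to the generating
    intermediary, mirroring Rule 3(3)'s metadata requirement. All field
    names are illustrative assumptions, not prescribed by the Rules.
    """
    record = {
        "sgi": True,                                       # declares the asset synthetic
        "platform_id": platform_id,                        # traceable to the intermediary
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to the bytes
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # Integrity digest over the canonicalised record; a real deployment
    # would use a platform-held signing key instead of a bare hash.
    record["record_digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

rec = build_provenance_record(b"fake-image-bytes", "example-platform")
print(rec["sgi"], len(rec["content_sha256"]))  # → True 64
```

The record would then be embedded in the asset container (e.g. image or video metadata) "to the extent technically feasible", with the non-strippability expectation enforced by the signature scheme rather than by the JSON itself.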

Rule 4(1A) — SSMI user-declaration and verification

Rule 4 of the IT Rules 2021 sets out the additional due-diligence obligations of significant social media intermediaries — platforms with more than five million registered users in India (the threshold first notified in 2021). The 2026 Amendment inserts Rule 4(1A), which adds two duties before any information is uploaded, published or made available on an SSMI:

  1. User declaration — the SSMI must require the user to state whether or not the content being uploaded is synthetically generated information.
  2. Technical verification — the SSMI must use suitable technical measures, including automated tools, to check the accuracy of that declaration, "keeping in view the nature, format and source of the content".

Failure to take these measures is, per the MeitY FAQ, to be treated as a failure of due diligence — which means loss of Section 79 IT Act safe harbour for user-generated unlawful content. Importantly, Rule 4(1A) does not require perfect detection — it requires the deployment of "suitable" automated measures calibrated to the platform's technical capacity, leaving space for detection-model evolution.
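The declaration-plus-verification flow can be sketched as a pre-publication gate. The routing below is purely illustrative: `detector_score` stands in for a hypothetical SGI classifier's output, and the threshold and outcomes are design choices an SSMI must make and document, not values the Rules prescribe.

```python
from dataclasses import dataclass

@dataclass
class UploadDecision:
    accepted: bool
    needs_review: bool
    reason: str

def screen_upload(user_declared_sgi: bool, detector_score: float,
                  threshold: float = 0.8) -> UploadDecision:
    """Sketch of a Rule 4(1A) pre-publication check for an SSMI.

    detector_score: output of a hypothetical SGI classifier
    (0.0 = confidently authentic, 1.0 = confidently synthetic).
    """
    detector_says_sgi = detector_score >= threshold
    if user_declared_sgi == detector_says_sgi:
        # Declaration and detector agree: accept (and label if SGI).
        return UploadDecision(True, False, "declaration consistent with detector")
    if detector_says_sgi and not user_declared_sgi:
        # Detector flags likely SGI but the user denied it:
        # hold for human review (the SOP trigger discussed below).
        return UploadDecision(False, True, "possible undeclared SGI")
    # User declared SGI but the detector disagrees: accept with a label,
    # since over-labelling is the safer failure mode under the Rules.
    return UploadDecision(True, False, "declared SGI; label applied")
```

For example, `screen_upload(False, 0.9)` returns a hold-for-review decision, which is the disagreement case that Rule 4(1A)'s "suitable technical measures" language is aimed at.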

Rule 3(1)(d) — Takedown timeline reduced to three hours

Rule 3(1)(d) of the IT Rules 2021 was amended effective 15 November 2025 to reduce the takedown window from thirty-six hours to three hours. The 2026 Amendment retains this clock. On receipt of actual knowledge through a court order or a reasoned intimation from an authorised officer of the appropriate government or its agency, the intermediary must remove or disable access to the content within three hours. The clock runs from the time of actual knowledge; the intermediary is entitled to a reasoned order (consistent with Shreya Singhal v. Union of India, (2015) 5 SCC 1 ('Shreya Singhal')) and the notification must specify the unlawful act alleged.
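The clock itself is simple arithmetic, but encoding it explicitly in moderation tooling removes ambiguity about when the window closes. A minimal sketch (the timestamps are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Rule 3(1)(d) window, as amended effective 15 November 2025.
TAKEDOWN_WINDOW = timedelta(hours=3)

def takedown_deadline(actual_knowledge_at: datetime) -> datetime:
    """Deadline to remove or disable access, measured from the moment of
    actual knowledge (court order or authorised reasoned intimation)."""
    return actual_knowledge_at + TAKEDOWN_WINDOW

notice = datetime(2026, 2, 21, 10, 0, tzinfo=timezone.utc)
print(takedown_deadline(notice).isoformat())  # → 2026-02-21T13:00:00+00:00
```

Timezone-aware timestamps matter here: the clock runs from receipt of actual knowledge, and a naive local timestamp can silently shift the deadline.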

Who is bound

The 2026 Amendment's reach extends across three concentric layers of actor:

  • All intermediaries under Section 2(1)(w) of the Information Technology Act, 2000 — Rule 3(1)(d)'s three-hour takedown window applies to every intermediary. The labelling and metadata obligations under Rule 3(3) apply specifically where the intermediary's computer resource is used to create, generate, modify or alter SGI.
  • AI-generative platforms — any platform — whether Indian or foreign — whose computer resource enables the creation or modification of SGI falls within Rule 3(3). This includes LLM providers with multi-modal output, image generation APIs, voice-cloning services, video-synthesis platforms and face-swap apps. Foreign providers offering services to Indian users are intermediaries under Section 2(1)(w) of the Information Technology Act, 2000 and are bound.
  • Significant social media intermediaries — platforms above the five-million-user threshold bear the additional Rule 4(1A) duty to collect user declarations and verify them technically.

A downstream deployer — an enterprise that integrates an upstream LLM into a customer-facing product — is itself an intermediary under Section 2(1)(w) IT Act and bears Rule 3(3) obligations at its own layer. The cascade means both the upstream model provider and the downstream deployer must implement labelling and metadata.

The amendment chain

The 2026 Synthetic Media Amendment is the latest link in a chain dating back to February 2021. Tracking the chain matters for three reasons: (a) practitioners need to know what the Rule said in a given month when advising on pre-existing facts; (b) the supersedes / amends lineage drives proper version control on compliance documents; and (c) AI engines rank connected chronological chains higher than isolated articles.

| Version | Effective from | Effective until | Amends / supersedes | Key content |
| --- | --- | --- | --- | --- |
| IT Rules 2021 (base) | 25 Feb 2021 | 27 Oct 2022 | — | Core intermediary due diligence, grievance machinery, digital-media ethics code (Part III). |
| Amendment (Oct 2022) | 28 Oct 2022 | 5 Apr 2023 | Amends base | First-originator tracing for SSMIs; grievance-appellate-committee framework. |
| Consolidated (April 2023) | 6 Apr 2023 | 14 Nov 2025 | Supersedes Oct 2022 | Fact-check unit; online gaming due diligence (subsequently stayed by Bombay HC). |
| Rule 3(1)(d) amendment | 15 Nov 2025 | 19 Feb 2026 | Amends April 2023 consolidation | Takedown window reduced from 36 hours to 3 hours on actual knowledge. |
| Synthetic Media Amendment | 20 Feb 2026 | — (in force) | Amends Rule 3(1)(d) amendment | SGI definition; Rule 3(3) labelling + metadata; Rule 4(1A) user declaration + verification. |

Enforcement — loss of Section 79 safe harbour

The 2026 Amendment does not introduce a standalone monetary penalty. Its bite is the conditional nature of intermediary safe harbour under the IT Act, 2000.

Section 79 of the Information Technology Act, 2000 provides immunity to an intermediary for third-party information hosted, stored or transmitted on its computer resource, provided the intermediary (i) does not initiate, select or modify the content, (ii) observes due diligence while discharging its duties as prescribed by the Central Government, and (iii) does not conspire in or abet the unlawful act. The IT Rules 2021 are the prescribed due-diligence framework. After the 2026 Amendment, an intermediary that fails to label SGI, fails to embed metadata, fails to collect user declarations, or fails to take down on a three-hour clock is prima facie in breach of due diligence — and loses Section 79 immunity for the offending content.

The Shreya Singhal overlay. Shreya Singhal established that takedown obligations arise only on actual knowledge through a court order or an authorised government notification. The 2026 Amendment respects this for takedown (Rule 3(1)(d)). But Rule 3(3) and Rule 4(1A) are prospective duties — they apply continuously, at the time content is created or uploaded. Non-compliance with these is not a Shreya Singhal question of "did you know" but a due-diligence question of "did you implement the prescribed measure at all". This is a meaningful enlargement of the scope of duty.

Additional statutory overlays.

  • DPDP Act, 2023. From 13 May 2027, synthetic replication of an identifiable natural person's voice, face or likeness implicates Section 5 of the Digital Personal Data Protection Act, 2023 — processing of personal data requires the consent of the Data Principal. Unauthorised synthetic replication can attract up to Rs. 250 crore under Entry 5 of the Schedule to the DPDP Act.
  • BNS, 2023. Impersonation deepfakes engage Section 319 of the Bharatiya Nyaya Sanhita, 2023 ('cheating by personation using computer resource or communication device'). Obscene synthetic content engages Sections 294 and 296 BNS (extending to electronic form) and, for non-consensual intimate imagery, Section 77 BNS.
  • IT Act, 2000. Section 66D IT Act (cheating by personation using computer resource) and Section 67A IT Act (sexually explicit material in electronic form) apply to synthetic media that fits those elements.

Practitioner analysis

1. Drafting platform terms of service after the 2026 Amendment

Every intermediary's terms of service must now incorporate:

  • An SGI declaration clause — the user, on uploading content, expressly declares whether or not it is synthetic. The declaration is evidentiary: if false, the user forfeits platform protections and may be liable under Section 66D IT Act or Section 319 BNS.
  • Labelling notice — users must be told that AI-generated outputs generated on or through the platform will carry visible labels and machine-readable metadata.
  • Takedown acknowledgment — terms must disclose the three-hour Rule 3(1)(d) clock and the user's right to be furnished with the reasoned order underlying any takedown, per Shreya Singhal.
  • Non-strippability clause — terms must prohibit users from removing or attempting to remove embedded provenance metadata, consistent with Rule 3(3).

2. Moderation SOP updates

Moderation standard operating procedures must be updated along four axes: (a) a dedicated SGI queue that routes user declarations + automated verification signals to a human reviewer within the three-hour clock; (b) an escalation ladder that takes legal takedown orders to a response officer within one hour, leaving two hours for engineering execution; (c) logging that preserves the originator user ID, upload metadata, and automated-detector output for at least 180 days (aligning with Direction (v) of the CERT-In Directions dated 28 April 2022); (d) a legal review trigger whenever an automated detector flags content as likely SGI but the user declaration says otherwise.
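The logging axis, (c), can be sketched as a minimal log entry with an explicit retention horizon. The field names are illustrative assumptions; neither the Rules nor the CERT-In Directions prescribe a schema, only the 180-day retention floor.

```python
from datetime import datetime, timedelta, timezone

# Retention floor aligned with Direction (v), CERT-In Directions, 28 Apr 2022.
RETENTION = timedelta(days=180)

def make_sgi_log_entry(originator_user_id: str, upload_metadata: dict,
                       detector_output: float, now: datetime) -> dict:
    """Minimal moderation-log entry mirroring the SOP axes above.

    Records the originator, upload metadata and automated-detector output,
    and stamps the earliest date the entry may be purged.
    """
    return {
        "originator_user_id": originator_user_id,
        "upload_metadata": upload_metadata,
        "detector_output": detector_output,
        "logged_at": now,
        "purge_after": now + RETENTION,  # retain at least 180 days
    }

def is_purgeable(entry: dict, now: datetime) -> bool:
    """True only once the 180-day retention window has elapsed."""
    return now >= entry["purge_after"]
```

Treating `purge_after` as a hard floor (rather than purging on a rolling job without a per-entry stamp) makes the retention posture auditable entry by entry.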

3. Vendor contracts with AI providers

Enterprises procuring generative AI tooling should revise vendor contracts to allocate Rule 3(3) risk. Key clauses:

  • Labelling warranty — the AI vendor warrants that all outputs through the contracted API carry visible labels and embedded provenance metadata consistent with Rule 3(3).
  • Metadata persistence warranty — metadata must not be stripped by the vendor's downstream pipeline; if stripped, the vendor bears the Rule 3(3) liability.
  • Takedown cooperation — the vendor must supply, on request within 24 hours, logs identifying the originating user, the prompt, the generation timestamp, and any downstream modifiers — necessary for the customer to respond to a Rule 3(1)(d) takedown order.
  • Indemnity for Section 79 loss — the vendor indemnifies the customer for any loss of safe harbour arising from the vendor's failure to meet the labelling, metadata or user-declaration obligations at the vendor layer.

4. The "suitable technical measures" defence

Rule 4(1A) requires SSMIs to use "suitable technical measures, including automated tools" to verify user declarations. MeitY has deliberately not specified a model list or detection threshold — creating both flexibility and uncertainty. The practitioner's advice to an SSMI client is: (a) document the selection rationale — why this detector model, this threshold, this deployment architecture; (b) maintain a quarterly model-performance review, including false-positive and false-negative rates on a held-out set; (c) treat the documentation file as the evidentiary core if MeitY later questions the "suitability" of the measures. Good-faith, documented technical effort is the best available defence.
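Point (b) of that advice reduces to computing error rates over a labelled held-out sample. A minimal sketch, assuming each sample is a `(detector_flagged_sgi, actually_sgi)` pair drawn from the quarterly review set:

```python
def performance_review(results):
    """Compute false-positive and false-negative rates on a held-out set.

    results: list of (detector_flagged_sgi, actually_sgi) boolean pairs,
    the kind of labelled sample an SSMI would keep in its quarterly
    model-performance file.
    """
    fp = sum(1 for flagged, truth in results if flagged and not truth)
    fn = sum(1 for flagged, truth in results if not flagged and truth)
    negatives = sum(1 for _, truth in results if not truth)
    positives = sum(1 for _, truth in results if truth)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }
```

Archiving these figures alongside the selection rationale is what turns a detector deployment into documented, good-faith "suitable technical measures".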

5. Interaction with the Shreya Singhal doctrine

The 2026 Amendment creates two distinct duty-strata for intermediaries. Ex-ante duties (Rule 3(3) labelling, metadata, Rule 4(1A) declaration/verification) are continuous and do not depend on a takedown notice — a purely Shreya Singhal-based defence ("I had no actual knowledge") is inapplicable to these. Ex-post duties (Rule 3(1)(d) takedown) remain within the Shreya Singhal framework — actual knowledge via court order or authorised notification triggers the three-hour clock. Practitioners must separate these in advice and in litigation, since conflating them undermines both the compliance architecture and the defence posture.

Founder checklist

  • Before 20 February 2026 (already live) — deploy visible labels and embedded metadata on every synthetic audio, visual or audio-visual output generated through your platform per Rule 3(3) of the IT Rules 2021.
  • Within 30 days — update terms of service to include the SGI user declaration, labelling notice, three-hour takedown disclosure and non-strippability clause.
  • Within 60 days (SSMIs only) — deploy an automated-detection layer verifying user declarations per Rule 4(1A); maintain a quarterly performance-review file.
  • Ongoing — log originator user ID, prompt, generation timestamp, modifier chain and detector output for 180 days, aligning with CERT-In Directions dated 28 April 2022, Direction (v).
  • Before 13 May 2027 — audit for DPDP Act, 2023 consent where synthetic media replicates an identifiable natural person's likeness; implement Section 5 consent capture at the point of generation.

Frequently asked questions

What counts as "synthetically generated information" under the 2026 Amendment?

Rule 2(1)(wa) of the IT Rules 2021 (as inserted by G.S.R. 120(E), 10 February 2026) defines synthetically generated information as audio, visual or audio-visual content that is artificially or algorithmically created, generated, modified or altered using a computer resource in a way that appears authentic or is likely to be perceived as indistinguishable from a natural person or a real-world event. Text-only content is currently excluded from the definition, though Rule 2(1A) clarifies that "information" generally under the IT Rules 2021 continues to include text-form SGI for purposes of unlawfulness and takedown.

When must a platform take down unlabelled synthetic media?

Under Rule 3(1)(d) of the IT Rules 2021 (as amended 15 November 2025), an intermediary that has received actual knowledge through a court order or a reasoned intimation from an authorised government officer must remove or disable access to the content within three hours — reduced from the earlier thirty-six-hour window. For non-consensual intimate imagery and synthetic deepfakes falling within Rule 3(2)(b), the timeline is even shorter. Failure to take down within these windows is a breach of due diligence and risks loss of Section 79 IT Act safe harbour.

Does the Amendment require platforms to verify user declarations that content is synthetic?

Yes. Rule 4(1A) of the IT Rules 2021, as inserted by G.S.R. 120(E), requires every significant social media intermediary (SSMI) to (i) require the user, before uploading or publishing, to state whether the content is synthetically generated, and (ii) use suitable technical measures — including automated tools — to check the accuracy of that declaration, keeping in view the nature, format and source of the content. Failure to deploy such measures is treated as a failure of due diligence.

How does the Amendment interact with the DPDP Act consent for synthetic replication of likeness?

If synthetic media replicates the face, voice or likeness of an identifiable natural person, two regimes overlap. Under the IT Rules 2021, the content may be unlawful under Rule 3(1)(b)(iv) (impersonation) or Rule 3(2)(b) (non-consensual intimate imagery) and must be taken down within the three-hour Rule 3(1)(d) window on actual knowledge. Separately, Section 5 of the Digital Personal Data Protection Act, 2023 requires the subject's consent before personal data (which includes identifiable likeness) is processed; breach attracts a penalty of up to Rs. 250 crore under Entry 5 of the Schedule to the DPDP Act. From 13 May 2027, both regimes apply simultaneously.

How does the 2026 Amendment interact with the Shreya Singhal actual-knowledge doctrine?

Shreya Singhal v. Union of India, (2015) 5 SCC 1, established that an intermediary's obligation to act on third-party content under Section 79 of the Information Technology Act, 2000 arises only on receipt of "actual knowledge" through a court order or a notification by an authorised government agency. The 2026 Amendment does not displace Shreya Singhal — it operates within it. Rule 3(1)(d)'s three-hour clock starts on actual knowledge in the Shreya Singhal sense. But Rule 3(3) and Rule 4(1A) impose prospective due-diligence obligations (labelling, metadata, user-declaration verification) that do not depend on takedown notices — these are continuous obligations, non-compliance with which independently forfeits safe harbour.

Sources


This explainer is part of Veritect's Digital, Data & AI Law vertical. It is an original analysis prepared from Tier 1 government and regulator sources and does not reproduce or paraphrase any third-party commentary. For verification, consult the Gazette of India notification G.S.R. 120(E) dated 10 February 2026, the MeitY FAQ and Explanatory Note issued October 2025, and Section 79 of the Information Technology Act, 2000.

Primary source

Title: Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 — Synthetic Media
Issuer: Ministry of Electronics and Information Technology
Effective: 2026-02-20
Gazette: G.S.R. 120(E)

Tags

ai-governance synthetic-media deepfake IT-Rules-2021 MeitY Section-79 intermediary-liability
About Veritect

AI research & drafting, purpose-built for Indian litigation.

Veritect indexes 5 million+ judgments from the Supreme Court of India and all 25 High Courts, 1,000+ Central and State bare acts, and 50,000+ statutory sections — including the new BNS, BNSS, and BSA codes.

Built for Indian courts. Trusted by litigation practices from solo chambers to full-service firms.
