On 1 March 2024, the Ministry of Electronics and Information Technology issued an advisory to every intermediary and platform in India requiring — for the first time in any government instrument worldwide — that "explicit permission of the Government of India" be obtained before deploying any under-tested or unreliable Artificial Intelligence model, Generative AI model or Large Language Model for users in India. The Advisory also mandated labelling of AI output and traceability of deepfakes. A revision dated 15 March 2024 softened the permission requirement but retained the labelling, consent and originator-identification obligations.
TL;DR for founders
If you run an AI product used by Indians — chatbot, image generator, voice-clone, LLM API — MeitY's March 2024 Advisory is the policy document you must internalise even though it is not a gazette rule. The core message: (a) flag anything "under-tested" to users, (b) label AI outputs, (c) keep metadata that identifies who created or modified deepfakes, (d) do not let your product produce content that breaks Rule 3(1)(b) of the IT Rules 2021 (misinformation, impersonation, obscenity, etc.). The 15 March 2024 revision dropped the pre-deployment permission requirement, but everything else stuck. Treat this as the policy bridge between the 26 Dec 2023 deepfake advisory and the Feb 2026 Synthetic Media Amendment Rules. Loss of safe harbour under Section 79 of the IT Act 2000 is the real penalty.
The Advisory in one paragraph
The MeitY Advisory dated 1 March 2024 ('the 1 March Advisory') is an executive instrument issued under the Ministry's supervisory powers relating to the Information Technology Act, 2000 ('IT Act') and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ('IT Rules 2021'). It directed every intermediary and platform to (i) ensure AI systems did not generate or permit bias, discrimination or threats to electoral integrity; (ii) obtain explicit prior Government of India permission before deploying "under-tested / unreliable" AI models, LLMs or generative AI to Indian users; (iii) label such outputs to warn users of "possible inherent fallibility or unreliability"; (iv) implement a consent-based pop-up mechanism before users interacted with the system; and (v) ensure labelling and metadata traceability of deepfake or synthetic content to its creator, modifier and intermediary. A revised Advisory dated 15 March 2024 ('the 15 March Advisory') removed the explicit-permission requirement but extended the labelling and metadata obligations to all AI intermediaries and platforms.
The 26 December 2023 precursor — why MeitY escalated
The March 2024 Advisory did not land in a vacuum. On 26 December 2023, MeitY had issued — through a PIB release (PRID 1990542) — an advisory to all intermediaries reiterating compliance with Rule 3(1)(b) of the IT Rules 2021, specifically in the context of misinformation and AI-generated deepfakes. That release emphasised the following operative obligations on intermediaries:
- Clear communication of prohibited content. Users must be informed, through terms of service and user agreements, of the categories of content prohibited under Rule 3(1)(b) — which includes defamatory, obscene, pornographic, paedophilic content, content invasive of another's privacy including bodily privacy, misleading information, and impersonation content.
- Penal consequences. Terms of service must highlight that violations of Rule 3(1)(b) can attract penal provisions under the Bharatiya Nyaya Sanhita, 2023 (previously the Indian Penal Code, 1860) and the IT Act.
- Reporting obligation. Platforms must report legal violations to law-enforcement agencies under applicable Indian laws.
The December 2023 Advisory followed a widely reported incident involving an AI-generated deepfake video of a film actor that circulated on social media in November 2023, and a series of "Digital India Dialogues" held by the Minister of State for Electronics & IT with intermediary representatives. The Minister expressed public dissatisfaction at the pace of industry response — which is why the 1 March 2024 Advisory escalated from "comply with existing rules" to "obtain explicit Government permission."
What the 1 March 2024 Advisory said
The 1 March Advisory is structured as a set of directions to intermediaries and platforms. Its operative clauses, synthesised from the Advisory's quoted language reported across Tier 2 sources and confirmed against the PIB precursor and subsequent MeitY materials, are:
- Bias, discrimination and electoral-integrity duty. Intermediaries and platforms must ensure that the use of AI models / LLMs / generative AI / software or algorithms on or through their computer resource "does not permit any bias or discrimination or threaten the integrity of the electoral process."
- Prior permission for under-tested AI. The use of under-tested or unreliable AI models / LLMs / generative AI / software or algorithms on the Indian internet "must be done only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated". Crucially, such models could be made available to users on the Indian internet "only with the explicit permission of the Government of India".
- Consent-based user warning. Users must be informed about "the possible and inherent unreliability of the output generated" through a "consent popup" before interaction.
- Labelling of deepfake / synthetic content. Any content that is potentially misinformation or a deepfake generated using the intermediary's computer resource must be labelled or embedded with permanent unique metadata identifying the information as synthetically generated or modified.
- Originator identification. Metadata must identify "the user of such computer resource that has caused any change or modification to such information" and must be configured such that the creator or first originator of misinformation or deepfake content can be traced.
- Action-taken report. Intermediaries were required to submit an "Action Taken-cum-Status Report" to MeitY within 15 days of the Advisory.
The 1 March Advisory was issued to all "significant social media intermediaries" — a term of art under Rule 2(1)(v) of the IT Rules 2021 defined by the five-million-registered-user threshold notified in February 2021 — and, in its scope, to "all Intermediaries and Platforms" offering AI services.
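For engineering teams, the metadata clauses above reduce to a provenance record attached at generation and extended, never overwritten, on every subsequent edit. A minimal sketch of what such a record could look like (the field names, such as `creator` and `modifiers`, are illustrative, not a schema prescribed by MeitY):

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, creator_id: str, tool: str) -> dict:
    """Build an initial provenance record for a piece of synthetic content."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "synthetic": True,                 # explicit synthetic-content flag
        "creator": creator_id,             # first originator of the content
        "tool": tool,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "modifiers": [],                   # appended to on each later edit
    }

def record_modification(record: dict, new_content: bytes, modifier_id: str) -> dict:
    """Append (never overwrite) a modifier entry so the full chain survives."""
    record["modifiers"].append({
        "modifier": modifier_id,
        "content_sha256": hashlib.sha256(new_content).hexdigest(),
        "modified_at": datetime.now(timezone.utc).isoformat(),
    })
    return record

# Illustrative flow: one creation, one later modification by a different user.
rec = provenance_record(b"<generated image bytes>", creator_id="user-1042", tool="imagegen-v2")
rec = record_modification(rec, b"<edited image bytes>", modifier_id="user-2311")
print(json.dumps(rec, indent=2))
```

In production this record would typically be embedded in the asset itself (for example via a C2PA-style manifest) rather than stored alongside it, so the label travels with the content.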
The 15 March 2024 revision — what was softened
Within a fortnight of the 1 March Advisory, industry — particularly Indian start-ups and overseas generative AI providers — objected that the "explicit permission" language was unworkable, since MeitY had no standing machinery to grant permissions at AI-product velocity. On 15 March 2024, MeitY issued a revised Advisory that made two key changes:
- Permission requirement removed. The requirement of explicit Government of India permission before deployment of under-tested AI models was dropped. Under-tested models could now be deployed in India so long as users were "informed of the possible inherent fallibility or unreliability of the output generated" through labelling and consent mechanisms.
- Action-taken report removed. The 15-day status-report requirement was dropped.
What was retained — and now forms the operative core — is the following:
- Labelling obligation for under-tested or unreliable AI output.
- Consent-pop-up before user interaction.
- Deepfake and synthetic-content labelling and metadata requirement.
- Originator and modifier identification through permanent unique metadata.
- Bias / discrimination / electoral-integrity duty.
- Due-diligence obligations under the IT Rules 2021 (Rule 3(1)(a)-(b) due diligence; Rule 3(2) grievance redressal) remained in force regardless.
The net effect of the 15 March revision was to convert the Advisory from a permission-regime proxy into a labelling-and-traceability regime proxy — which is the shape that eventually became binding law under the IT Amendment Rules 2026 (G.S.R. 120(E)).
Who is bound
The Advisory is directed at the following categories of actor:
- Intermediaries under Section 2(1)(w) of the IT Act, 2000. The definition covers any person who, on behalf of another person, receives, stores or transmits electronic records or provides any service with respect to such records — and expressly includes telecom service providers, internet service providers, hosting service providers, search engines, online payment sites, online auction sites, online marketplaces and cybercafés.
- Significant social media intermediaries under Rule 2(1)(v) of the IT Rules 2021. Social media intermediaries with more than five million registered users in India (the threshold notified in the 2021 Gazette of India).
- AI model deployers / generative AI platforms. Any platform "making available" AI services to users in India — including foreign-incorporated generative AI providers offering LLM, image, voice or video generation APIs to Indian users. The Advisory expressly extends to "AI models / LLMs / Generative AI / software or algorithms".
- Downstream deployers. Businesses that integrate an upstream LLM into a customer-facing product are themselves "intermediaries / platforms" for the purpose of the Advisory and bear the labelling and metadata obligations at their layer.
Penalties — loss of Section 79 safe harbour
The 1 March Advisory prescribes no direct monetary penalty. Its teeth are the teeth of the Information Technology Act, 2000 itself.
Section 79 of the IT Act provides intermediary safe harbour for third-party content, conditional on (a) the intermediary not initiating, selecting or modifying the content, (b) observing "due diligence" while discharging duties, and (c) not conspiring in or abetting unlawful acts. The IT Rules 2021 are the prescribed due-diligence framework. MeitY's advisories are executive guidance on what due diligence requires at any given time.
The Shreya Singhal actual-knowledge test. In Shreya Singhal v. Union of India, (2015) 5 SCC 1 ('Shreya Singhal'), the Supreme Court of India held that an intermediary's obligation to act on third-party content under Section 79 arises only on "actual knowledge" in the form of a court order or a notification by an authorised government agency that the information concerned is being used to commit an unlawful act. MeitY's advisories tighten the factual matrix around what amounts to "due diligence" — so that, after an advisory, an intermediary cannot credibly claim ignorance of Rule 3(1)(b)-type risks flowing from synthetic media.
Consequences of non-compliance.
- Loss of safe harbour under Section 79 IT Act — the intermediary becomes a potential primary obligor for user-generated unlawful content, with liability flowing under the Bharatiya Nyaya Sanhita, 2023 and sector-specific statutes.
- Court orders or MeitY takedown / blocking directions under Section 69A IT Act (read with the IT (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009) — these are independent of the Advisory but are typically triggered by the same facts.
- Post-DPDP consequences — from 13 May 2027, where synthetic media involves personal data of a Data Principal processed without lawful consent under Section 5 of the Digital Personal Data Protection Act, 2023, penalties of up to Rs. 250 crore apply under the Schedule to the DPDP Act.
The policy chain — from Advisory to Rule
The 1 March 2024 Advisory is best understood as a transition document between the IT Rules 2021 (as consolidated April 2023) and the binding synthetic-media regime that followed. The chain:
| Step | Date | Instrument | Effect |
|---|---|---|---|
| 1 | 26 Dec 2023 | PIB Advisory — compliance with Rule 3(1)(b) | Non-binding reminder of existing IT Rules 2021 obligations in the context of deepfakes. |
| 2 | 1 Mar 2024 | MeitY Advisory — AI permission + labelling | Escalated to require pre-deployment Government permission for under-tested AI; labelling and metadata mandates introduced. |
| 3 | 15 Mar 2024 | MeitY Revised Advisory | Permission requirement dropped; labelling / consent / metadata / traceability retained. |
| 4 | 22 Oct 2025 | MeitY Explanatory Note | Pre-notification rationale for the Synthetic Media Amendment. |
| 5 | 15 Nov 2025 | IT Rules 2021 Amendment (Rule 3(1)(d)) | Takedown timeline revised; operational prelude to Rule 3(3). |
| 6 | 10 Feb 2026 | IT Rules 2021 Synthetic Media Amendment (G.S.R. 120(E)) | Binding labelling, metadata and user-declaration mandate for synthetically generated information (see separate Veritect explainer). |
The Advisory's provisions are no longer standalone; they are absorbed into Rule 3(1)(d) and Rule 3(3) of the IT Rules 2021 as amended in February 2026.
Practitioner analysis
1. Advising AI start-ups — the Advisory still matters even after 15 March 2024
Client counsel sometimes treat the 1 March Advisory as "superseded" by the 15 March revision. That is a misreading. The 15 March Advisory retained labelling, consent and metadata obligations. Since both Advisories remain on MeitY's register as "current" (MeitY has not formally withdrawn either), and since the 2026 Amendment Rules now codify the labelling core, the relevant compliance baseline is the union of the retained 15 March obligations and the now-binding Rule 3(3). Client counsel should audit product flows for: consent pop-ups before LLM interaction; visible label on synthetic output; embedded metadata / C2PA-style provenance where feasible; originator-and-modifier log retention.
2. Model deployment checklists
A prudent deployment checklist for an AI intermediary in 2026 draws from the Advisory and codifies:
- Pre-launch: bias / discrimination / electoral-integrity impact assessment; model evaluation against Rule 3(1)(b) categories; reliability-testing report retained for regulator production.
- At launch: consent pop-up in English and at least one regional language; accessible labelling of synthetic output per Rule 3(3); metadata / provenance tag embedded at generation; takedown SLA configured to Rule 3(1)(d) three-hour clock.
- Post-launch: quarterly audit of originator-identification logs; incident-log review against Rule 3(1)(b) categories; board-level review of safe-harbour risk.
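The pre-launch and at-launch items above can be mechanised as a release gate, so that a deployment is blocked until every item is evidenced. A sketch of such a gate (the item names paraphrase the checklist and are not statutory terms):

```python
from dataclasses import dataclass, field

@dataclass
class LaunchChecklist:
    """Release gate for an AI feature. Item names are illustrative."""
    bias_impact_assessment: bool = False      # bias / electoral-integrity assessment done
    rule_3_1_b_evaluation: bool = False       # model evaluated against Rule 3(1)(b) categories
    reliability_report_retained: bool = False # testing report filed for regulator production
    consent_popup_en: bool = False            # consent pop-up, English
    consent_popup_regional: bool = False      # consent pop-up, at least one regional language
    synthetic_output_labelled: bool = False   # visible label on synthetic output
    provenance_metadata_embedded: bool = False
    takedown_sla_hours: float = 0.0           # configured takedown SLA
    failures: list = field(default_factory=list)

    def ready_to_launch(self) -> bool:
        """True only when every pre-launch / at-launch item passes."""
        checks = {
            "bias impact assessment": self.bias_impact_assessment,
            "Rule 3(1)(b) evaluation": self.rule_3_1_b_evaluation,
            "reliability report retained": self.reliability_report_retained,
            "consent pop-up (English)": self.consent_popup_en,
            "consent pop-up (regional language)": self.consent_popup_regional,
            "synthetic output labelled": self.synthetic_output_labelled,
            "provenance metadata embedded": self.provenance_metadata_embedded,
            "takedown SLA within 3 hours": 0 < self.takedown_sla_hours <= 3,
        }
        self.failures = [name for name, ok in checks.items() if not ok]
        return not self.failures
```

A CI pipeline could instantiate this from a signed compliance register and fail the release job whenever `ready_to_launch()` is False, which keeps the evidence file current by construction.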
3. Audit trail obligations
MeitY's 1 March Advisory treated the metadata trail as a compliance artefact — not an engineering nicety. Practitioners drafting data-retention policies should ensure that:
- Creation metadata (user ID, time, prompt hash) is retained for at least the CERT-In-mandated 180-day log-retention window under Direction (iv) of the CERT-In Directions dated 28 April 2022.
- Modification metadata is appended — not overwritten — so that first-originator and subsequent-modifier are both identifiable.
- Log-production protocols allow response to a Rule 3(1)(d) takedown order within three hours (post-Feb 2026), including the identity of the originator.
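The three retention properties above can be sketched as an append-only, hash-chained log: the first originator stays recoverable from the head of the chain, and any in-place overwrite is detectable on verification. The chaining scheme and field names are illustrative assumptions, not something mandated by the Advisory or the CERT-In Directions:

```python
import hashlib
import json
import time

class ProvenanceLog:
    """Append-only, hash-chained log of creation and modification events."""

    def __init__(self):
        self._entries = []

    def append(self, user_id: str, action: str, prompt: str) -> dict:
        """Record a "create" or "modify" event; entries are never rewritten."""
        prev_hash = self._entries[-1]["entry_hash"] if self._entries else "0" * 64
        entry = {
            "user_id": user_id,
            "action": action,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "ts": time.time(),
            "prev_hash": prev_hash,          # chains this entry to its predecessor
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def first_originator(self) -> str:
        """The first originator: the user on the head entry of the chain."""
        return self._entries[0]["user_id"]

    def verify_chain(self) -> bool:
        """Detect tampering: every entry must hash correctly over its body."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != recomputed:
                return False
            prev = e["entry_hash"]
        return True
```

Keeping the prompt as a hash rather than plaintext also limits the personal-data surface of the log itself, which matters once the DPDP Act overlay applies.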
4. Cross-border providers — India exposure
Foreign generative AI providers serving Indian users are intermediaries for the purpose of the IT Act. The 1 March Advisory made this explicit by using "all Intermediaries or Platforms" language. A foreign LLM provider refusing to implement Indian labelling is not simply ignoring a foreign guideline — it is risking safe-harbour loss for its Indian subsidiary, its local reseller, and (by extension) any Indian enterprise customer operating on the same API. This is a material consideration in vendor-selection and contract-negotiation for regulated enterprises.
5. Interaction with DPDP — the personal-data angle
Where a generative AI product is fine-tuned on personal data, or produces content that identifies a natural person, the Digital Personal Data Protection Act, 2023 overlays a second regulatory layer. From 13 May 2027, fine-tuning on personal data without Section 5 DPDP Act consent, or synthetic replication of a Data Principal's likeness, attracts up to Rs. 250 crore under the Schedule to the DPDP Act — independent of Section 79 IT Act safe-harbour consequences.
Founder checklist
- This month — document your AI product's bias-and-reliability testing; maintain a dated evidence file you can produce to MeitY if queried.
- Next 30 days — implement a consent pop-up before first LLM interaction warning users of "possible inherent fallibility or unreliability"; keep the text archived in a compliance register.
- Before 20 February 2026 — ensure visible labels and embedded metadata on all synthetic output per Rule 3(3) of the IT Rules 2021 (as amended); align with Rule 3(1)(d) three-hour takedown SLA.
- Ongoing — retain originator, modifier and intermediary metadata for a minimum of 180 days, aligning with Direction (iv) of the CERT-In Directions dated 28 April 2022.
- Before 13 May 2027 — audit your data-processing flows for Section 5 DPDP Act consent where the product touches personal data, and insert DPDP-compliant notices at the consent-pop-up layer.
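The consent-pop-up and compliance-register items above can be wired together in a simple gate that refuses a user's first model call until consent is recorded, and archives the exact notice text shown. The notice wording quotes the Advisory's language; everything else (class and method names, in-memory storage) is an illustrative assumption:

```python
from datetime import datetime, timezone

# Notice text quoting the Advisory's language; wording of the surrounding
# sentence and the storage layout below are illustrative only.
FALLIBILITY_NOTICE = (
    "This service uses a generative AI model. Output may have possible "
    "inherent fallibility or unreliability."
)

class ConsentGate:
    """Block a user's first model interaction until consent is recorded,
    archiving the notice text shown (for the compliance register)."""

    def __init__(self):
        self._consents = {}   # user_id -> consent record

    def record_consent(self, user_id: str) -> None:
        self._consents[user_id] = {
            "notice_text": FALLIBILITY_NOTICE,   # archived verbatim
            "consented_at": datetime.now(timezone.utc).isoformat(),
        }

    def generate(self, user_id: str, prompt: str) -> str:
        if user_id not in self._consents:
            # The caller should surface the pop-up, then call record_consent().
            raise PermissionError("Consent pop-up not yet acknowledged")
        return self._call_model(prompt)

    def _call_model(self, prompt: str) -> str:
        # Placeholder for the real model call; the visible label on output
        # is itself part of the Rule 3(3) obligation.
        return f"[AI-generated] response to: {prompt}"
```

Archiving the notice text alongside the timestamp, rather than a bare boolean flag, is what makes the record producible to MeitY if the adequacy of the warning is ever questioned.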
Frequently asked questions
Is the 1 March 2024 MeitY AI Advisory still in force? The 1 March 2024 Advisory was functionally replaced by the revised MeitY Advisory dated 15 March 2024, which removed the "explicit permission of the Government of India" requirement for under-tested AI models but retained labelling, consent and traceability obligations on intermediaries and platforms. Both advisories rely on Rule 3(1)(b) of the IT Rules 2021 and Section 79 of the IT Act, 2000 as their enforcement hooks. Neither is a gazette-notified rule — they are executive advisories that shape the safe-harbour due-diligence inquiry.
Does the Advisory bind foreign AI model providers? Yes, to the extent they qualify as intermediaries or platforms "making available" AI services to Indian users under Section 2(1)(w) of the IT Act, 2000. MeitY's 1 March 2024 Advisory is expressly directed at "all Intermediaries or Platforms", which covers foreign generative AI providers offering services to users in India. The Advisory's force flows from the conditional nature of Section 79 IT Act safe harbour — non-compliance risks loss of immunity for user-generated content.
What is the legal basis for MeitY issuing an AI Advisory rather than a rule? MeitY relies on Section 79(2)(c) of the IT Act, 2000 — which conditions intermediary safe harbour on observance of "due diligence" — read with the IT Rules 2021. Executive advisories do not create new law; they indicate what MeitY considers necessary due diligence. Courts apply the Shreya Singhal v. Union of India, (2015) 5 SCC 1 "actual knowledge" test to determine when intermediary immunity is lost, and advisories inform the factual matrix on which that question turns.
What penalty applies if an AI platform ignores the MeitY Advisory? There is no direct monetary penalty prescribed by the Advisory itself. The real exposure is loss of Section 79 IT Act safe harbour, which converts the intermediary into a potential primary obligor for user-generated unlawful content. Additional consequences include breach-of-contract claims from users, regulatory action under Rule 7 of the IT Rules 2021, and — post-DPDP — penalties under the Schedule to the Digital Personal Data Protection Act, 2023 if personal data is processed without lawful consent.
How does this Advisory connect to the Feb 2026 Synthetic Media Amendment? The 1 March 2024 Advisory was the policy bridge between the 26 December 2023 PIB advisory on deepfakes and the formally notified Rule 3(1)(d) amendment (15 November 2025) and the Synthetic Media Amendment Rules, 2026 (G.S.R. 120(E), notified 10 February 2026, effective 20 February 2026). The labelling and traceability concepts pioneered in the Advisory now appear in binding form under Rule 3(1)(d) and Rule 3(3) of the IT Rules 2021, as amended.
Sources
- MeitY — Press Releases and Advisories
- PIB — MeitY Advisory to Intermediaries on Misinformation and Deepfakes (26 December 2023)
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — consolidated April 2023 (MeitY PDF)
- Information Technology Act, 2000 — India Code
- MeitY Explanatory Note on Synthetic Media (22 October 2025)
- MeitY FAQ on Synthetic Media Amendment (October 2025)
This explainer is part of Veritect's Digital, Data & AI Law vertical. It is an original analysis prepared from Tier 1 government and regulator sources and does not reproduce or paraphrase any third-party commentary. For verification, consult the MeitY news register, the PIB release PRID 1990542, and the Gazette of India notification G.S.R. 120(E) dated 10 February 2026.