Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 – Regulation of AI-Generated and Synthetic Content
Background and Regulatory Context
On 10 February 2026, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment Rules”) under the Information Technology Act, 2000 (“IT Act”).
The Amendment Rules introduce a dedicated compliance framework governing artificial intelligence (AI)-generated and synthetically generated audio, visual and audio-visual content, including deepfakes and algorithmically altered media. The Rules establish clear responsibilities for online platforms in respect of content created or modified with AI tools, such as deepfakes or digitally manipulated videos, images or audio.
The framework expands intermediary due diligence obligations, mandates labelling and traceability of synthetic content, and substantially shortens timelines for takedown and grievance redressal. In practical terms, platforms must monitor such content more closely, clearly label AI-generated material, maintain records of its provenance, and remove harmful material far more quickly than before. The Amendment Rules will come into force on 20 February 2026.
The notification reflects the Government’s policy objective of balancing innovation in AI technologies with safeguards against impersonation, misinformation, non-consensual imagery, and other unlawful or deceptive uses of synthetic media.
Objectives of the Amendment Rules
The amendments seek to:
- Formally recognise AI-generated and synthetic media within the regulatory perimeter of the IT Rules;
- Enhance intermediary accountability for hosting and dissemination of such content;
- Introduce transparency through labelling and provenance requirements;
- Strengthen user awareness and grievance mechanisms; and
- Enable faster regulatory and law enforcement intervention in cases of harmful or unlawful content.
Overall, the aim is to make platforms more responsible, ensure users know when content is AI-generated, and allow quicker action against harmful or illegal material.
Scope and Applicability
The Amendment Rules apply broadly to intermediaries, including social media platforms, hosting providers, and AI-enabled tools and services that enable the creation, modification, publication or dissemination of digital content. In effect, the Rules cover most online platforms and services that allow users to create, edit, upload or share digital content, including AI-based tools.
Enhanced obligations are specifically prescribed for Significant Social Media Intermediaries (“SSMIs”), given their scale, reach and potential systemic impact. Larger platforms with a high number of users carry stricter responsibilities because content on them can reach and affect far more people. The framework extends to the entire lifecycle of synthetic content, including its creation, publication, hosting, sharing, moderation and removal.
Definition Clauses
The definitions of “audio, visual or audio-visual information” and “synthetically generated information” were not part of the original Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These definitions have been newly inserted through the 2026 Amendment Rules notified by the Ministry of Electronics and Information Technology.
Rule 2(1)(ca) – “Audio, visual or audio-visual information”
Any audio, image, photograph, graphic, video, moving visual recording, sound recording or other audio-visual content, whether created, generated, modified or altered through a computer resource. This includes all types of digital media, such as photos, videos, graphics or sound files created or edited using a computer or software.
Rule 2(1)(wa) – “Synthetically generated information”
Audio, visual or audio-visual information artificially or algorithmically created or altered using a computer resource so as to appear real or authentic and indistinguishable from an actual person or event. This refers to content made or changed using AI or software that looks real, even though it may be fake or digitally created (for example, deepfakes).
Routine or good-faith technical edits and enhancements are excluded.
Key Provisions & Regulatory Changes
- User Information and Platform Policies – Intermediaries are required to periodically inform users, at least once every three months, of platform rules, consequences of non-compliance and potential legal liabilities. Users must be made aware that violations may result in the removal of content, suspension or termination of accounts and reporting to appropriate authorities where mandated by law.
- Due Diligence for Synthetic Content – Platforms that allow AI-generated or AI-edited content must deploy tools and technology to detect and block illegal or harmful material, including fake, misleading, exploitative or fraudulent content. They must actively monitor such content and remove it promptly when found.
- Labelling and Traceability – AI-generated content must be clearly marked so that users know it is not real. Platforms must also attach technical identifiers that help trace where the content came from, and these labels and tracking details must not be removed or obscured (an illustrative metadata sketch follows this list).
- Obligations for Significant Social Media Intermediaries – Large platforms carry additional responsibilities. They must require users to declare whether their content is AI-generated, verify such declarations, and clearly label the content before it is shared. Failure to follow these steps may result in loss of safe harbour protection.
- Revised Compliance Timelines – The Amendment Rules significantly reduce response periods for regulatory compliance. Directions from the government or authorised authorities for the removal or disabling of access must be acted upon within three hours (a simple deadline-tracking sketch also follows this list). Grievance handling and complaint resolution timelines have also been shortened, reflecting an expectation of swift and proactive action by intermediaries.
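
The Amendment Rules do not prescribe a particular technical standard for labels or provenance identifiers. Purely as an illustrative sketch, the Python snippet below shows one way a platform might pair a visible label with an embedded traceability identifier for a synthetic media item; every name in it (SyntheticMediaRecord, label_synthetic_upload, the hash-based identifier scheme) is a hypothetical assumption, not terminology or a method drawn from the Rules.

```python
# Illustrative only: the Amendment Rules do not mandate this scheme or format.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SyntheticMediaRecord:
    """Hypothetical record pairing a media item with its label and provenance data."""
    content_sha256: str            # fingerprint of the media file
    generating_tool: str           # tool or model that created/altered the content
    created_at_utc: str            # creation timestamp
    visible_label: str = "AI-generated content"   # user-facing label
    provenance_id: str = field(default="", init=False)

    def __post_init__(self):
        # Derive a stable identifier from the content fingerprint and creation
        # metadata, so the same item always maps to the same traceability record.
        payload = json.dumps(
            {"sha256": self.content_sha256,
             "tool": self.generating_tool,
             "created": self.created_at_utc},
            sort_keys=True,
        )
        self.provenance_id = hashlib.sha256(payload.encode()).hexdigest()[:32]

def label_synthetic_upload(media_bytes: bytes, generating_tool: str) -> SyntheticMediaRecord:
    """Fingerprint the upload and build its labelling/traceability record."""
    return SyntheticMediaRecord(
        content_sha256=hashlib.sha256(media_bytes).hexdigest(),
        generating_tool=generating_tool,
        created_at_utc=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = label_synthetic_upload(b"<media bytes>", "example-image-model")
    print(record.visible_label, record.provenance_id)
```

Deriving the identifier deterministically from the content fingerprint means the same media item always maps to the same traceability record, which makes a stripped or swapped label easier to detect.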
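To make the shortened timelines concrete, here is a minimal sketch of deadline tracking for the three-hour removal window; the function names and the hypothetical receipt time are illustrative assumptions, not language from the Rules.

```python
# Illustrative sketch of tracking the three-hour takedown deadline.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # shortened timeline under the Amendment Rules

def takedown_deadline(received_at: datetime) -> datetime:
    """Latest time by which access must be removed or disabled."""
    return received_at + TAKEDOWN_WINDOW

def is_overdue(received_at: datetime, now: datetime | None = None) -> bool:
    """True if the direction has not been actioned within the window."""
    now = now or datetime.now(timezone.utc)
    return now > takedown_deadline(received_at)

# Hypothetical example: a direction received at 09:00 UTC must be actioned by 12:00 UTC.
received = datetime(2026, 2, 21, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(received))                           # 2026-02-21 12:00:00+00:00
print(is_overdue(received, received + timedelta(hours=2)))   # False
print(is_overdue(received, received + timedelta(hours=4)))   # True
```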
Implementation & Key Takeaways
The Amendment Rules mark a substantive shift in India’s digital governance framework and represent one of the first comprehensive statutory regimes specifically addressing AI-generated and deepfake content.
Intermediaries and technology platforms should utilise the implementation window to review and update their content moderation practices, user disclosures, technical safeguards and internal governance frameworks. Adoption of labelling systems, provenance tools, detection technologies and expedited grievance mechanisms will be critical to maintaining compliance and preserving safe harbour protections.
Organisations deploying or integrating AI-driven content tools should treat synthetic media compliance as a board-level governance and risk management priority rather than solely a technology function. Early alignment with the amended framework will assist in mitigating regulatory exposure and ensuring operational readiness once the Rules take effect.
