Centre prescribes labels for all photorealistic AI content online
Why in the News?
India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, mandating prominent labelling of photorealistic AI-generated content and imposing strict takedown timelines for illegal and sensitive digital material. The amendments come into force on February 20, signalling a major regulatory push to tackle deepfakes, AI misinformation, and online harm.

Background
- India’s digital ecosystem has expanded rapidly, with over 900 million internet users and a growing reliance on social media for news and communication.
- The original Information Technology Act, 2000 and subsequent 2021 IT Rules introduced a framework for intermediary liability, safe harbour, and due diligence obligations.
- Advances in generative AI have enabled highly realistic deepfakes, voice clones, and synthetic videos.
- Globally, governments are struggling to regulate AI-generated content without undermining free speech and innovation.
India has witnessed multiple instances of:
- Deepfake political content
- Non-consensual intimate imagery
- AI-enabled misinformation campaigns
- Fraud using synthetic voices/videos
The new amendments respond to these risks by strengthening accountability mechanisms for digital intermediaries.
Features
Mandatory Labelling of AI-Generated Content
- Platforms must ensure prominent disclosure of photorealistic AI-generated media.
- Users must declare whether the content they upload is AI-generated.
If users fail to disclose, platforms must:
- Label the content proactively, or
- Remove it if it involves harmful deepfakes.
The rules define synthetically generated content as media created or altered using computer resources in a way that appears indistinguishable from real persons or events.
Strict Takedown Timelines
- Illegal content flagged by a court/government → 3-hour removal
- Sensitive content (non-consensual nudity/deepfakes) → 2-hour removal
- Earlier timelines were 24–36 hours, making this a drastic tightening of compliance requirements.
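The labelling and takedown obligations above amount to a decision procedure for platforms. The following is a minimal illustrative sketch of that flow, not an actual compliance implementation; the field names (`user_declared_ai`, `flagged_sensitive`, etc.) are hypothetical and are not drawn from the text of the rules.

```python
from dataclasses import dataclass

# Illustrative sketch of the decision flow described in the rules.
# All field names are hypothetical; the rules prescribe obligations,
# not any particular data model.

@dataclass
class Content:
    user_declared_ai: bool       # did the uploader declare it as AI-generated?
    is_photorealistic_ai: bool   # platform's own assessment
    is_harmful_deepfake: bool    # e.g. non-consensual intimate imagery
    flagged_illegal: bool        # flagged by a court or government order
    flagged_sensitive: bool      # non-consensual nudity / deepfakes

def platform_action(c: Content) -> str:
    """Return the action and deadline implied by the amended rules."""
    # Flagged content carries the strictest deadlines.
    if c.flagged_sensitive:
        return "remove within 2 hours"
    if c.flagged_illegal:
        return "remove within 3 hours"
    # Otherwise, labelling obligations apply to photorealistic AI media.
    if c.is_photorealistic_ai:
        if c.is_harmful_deepfake:
            return "remove"
        if c.user_declared_ai:
            return "display prominent AI label"
        return "label proactively"
    return "no action required"
```

For instance, sensitive flagged content returns "remove within 2 hours", while declared AI media that is neither harmful nor flagged only needs a prominent label.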
Safe Harbour Conditionality
Failure to comply may result in loss of safe harbour protection, meaning platforms could be treated like publishers and held legally liable for user content.
Administrative Flexibility for States
- States may designate multiple officers to issue takedown orders.
- This reverses an earlier limit of one officer per state.
Narrower Definition than Draft Rules
- The final definition of synthetic media is more precise than the October 2025 draft, indicating industry pushback and regulatory balancing.
Challenges
Free Speech Concerns
- Risk of over-censorship
- Government-directed takedowns could be misused
- Chilling effect on satire, parody, and political speech
Technical Feasibility
- Detecting AI content in real time is difficult
- False positives may remove legitimate content
- Smaller platforms lack AI moderation infrastructure
Compliance Burden
- 2–3 hour deadlines are extremely tight
- Global platforms may struggle with India-specific timelines
- Operational costs will increase
Jurisdictional and Federal Tensions
- Multiple state officers issuing takedown orders may create:
  - Conflicting directives
  - Regulatory fragmentation
Innovation vs Regulation
- Startups may fear legal risk
- Excessive compliance could slow India’s AI ecosystem
Way Forward
Clear Operational Guidelines
- Standard protocols for identifying synthetic media
- Appeals mechanism for wrongful takedowns
- Transparent reporting
Independent Oversight
- Judicial or quasi-judicial review of takedown orders
- Safeguards against executive overreach
Platform–Government Collaboration
- Shared AI detection tools
- Industry standards for watermarking
Public Awareness
- Digital literacy campaigns
- User education on identifying deepfakes
Proportional Enforcement
- Tiered compliance expectations for small platforms
- Incentives for voluntary compliance
Conclusion
The amendments represent India’s most assertive attempt yet to regulate AI-generated misinformation and deepfakes. While the rules strengthen user protection and platform accountability, their success will depend on balancing innovation, free expression, and digital safety. A rights-respecting enforcement framework, transparent processes, and collaborative governance will be essential to ensure that regulation curbs harm without undermining democratic discourse.
