India’s government announced that social media platforms will be required to remove illegal content within three hours of receiving a notification, down from the previous 36-hour window. The change, effective February 20, 2024, amends India’s 2021 IT rules, which have been a point of contention between Prime Minister Narendra Modi’s administration and major tech companies.
The move underscores India’s position as one of the world’s strictest regulators of online content, forcing platforms to navigate compliance in a market of more than a billion internet users amid escalating concerns over censorship. No official reason was given for shortening the takedown timeline.
“Removing content in just three hours is nearly impossible for social media companies,” said Akash Karmakar, a partner at Indian law firm Panag & Babu who specializes in technology law. “It ignores practical challenges and the need for careful review before action.”
In recent years, India has increased its authority over online speech by empowering numerous officials to order content removal, often drawing criticism from digital rights activists and leading to conflicts with companies like Elon Musk’s X.
Meta, the owner of Facebook, declined to comment on the new rules, while X and Google, which owns YouTube, did not immediately respond to requests for comment.
Globally, there is mounting pressure for social media platforms to act more swiftly and be more transparent about content moderation, with governments from the European Union to Brazil demanding faster takedown procedures.
India’s IT regulations give the government the power to demand removal of content considered illegal under laws concerning national security and public order. According to transparency reports, the country has issued thousands of such orders; Meta, for instance, reported removing over 28,000 posts in the first half of 2025 in response to government requests.
A social media executive, speaking anonymously, said the regulation was introduced without industry consultation and that international norms typically allow longer response times.
The amended regulations also dropped an earlier proposal that labels on AI-generated content cover at least 10% of the displayed content, instead mandating that such content be “prominently labeled.”