The notification of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Government's first move to regulate social media platforms, is focused, in essence, on regulating Big Tech through the prism of maintaining law and order.
This is an objective broadly aligned with interventions in countries such as Singapore, Russia and Germany, and distinct from regulations in, say, the European Union, where the individual's privacy is at the centre of policy.
These rules come even as a Joint Parliamentary Committee is in the final stages of discussing a privacy Bill that covers aspects such as the role of social media intermediaries, the reach of Big Tech companies, and the privacy accorded to users on such platforms.
Asked why the government had chosen to come out with the new guidelines as the larger umbrella law on data protection was pending, a senior official said the rules offered an “overarching architecture” for companies to follow while the law would still have to be passed by Parliament.
“There should not be double standards. If an attack is there at Capitol Hill (US Congress), then social media supports police action. But if there is an aggressive attack at Red Fort, the symbol of India’s freedom where the Prime Minister hoists the national flag, you have double standards. This is plainly unacceptable,” Union Law & IT Minister Ravi Shankar Prasad said Thursday while announcing the new rules.
Incidentally, Prasad had flagged these issues in Parliament earlier this month, where he first announced the government's renewed intent to issue these norms. Following the Red Fort incident, the government had asked US microblogging site Twitter to block certain accounts; Twitter did not immediately comply.
The new rules lay down 10 broad categories of content that platforms are prohibited from hosting. These include: content that “threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognizable offence or prevents investigation of any offence or is insulting any foreign States”; “is defamatory, obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy, insulting or harassing on the basis of gender, libellous, racially or ethnically objectionable, relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws of India”, and others.
“Guidelines for Social Media Platforms: A Grievance redressal mechanism should be developed and there should be a Grievance Redressal Officer” — PIB India (@PIB_India), February 25, 2021
Another key issue brought up in Thursday’s guidelines is that of traceability on messaging platforms like WhatsApp. The government said that for purposes of “prevention, detection, investigation, prosecution or punishment of an offence related to the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, or public order, or of incitement to an offence relating to the above or in relation with rape, sexually explicit material or child sexual abuse material, punishable with imprisonment for a term of not less than five years”, a messaging platform shall enable identification of the “first originator” of an unlawful message.
For many years now, the government has been asking Facebook-owned WhatsApp to enable traceability, particularly after mob-lynching deaths in which messages allegedly spread on the platform were used to mobilise crowds. WhatsApp has denied these requests.
Globally, too, other jurisdictions such as the US, the UK, and Australia have written to Facebook indicating that its plan to extend end-to-end encryption on its messaging platforms should not exclude a means for lawful access to the content of communications to protect citizens.
Lawmakers in Brazil had also proposed legislation to force companies to add a permanent identity stamp to the private messages people send, which WhatsApp said would result in platforms having to “trace who-said-what and who-shared-what for billions of messages sent every day”.
Furthermore, several jurisdictions globally, including Singapore, Australia, Germany and Russia, have enacted laws to prevent unlawful content from being published on social media.
For example, Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA), which came into force in October 2019, gave the government powers to order online platforms to remove and correct what it deems to be false statements that are “against the public interest”. This was backed by severe penal provisions carrying fines of up to 1 million Singapore dollars and a 10-year jail term.
While Singapore’s government brought in the law ostensibly to curb fake news on social media, it drew criticism from civil society for using the law to “silence critics and opponents ahead of elections”.
Australia’s “Sharing of Abhorrent Violent Material Act”, passed in 2019, also followed the terror attack in Christchurch, New Zealand, where a shooting was live-streamed on Facebook.
The Act introduced criminal penalties for social media companies, including jail sentences of up to three years for their executives and financial penalties of up to 10 per cent of a company’s global turnover.
Similarly, Germany’s Network Enforcement Act (NetzDG), which took full effect in early 2018, applies to social media companies with more than 2 million registered users in the country. Under the law, companies were required to set up procedures to review complaints about content they were hosting, remove anything clearly illegal within 24 hours, and publish compliance reports every six months.