Measures in Place to Detect Fake News, Hate Speech and Objectionable Content: Facebook to Delhi HC
Facebook has submitted before the high court that it cannot remove any allegedly illegal group from its platform, as removal of such accounts, or blocking access to them, falls within the discretionary powers of the government under the IT Act.

Online social media platform Facebook has claimed in the Delhi High Court that it has put in place measures like community standards, third party fact checkers, reporting tools and artificial intelligence to detect and prevent the spread of inappropriate or objectionable content like hate speech and fake news.

Facebook, however, has submitted before the high court that it cannot remove any allegedly illegal group, such as "Bois Locker Room", from its platform, as removal of such accounts, or blocking access to them, falls within the discretionary powers of the government under the Information Technology (IT) Act.

It has contended that any "blanket" direction to social media platforms to remove such allegedly illegal groups would amount to interfering with the discretionary powers of the government.

It further said directing social media platforms to block "illegal groups" would require such companies, like Facebook, to first "determine whether a group is illegal, which necessarily requires a judicial determination, and also compels them to monitor and adjudicate the legality of every piece of content on their platforms".

Facebook has contended that the Supreme Court has held that an intermediary, like itself, may be compelled to block content only upon receipt of a court order or a direction issued under the IT Act.

The submissions were made in an affidavit filed in court in response to a PIL by former RSS ideologue K N Govindacharya seeking directions to the Centre, Google, Facebook and Twitter to ensure removal of fake news and hate speech circulated on the three social media and online platforms, as well as disclosure of their designated officers in India.

Facebook has also replied to Govindacharya's application, filed through advocate Virag Gupta, seeking removal of illegal groups like "Bois Locker Room" from social media platforms for the safety and security of children in cyberspace.

On the issue of hate speeches, fake news and fake accounts on its platform, which was raised in the PIL, Facebook has contended that it has robust "community standards" and guidelines which make it clear that any content which amounts to hate speech or glorifies violence can be removed by it.

It has further claimed that it provides easy to locate and use reporting tools to report objectionable content including hate speech.

It has said it relies upon a combination of technology and people to enforce its community standards and keep its platform safe, i.e., by reviewing reported content and taking action against content which violates its guidelines.

"Facebook uses technological methods including artificial intelligence (AI) to detect objectionable content on its platform, such as terrorist videos and hate speech. Specifically, for hate speech Facebook detects content in certain languages such as English and Portuguese that might violate its policies. Its teams then review the content to ensure only non-violating content remains on the Facebook service.

"Facebook continually invests in technology to increase detection accuracy across new languages. For example, Facebook AI Research (FAIR) is working on an area called multilingual embeddings as a potential way to address the language challenge," it has claimed.

It has also claimed that its community standards have been developed in consultation with various stakeholders in India and around the world, including 400 safety experts and NGOs that are specialists in the area of combating child sexual exploitation and aiding its victims.

Facebook has also said that "it does not remove false news from its platform, since it recognises that there is a fine line between false news and satire/opinion. However, it significantly reduces the distribution of this content by showing it lower in the news feed".

Facebook has claimed that it has a three-pronged strategy -- remove, reduce and inform -- to prevent misinformation from spreading on its platform.

Under this strategy it removes content which violates its standards, including fake accounts, which are a major distributor of misinformation, it has said. It claimed that between January and September 2019 it removed 5.4 billion fake accounts, and that it blocks millions more at registration every day.

It also reduces the distribution of false news when it is marked as false by Facebook's third-party fact-checking partners, and informs and educates the public on how to recognise false news and which sources to trust.

Facebook has also claimed that it is "building, testing and iterating on new products to identify and limit the spread of false news".

It has also emphasised that "it is an intermediary, and does not initiate transmissions, select the receiver of any transmissions, and/or select or modify the information contained in any transmissions of third-party accounts".

In its affidavit it has also denied that it has been sharing users' data with American intelligence agencies.

On the issue of disclosing identities of designated officers in India, Facebook, like Google, has contended that there is no legal duty on it to formally notify details of such officials or to take immediate action through them for removal of fake news and hate speech.

It has said that the rules under the IT Act make it clear that designated personnel of intermediaries (such as Facebook) are only required to address valid blocking orders issued by a court and valid directions issued by an authorised government agency.
