Meta Hit with Lawsuits Over Allegations of Promoting Hate Speech

2021-12-25

Meta has been hit with two related lawsuits seeking more than $150 billion in damages, the company's first major legal challenge since rebranding from Facebook. The suits (one filed in California Superior Court and the other in the United Kingdom) are brought on behalf of a class representing the Rohingya, a Muslim minority that has suffered severe, systematic violence in Myanmar. The suits allege that Facebook's recommendation algorithm promoted hate speech aimed at the Rohingya, that the platform actively steers users toward extremist groups, and that this contributed to the worsening persecution of the Rohingya. The complaint further alleges that high-level Meta executives knew of anti-Rohingya hate speech on the platform and allowed it to remain.

The suits come at a time when Meta faces increased calls for regulation from both sides of the political aisle, and many commentators blame Facebook for deepening political divides in the United States. The elephant in the room is Section 230 of the Communications Decency Act, which immunizes online platforms from liability for content posted by their users. This litigation seems poised to tug at that thread in what may be the start of a new era for online moderation: does the law shield companies that promote hate speech, or does the safe harbor stop at hosting? Section 230 protects companies that merely host hate speech, but it is silent on the question of promotion and circulation.

The allegation that Meta actively and knowingly contributes to polarization and the growth of hate groups cuts to the core of social media as an industry. Social media giants make money by keeping users engaged with their platforms, which lets them both serve ads and gather profile information to sell to advertisers. Unfortunately, nothing keeps users engaged like righteous anger, so content-serving algorithms learn to favor inflammatory content that confirms users’ existing biases. This phenomenon, often called a “filter bubble,” tends to produce echo chambers and ideological extremism within different pockets of a platform’s user base. If the court finds that Meta breached a duty by promoting hate speech, that finding could upend the industry’s traditional business model.
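To make that dynamic concrete, the following is a minimal, purely illustrative sketch in Python of an engagement-maximizing feed ranker. The post fields, scoring weights, and the stance-matching heuristic are all assumptions invented for illustration; they do not describe Meta's actual systems. The sketch simply shows that if a feed is sorted by predicted engagement, and predicted engagement rewards outrage and agreement with the user's existing views, the most inflammatory, bias-confirming posts rise to the top.

# Purely illustrative sketch -- NOT Meta's algorithm. All fields, weights, and
# the engagement heuristic below are invented to show how ranking a feed by
# predicted engagement alone can surface inflammatory, bias-confirming content.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float   # 0.0 (neutral) .. 1.0 (highly inflammatory) -- hypothetical label
    stance: float    # -1.0 .. +1.0 position on some divisive issue -- hypothetical label

@dataclass
class User:
    stance: float    # the user's own position on the same -1..+1 scale

def predicted_engagement(user: User, post: Post) -> float:
    """Toy engagement model: users react most to outrage-inducing posts
    that confirm their existing stance (an assumption for illustration)."""
    agreement = 1.0 - abs(user.stance - post.stance) / 2.0   # 1.0 = full agreement
    return 0.6 * post.outrage + 0.4 * agreement

def rank_feed(user: User, candidates: list[Post]) -> list[Post]:
    """An 'engagement-maximizing' feed simply sorts by the predicted score."""
    return sorted(candidates, key=lambda p: predicted_engagement(user, p), reverse=True)

if __name__ == "__main__":
    posts = [
        Post("Calm, balanced explainer", outrage=0.1, stance=0.0),
        Post("Moderately critical op-ed", outrage=0.4, stance=0.5),
        Post("Inflammatory post confirming the user's views", outrage=0.9, stance=0.9),
        Post("Inflammatory post opposing the user's views", outrage=0.9, stance=-0.9),
    ]
    user = User(stance=0.9)
    for post in rank_feed(user, posts):
        print(f"{predicted_engagement(user, post):.2f}  {post.text}")
    # The top of the feed is the outrage-heavy, bias-confirming post; the
    # neutral explainer ranks last -- the "filter bubble" dynamic in miniature.

Run as written, the toy ranker places the inflammatory, stance-confirming post first and the neutral explainer last, which is the feedback loop the plaintiffs describe: the optimization target is engagement, and divisive content happens to maximize it.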

*This post was authored by C. Blair Robinson, legal intern at Robinson+Cole. Blair is not yet admitted to practice law.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Robinson+Cole Data Privacy + Security Insider | Attorney Advertising

