Op-ed

Online Platforms are Making Up Free Speech Rules as They Go—and It’s Concerning


Banning users from social media platforms raises concerns about free speech protections online


The dramatic events of Jan. 6, 2021, represent not only a severe crisis for American democracy—the first takeover of the U.S. Capitol building since 1814—but also a possible tipping point in the power relationship between the U.S. government and the major online platforms.

Until recently, the U.S. government was considering whether to subject the platforms to stricter regulation—for example, by curtailing the immunity from liability they enjoy under Section 230 of the Communications Decency Act—in response to a widespread perception that they had failed to moderate harmful online content.

However, the decision by the largest online platforms to deplatform President Trump in the aftermath of the failed insurrection attempt marks an odd reversal of roles: The platforms have taken drastic steps to protect the democratic institutions of the U.S. government in lieu of the government itself, which failed to adequately protect the Capitol.

Almost in parallel with the deplatforming of the President by Facebook and Twitter, tech giants Apple and Google removed the right-wing social network Parler from their app stores, and Amazon cut off its web-hosting services.

These developments stand in marked contrast to the traditional hands-off policies long held by online platforms, which for many years resisted calls to moderate false content, citing concerns about becoming arbiters of truth. They were particularly reluctant to interfere with online content posted by world leaders, pointing to the public interest in access to the information they posted.

Nonetheless, the quick shift from refusing to serve as speech police to permanently blocking the President in response to the unprecedented postelection events raises questions about the compatibility of the platforms’ new policies and practices with core freedom of expression principles, and about the adequacy of the regulatory framework governing online platforms.

The need to address such questions was implicitly acknowledged by Twitter CEO Jack Dorsey, who described the measures taken by Twitter against the President’s account as setting a “dangerous” precedent.

As private entities, online platforms are not subject to constitutional limits on speech regulation under the First Amendment, but rather operate under a contractual framework established by terms-of-service agreements and community standards or rules.

Still, given the significant market share held by a few online platforms and the dominance of online speech in the marketplace of ideas and in political discourse, they have become critical gatekeepers in the world of information.

And while most still think of freedom of expression as a right exercised by individuals vis-à-vis governments, in the digital space the platforms operate as de facto governments, exercising lawmaking, law interpretation, and law enforcement functions, including—apparently—the power to permanently banish individuals, groups, and businesses, an outcome that might be regarded as the virtual equivalent of exile for life.

Lack of certainty around platforms’ policies and the processes of their interpretation and application also raises significant concerns about potential abuse of power, and about generating a chilling effect on political speech.

Now that online platforms are evolving from republishers of third-party information—who may choose to exercise some editorial control over such content, an activity protected by Section 230’s “Good Samaritan” clause, which authorizes the good-faith removal of objectionable content—into gatekeepers of democracy and defenders of the public interest, legal and political checks on their newly exercised power need to be reevaluated.

Three sets of questions should compose part of a review of platforms’ free speech policies, which can be undertaken by relevant stakeholders, including democratic government institutions, international bodies, academia, the media, and public interest groups:

Are community standards on political speech sufficiently detailed and precise in explaining what constitutes prohibited speech and what the consequences are for violating those standards? To retain broad legitimacy, such standards ought to be compatible with recognized benchmarks, such as those applied in rule-of-law democracies or under international human rights law.

Is decision-making regarding content moderation in political speech cases sufficiently transparent in terms of the process undertaken and the reasons given for specific speech-limiting decisions?

Do individuals and groups affected by content moderation decisions have an effective avenue of recourse to challenge such decisions?

Ensuring that online platforms integrate good governance, human rights, and rule-of-law safeguards in their community standards and actual operations may help mitigate the “dangerous” precedent of deplatforming President Trump.

This article was originally published in Fortune.