The US President’s crusade to tackle what he perceives as biased political censorship by social media platforms is gathering momentum.
The US Department of Justice, acting on behalf of the Trump administration, has sent draft legislation to Congress to reform Section 230 of the Communications Decency Act. According to Reuters’ good summary of the situation, only Congress can approve the reforms, and with the House of Representatives currently controlled by the Democrats and US politics as polarised as ever, the proposal is unlikely to go through, but it will at least force a public debate on the matter.
Trump, along with many prominent US ‘conservatives’, feels that the way social media platforms such as Facebook, Twitter and YouTube moderate the content they host is biased against him and his supporters. Since social media has become the main ground on which political and electoral battles are fought, the way it is censored is a matter of considerable public interest.
“For too long Section 230 has provided a shield for online platforms to operate with impunity,” said Attorney General William Barr. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America. We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”
“The Department’s proposal is an important step in reforming Section 230 to further its original goal: providing liability protection to encourage good behaviour online,” said Deputy Attorney General Jeffrey Rosen. “The proposal makes clear that, when interactive computer services willfully distribute illegal material or moderate content in bad faith, Section 230 should not shield them from the consequences of their actions.”
Section 230 was cobbled together in haste after cases like Stratton Oakmont (yes, that one) vs Prodigy, in which an internet platform was sued for defamation over allegations an anonymous poster had made about the plaintiff on its message boards. But as internet platforms have become increasingly central to all public discourse, they have also become more censorious. Section 230 reform is, at its core, an attempt to give a few private tech companies less control over the public square, while still allowing them to remove genuinely harmful and illegal content.
The DoJ has highlighted four areas in need of reform, the fourth of which addresses the matter of selective censorship, so it’s worth reproducing in full.
Promoting Open Discourse and Greater Transparency
A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.
- Replace Vague Terminology in (c)(2).
First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230(c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform’s ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”
- Provide Definition of Good Faith.
Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of “good faith” should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
- Explicitly Overrule Stratton Oakmont to Avoid Moderator’s Dilemma.
Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.
Essentially the proposed reforms seek to take the subjectivity out of social media censorship by tightening the language used to define the special status afforded to internet platforms. It has been clear to us for some time that the only solution to biased social media censorship is to make Section 230 protections contingent on platforms censoring only illegal content, as opposed to stuff they just don’t like the look of. These proposed reforms seem like a step in the right direction.