“Section 230” refers to Section 230 of the 1996 Federal Communications Decency Act, relevant portions of which are reproduced at the bottom of this post. As an article at The American Spectator explains, Section 230
protects online platforms from legal liability for the comments, posts, and videos that users share on social media. Currently, one may sue the person who posts inflammatory or defamatory content but not the companies that own the platforms. Without Section 230, Google, Facebook, and YouTube would face an endless sea of litigation.
What’s not to like about that? Well, the same article offers some dire predictions:
If the legal code treats social media platforms like traditional publishers, then they would face a choice: they could either strictly police content or stop policing it at all. Social media users would find two types of resulting platforms: a) those that are highly moderated and would, of course, anger virtually everyone (and conservatives especially), and b) those that would quickly resemble one’s spam file or an open sewer.
Are there other ways of regulating internet platforms to deter censorship of conservative points of view, either directly (through algorithms and human intervention) or by using excuses such as “disinformation” (i.e., facts and opinions that the operators of platforms don’t like)?
Let’s begin with the key language of Section 230:
No provider or user of an interactive computer service shall be held liable on account of….
any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected….
The phrases “otherwise objectionable” and “whether or not such material is constitutionally protected” give platform operators carte blanche to censor anything. And given the “liberal” bent of most platform operators, they will censor almost anything that seems to pose a serious threat to leftist dogmas and projects, the First Amendment to the contrary notwithstanding.
Why is the First Amendment relevant? Doesn’t it apply only to government? No, it also applies to private actors who are in fact state actors doing the bidding of the state with its encouragement and acquiescence. This is from a piece by Vivek Ramaswamy and Jed Rubenfeld in The Wall Street Journal (“Save the Constitution from Big Tech”, January 11, 2021):
Conventional wisdom holds that technology companies are free to regulate content because they are private, and the First Amendment protects only against government censorship. That view is wrong: Google, Facebook and Twitter should be treated as state actors under existing legal doctrines. Using a combination of statutory inducements and regulatory threats, Congress has co-opted Silicon Valley to do through the back door what government cannot directly accomplish under the Constitution.
It is “axiomatic,” the Supreme Court held in Norwood v. Harrison (1973), that the government “may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” That’s what Congress did by enacting Section 230 of the 1996 Communications Decency Act, which not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so….
Section 230 is the carrot, and there’s also a stick: Congressional Democrats have repeatedly made explicit threats to social-media giants if they failed to censor speech those lawmakers disfavored [emphasis and link added]. In April 2019, Louisiana Rep. Cedric Richmond warned Facebook and Google that they had “better” restrict what he and his colleagues saw as harmful content or face regulation: “We’re going to make it swift, we’re going to make it strong, and we’re going to hold them very accountable.” New York Rep. Jerrold Nadler added: “Let’s see what happens by just pressuring them.”
Such threats have worked. In September 2019, the day before another congressional grilling was to begin, Facebook announced important new restrictions on “hate speech.” It’s no accident that big tech took its most aggressive steps against Mr. Trump just as Democrats were poised to take control of the White House and Senate. Prominent Democrats promptly voiced approval of big tech’s actions, which Connecticut Sen. Richard Blumenthal expressly attributed to “a shift in the political winds.”
There are idiots in the so-called libertarian legal community who still defend Big Tech’s right to censor conservatives because Big Tech is “private”. Power is power, and the nation is under the thumb of a power elite, of which Big Tech is a leading-edge component.
Moreover, as the “Twitter files” exposé demonstrates vividly, Big Tech is sometimes nothing more than a puppet whose strings are pulled by agencies of the federal government (CIA, FBI, and Department of Justice in particular). It is not a coincidence that the string-pulling is intended to subvert candidates, facts, and opinions opposed to the left’s politicians and policies.
Section 230, as it stands, gives Big Tech (and its satellites) a license to censor on behalf of the state. The language of Norwood v. Harrison quoted above, though seemingly dispositive, is drawn from a district-court opinion in a civil-rights case from the 1960s. It is imperative, therefore, that a case be brought to the Supreme Court that directly challenges censorship conducted by Big Tech (and its satellites) behind the cloak of Section 230. (Two recently argued cases involving Big Tech are unlikely to go to the heart of the problem: platform operators as state actors.)
Absent a definitive Supreme Court ruling that bars censorship by state actors, Section 230 can be reformed only when (and if) Congress and the White House are controlled by Republicans. The Catch-22 is that absent a substantive revision of Section 230, Big Tech will still hold censorship power that can thwart the GOP’s efforts to regain control of the federal government.
In any event, the fix is relatively simple (in my opinion). The language of Section 230 should be changed as follows, with deletions indicated by strikethroughs and additions indicated by boldface:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an ~~interactive computer service~~ **internet platform** shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an ~~interactive computer service~~ **internet platform** shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or ~~otherwise objectionable~~ **defamatory of a non-public person or persons**, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)**; except that**

**(C) a provider or user of an internet platform may be held liable for any action taken to restrict access to or availability of material for any reason other than those listed in paragraph (2)(A); and**

**(D) a provider or user of an internet platform may be held liable for denying or restricting the technical means of accessing the material described in paragraph (1) unless the means are denied or restricted for a reason listed in paragraph (2)(A) or for non-payment of a fee that is applicable to all similarly situated providers or users.**
The substitution of “internet platform” for “interactive computer service” is intended to reflect the significant changes in the composition of internet entities during the past 27 years. The definition would be changed accordingly:
The term ~~“interactive computer service”~~ **“internet platform”** means any ~~information~~ service, system, or access software ~~provider~~ that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions. **“Internet platform” also means any website or means of communication which is accessed through the Internet and which provides or enables communication among users, the sale of products and services, or the exchange and dispensation of news, facts, opinions, or other expressions of ideas and points of view.**
The revisions are aimed at the four types of platform that are at the center of this controversy, plus a fifth type that provides the means by which the other four operate:
True social-media platforms (e.g., Facebook, Twitter, and Instagram), which afford users a way to share photos, videos, personal notes, family news, notifications of upcoming events, opinions, etc., with other users.
Commercial platforms (e.g., Amazon and Yelp), which offer products and services for sale and, crucially, also provide venues in which users of products and services may rate and review them.
Providers of information (e.g., Google, Wikipedia, and the internet arms of traditional media companies like The New York Times and NBC), whose offerings consist of “hits” on internet sources deemed relevant to a user’s query, user-generated and user-edited articles purporting to provide authoritative information on a wide range of subjects, or content that is open to comment by subscribers and/or the general public.
Platforms that host blogs (e.g., WordPress and Substack), some of which allow comments by readers and some of which don’t.
Providers of the internet’s hardware and software infrastructure (e.g., AT&T, Verizon, GoDaddy, and Microsoft).
All such platforms already have in place mechanisms for deleting material and banning users in violation of terms of service, including sub-rosa terms of service that are anti-conservative. I cannot see how the changes that I have proposed would lead to the bifurcation dreaded by The American Spectator:
“highly moderated” platforms “that would, of course, anger virtually everyone (and conservatives especially),” and “those [platforms] that would quickly resemble one’s spam file or an open sewer.”
In fact, nothing would change but the ability to publish and read more content that advances conservative views and scientific evidence against such things as mask-and-vaccinate theater, widespread lockdowns, school closures, “gender affirming care”, the psychological damage wrought by abortion, the foolishness and economic devastation caused by “climate change” hysteria — and on and on and on.
What about “misinformation”? True misinformation is all around us, all the time. The left is a leading purveyor of it. The only way to eliminate misinformation is to cripple the search for truth. That, of course, is precisely what the left wants to do because “misinformation”, as used by the left, really means facts and opinions that threaten leftist dogmas and programs.
47 U.S. Code § 230 - Protection for private blocking and screening of offensive material
(a) Findings
The Congress finds the following:
(1) The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.
(2) These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
(3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
(4) The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.
(5) Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
(b) Policy
It is the policy of the United States—
(1) to promote the continued development of the Internet and other interactive computer services and other interactive media;
(2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
(3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
(4) to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material; and
(5) to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
(d) Obligations of interactive computer service
A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections.
(e) Effect on other laws
(1) No effect on criminal law
Nothing in this section shall be construed to impair the enforcement of section 223 [pertains to obscene or harassing phone calls] or 231 [pertains to restriction of access by minors] of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.
What is an interactive computer service?
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
What is an information content provider?
The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.