
Section 230’s Fate Belongs With Congress—Not the US Supreme Court

source link: https://www.wired.com/story/section-230-scotus-gonzalez-google/


A case heading to SCOTUS claims platforms should be held responsible for their algorithmic recommendations. A history of the statute suggests otherwise.
Photo-illustration: WIRED Staff; Getty Images

In the nearly 27 years since the United States Congress passed Section 230 of the Communications Decency Act, courts have broadly interpreted it to protect online communities from being held legally responsible for user content, laying the foundation for the business models of Facebook, Yelp, Glassdoor, Wikipedia, community bulletin boards, and so many other sites that rely on content they don’t create.

Some of those protections are at risk in the next year, as the Supreme Court has agreed to hear its first case interpreting the scope of Section 230’s protections. In Gonzalez v. Google, the plaintiffs ask the court to rule that Section 230 does not immunize platforms when they make “targeted recommendations” of third-party content.

Section 230, written in 1995 and passed in early 1996, unsurprisingly does not explicitly mention algorithmic targeting or personalization. Yet a review of the statute’s history reveals that its proponents and authors intended the law to promote a wide range of technologies to display, filter, and prioritize user content. This means that eliminating Section 230 protections for targeted content or types of personalized technology would require Congress to change the law. 

Like many Section 230 cases, Gonzalez v. Google involves tragic circumstances. The plaintiffs are the family members and estate of Nohemi Gonzalez, a California State University student who, while studying abroad in Paris, was killed in the 2015 ISIS shootings along with 128 other people. The lawsuit, filed against Google, alleges that its subsidiary YouTube violated the Anti-Terrorism Act by providing substantial assistance to terrorists. At the heart of the dispute is not merely that YouTube hosted ISIS videos but that, as the plaintiffs wrote in legal filings, YouTube made targeted recommendations of those videos. “Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos,” the plaintiffs wrote. In other words, YouTube allegedly showed ISIS videos to those more likely to be radicalized.

Last year, the US Court of Appeals for the Ninth Circuit rejected this argument due to Section 230. Yet the court was not enthusiastic in ruling against the Gonzalez family, with Judge Morgan Christen writing for the majority that despite its ruling, “we agree the Internet has grown into a sophisticated and powerful global engine the drafters of § 230 could not have foreseen.” And the court was not unanimous, with Judge Ronald Gould asserting that Section 230 does not immunize Google because its amplification of ISIS videos contributed to the group’s message (Section 230 does not apply if the platform even partly takes part in the development of content). “In short, I do not believe that Section 230 wholly immunizes a social media company’s role as a channel of communication for terrorists in their recruiting campaigns and as an intensifier of the violent and hatred-filled messages they convey,” Gould wrote. After the Ninth Circuit largely ruled against the Gonzalez family, the Supreme Court this year agreed to review the case.

Section 230 was a little-noticed part of a major 1996 overhaul of US telecommunications laws. The House of Representatives added Section 230 to its telecommunications bill largely in response to two developments. First, the Senate’s version of the telecommunications bill imposed penalties for the transmission of indecent content. Section 230 was touted as an alternative to the Senate’s censorious approach, and as a compromise, both the House’s Section 230 and the Senate’s anti-indecency provisions ended up in the bill that President Bill Clinton signed into law. (The next year, the Supreme Court would rule the Senate’s portion unconstitutional.)

Second, Section 230 tried to solve a problem highlighted in a 1995 ruling in a $200 million defamation lawsuit against Prodigy, brought by a plaintiff who said that he was defamed on a Prodigy bulletin board. A New York trial court judge ruled that because Prodigy had reviewed user messages before posting, used technology that prescreened user content for “offensive language,” and engaged in other moderation, its “editorial control” rendered it a publisher that faced as much liability as the author of the posts. A few years earlier, a New York federal judge had reasoned that because CompuServe did not exert sufficient “editorial control,” it was considered a “distributor” that was liable only if it knew or had reason to know of the allegedly defamatory content.

Together, the Prodigy and CompuServe cases meant that online services might receive less protection from lawsuits if they moderated content or provided users with the technology to block inappropriate materials. This was at the height of the nationwide panic about children accessing pornography on this new thing called the internet, with Time magazine running its infamously creepy “Cyberporn” cover story.

Section 230, as introduced by Republican Chris Cox and Democrat Ron Wyden, tried to address these concerns with two main provisions. The first part—at issue in the Gonzalez case—states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This avoids the outcome in Prodigy, in which a platform is considered to be a “publisher” that is just as responsible as the author of the content merely because the platform exerted “editorial control.” Courts would also soon read this provision to provide even stronger protection than CompuServe received: Under Section 230, a platform is shielded from claims even if the platform is informed that the user content is defamatory and refuses to delete it. The second part of Section 230 says that providers can take “good faith” voluntary actions to restrict access to objectionable content without worrying that these attempts at moderation will open them up to liability. (The First Amendment also would be a strong shield against claims arising from such voluntary actions.)

The thrust of Section 230 is that companies, rather than regulators or courts, are best positioned to figure out how to make the internet safe, including through new technology. The bill contains a section of congressional findings, including that internet services “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.” And crucially, Section 230 also states that it is intended “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.”

In a July 1995 report, a working group advising Cox and Wyden and led by the Center for Democracy and Technology wrote that “relying on user control is a real alternative to the draconian approach now being considered and sure to be proposed again and again.” In one of the few news articles published about Section 230 when it was proposed, syndicated newspaper columnist Charles Levendosky wrote that the bill “puts the responsibility squarely where it belongs, on the user.”

Section 230 did not receive a congressional hearing in 1995, and warranted only a brief discussion on the House floor before a 420-4 vote to add it to the House version of the telecommunications bill. “We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see,” Cox said. “This technology is very quickly becoming available, and in fact every one of us will be able to tailor what we see to our own tastes.” Likewise, Wyden said that “we have the opportunity to build a 21st-century policy for the internet, employing the technologies and the creativity designed by the private sector.”

Much has changed since 1995. The companies that Cox and Wyden had in mind did curate and sort content—for instance, by prioritizing the lists of forums that users could choose on CompuServe—but those choices were presumably made by human moderators or designers. Other technologies that Cox and Wyden had in mind, like NetNanny, used crude algorithms to assess and block content. None of this matches the sophistication of the algorithms that YouTube and other platforms use now. But the point of Section 230 was to encourage companies to develop new mechanisms to determine what content users do—and do not—see on their screens, and to let platforms experiment with technology and find new ways to display user content.

Recognizing the broad immunity that Cox and Wyden created, courts have consistently concluded that Section 230 protects platforms even if they display only a subset of content that users submit. This is true even in cases involving loathsome defendants. For instance, in 2014, the US Court of Appeals for the Sixth Circuit ruled that Section 230 applied to The Dirty, a gossip website that selected and published only 150 to 200 user posts out of the thousands of submissions it received each day. The court refused to hold The Dirty liable for alleged defamation in user submissions “simply because those posts were selected for publication.” Nothing in Section 230’s history suggests that different principles should apply to user content that is curated via technology that personalizes recommendations.

Increasing the liability for websites that prioritize or personalize content might lead to more harmful content, not less. We likely would see more platforms resort to purely reverse chronological feeds, or avoid technology that automatically filters harmful content or spam. It is hard to imagine search engines—or even a search function for finding content on a single site, like YouTube or Etsy—existing under that rule.


That is not to say that Section 230 is impermeable. Section 230 never has immunized platforms from federal criminal law enforcement, including any actions that the US Justice Department could bring under terrorism laws if it believed the companies were violating them. Nor do courts allow platforms to avoid liability for content they create. And courts are increasingly willing to find that Section 230 does not apply in product liability cases that seek to hold platforms liable for inherently dangerous services, such as a Snapchat “speed filter” that has been implicated in numerous high-speed crashes of young drivers. A fair reading of Section 230 could lead to liability for platforms in such cases. The same cannot be said of claims that arise from the mere use of personalization and algorithms to deliver content that others create.

Devastating cases such as Gonzalez v. Google bring long-overdue attention to Section 230, and challenge us to have a national conversation about tough questions involving free speech, content moderation, and online harms. Perhaps Congress will determine that too many harms have proliferated under Section 230, and amend the statute to increase liability for algorithmically promoted content. Such a proposal would face its own set of costs and benefits, but it is a decision for Congress, not the courts.

