
Moderating horror and hate on the web may be beyond even AI | John Naughton

🌈 Abstract

The article traces the history and implications of Section 230 of the Communications Decency Act, which exempted online platforms from liability for user-generated content. It examines the challenges of content moderation these platforms face, including its high cost and the outsourcing of the work to low-wage workers in the global south. It then draws on cybernetics, in particular Ashby's "law of requisite variety", to explain why the platforms struggle to keep up with the dynamic complexity of their environments.

🙋 Q&A

[01] The History and Implications of Section 230

1. What was the problem that led to the creation of Section 230?

  • In the mid-1990s, when the web was young, there was a concern that if internet service providers (ISPs) hosted blogs containing illegal or defamatory content, they could be held legally responsible and sued into bankruptcy.
  • To address this, US lawmakers Chris Cox and Ron Wyden inserted 26 words into the Communications Decency Act, part of the Telecommunications Act of 1996; that provision became known as Section 230.
  • These words stated that online platforms would not be treated as the "publisher or speaker" of user-generated content, exempting them from liability.

2. What were the implications of Section 230?

  • Section 230 led to an exponential increase in user-generated content on the internet, as platforms no longer had to worry about being sued for such content.
  • However, some of this content was "vile, defamatory or downright horrible," leading to public outrage and platforms engaging in "moderation" of the content.

[02] The Challenges of Content Moderation

1. What are the two main problems with content moderation?

  • It is very expensive due to the sheer scale of the problem (e.g., 2,500 new videos uploaded to YouTube every minute, 1.3 billion photos shared on Instagram every day).
  • The "dirty work" of moderation is often outsourced to low-wage workers in poor countries, who are traumatized by having to watch videos of unspeakable cruelty.

2. How are platforms trying to address the challenges of content moderation?

  • Platforms are exploring AI-powered moderation, in which vile content is detected and deleted by "relentless, unshockable machines" rather than by human moderators. A hypothetical sketch of this pattern follows.
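
The article does not say how such systems are built. As a rough, minimal sketch of one common pattern (not an architecture described in the article): a classifier scores each item, clear violations are removed automatically, and only the ambiguous middle band reaches a human reviewer. The thresholds, the `toxicity_score` stand-in, and the `Item` type are all invented for this illustration.

```python
# Hypothetical sketch of AI-assisted moderation (not the article's
# architecture): a classifier scores each item, clear violations are
# deleted automatically, and only the ambiguous middle band is routed
# to a human reviewer.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    text: str

def toxicity_score(item: Item) -> float:
    """Stand-in for a real ML classifier; returns a score in [0, 1]."""
    # Trivial keyword heuristic, purely for illustration.
    return 0.9 if "horrible" in item.text.lower() else 0.1

AUTO_REMOVE = 0.8   # delete without human involvement above this score
HUMAN_REVIEW = 0.4  # queue for a human between the two thresholds

def moderate(item: Item) -> str:
    score = toxicity_score(item)
    if score >= AUTO_REMOVE:
        return "removed"        # the machine absorbs the worst content
    if score >= HUMAN_REVIEW:
        return "human_review"   # the residue people still have to see
    return "published"

print(moderate(Item("1", "a downright horrible video")))  # removed
print(moderate(Item("2", "a video of a cat")))            # published
```

The design point is the middle band: automation shrinks, but does not eliminate, the stream of disturbing material that human moderators must still see.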

[03] The Cybernetic Perspective

1. What is the "law of requisite variety" in cybernetics?

  • Developed by the British psychiatrist and cybernetician W. Ross Ashby, the law states that for a system to be stable, the number of states its control mechanism can attain (its variety) must be greater than or equal to the number of states of the system being controlled. A toy illustration follows.
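
The law is easiest to see with a toy model. The sketch below is our illustration, not from the article, and the numbers are made up: a controller that can attain only m distinct states faces an environment generating k > m kinds of disturbance, so a fixed share of the disturbances can never be absorbed.

```python
# Toy illustration of Ashby's law of requisite variety (hypothetical
# numbers, not from the article): a controller with only m distinct
# responses cannot fully regulate an environment that produces k > m
# distinct disturbances.

import random

random.seed(42)

K_CONTENT_STATES = 8    # variety of the environment: kinds of harmful content
M_MODERATOR_STATES = 3  # variety of the controller: responses it can apply

# Assume each response neutralises exactly one kind of content; with
# m < k responses, k - m kinds can never be absorbed.
handled = set(range(M_MODERATOR_STATES))

stream = [random.randrange(K_CONTENT_STATES) for _ in range(100_000)]
missed = sum(1 for kind in stream if kind not in handled)

print(f"environment variety k={K_CONTENT_STATES}, "
      f"controller variety m={M_MODERATOR_STATES}")
print(f"share of disturbances the controller cannot absorb: "
      f"{missed / len(stream):.2%}")
# With uniformly distributed content this converges on (k - m) / k = 5/8:
# only variety in the controller can absorb variety in the environment.
```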

2. How does Ashby's law apply to online platforms?

  • For a platform like Meta (Facebook), with billions of users constantly generating content, Ashby would say it has a "variety-management problem."
  • The platform has two options to deal with this: either choke off the supply of user-generated content (which undermines their business model) or amplify their internal capacity to cope with the torrent of content (which is the challenge of content moderation).

3. How does AI-powered moderation fit into this cybernetic perspective?

  • Even with AI, beating Ashby's law and keeping up with the dynamic complexity of the platform's environment may prove to be a tough proposition.