Hypothetical AI election disinformation risks vs real AI harms
Abstract
The article discusses the real-world harms caused by AI systems, such as wrongful convictions, rent hikes, and biased policing, and argues that these harms contribute to the public's distrust in institutions, which in turn fuels the spread of disinformation. The author contends that the focus on hypothetical AI risks, like election interference, distracts from addressing the concrete, widespread issues caused by faulty AI systems.
Q&A
[01] The Harms of AI Systems
1. What are some of the real-world harms caused by AI systems discussed in the article?
- Wrongful convictions of nearly 1,000 British postmasters by a faulty AI fraud-detection system
- Skyrocketing rents due to a landlord price-fixing algorithm
- Biased predictive policing algorithms that disproportionately target Black and brown people
- Facial recognition algorithms that wrongly accuse people of crimes
- Algorithmic scheduling systems that deprive workers of benefits
- AI systems that monitor workers' productivity and behavior, leading to reprisals against unionization efforts
2. How do these AI harms contribute to public distrust in institutions? The article argues that these real-world harms provide evidence that institutions, and the experts running them, are untrustworthy. The failures of these systems undermine the credibility of the institutions that are supposed to protect the public, which in turn makes conspiratorial accounts more plausible to people.
3. What is the author's view on the focus on hypothetical AI risks, like election interference? The author contends that this focus distracts from addressing the concrete, widespread harms caused by faulty AI systems, and argues that tackling those real-world harms would do more to restore public trust in institutions than guarding against hypothetical risks.
[02] Institutional Failures and Disinformation
1. What is the author's perspective on the relationship between institutional failures and the rise of conspiratorial thinking? The author argues that the reason people accept conspiratorial accounts is because the institutions that are supposed to be defending them are corrupt and captured by actual conspiracies. The long list of AI harms provides evidence that the system cannot be trusted, which contributes to the credibility of conspiratorial claims.
2. How does the author view the common narrative that the electorate is easily misled by disinformation? The author rejects the narrative that the electorate is gullible and easily led astray. Instead, they argue that the public's acceptance of conspiratorial accounts is a rational response to the demonstrated failures of the institutions that are supposed to protect them.
3. What is the author's proposed approach to addressing disinformation? The author suggests that tackling the real-world harms caused by AI systems, rather than focusing solely on hypothetical risks like election interference, would do more to address the public's distrust in institutions and reduce the credibility of conspiratorial claims.