Destroy AI
Abstract
The article discusses the author's shift in perspective away from the discourse of measuring and fixing unfair algorithmic systems, and toward articulating the moral case for sabotaging, circumventing, and destroying AI, machine learning systems, and their surrounding political projects as valid responses to harm. The author argues that human-centered design (HCD) and the field of human-computer interaction (HCI) have failed to meaningfully address the harms caused by hegemonic algorithmic systems, and that more forceful action, including the active undermining and sabotage of these systems, is necessary.
Q&A
[01] The author's shift in perspective
1. What is the author's main argument in the article?
- The author is gravitating away from the discourse of measuring and fixing unfair algorithmic systems, and is instead focused on articulating the moral case for sabotaging, circumventing, and destroying "AI", machine learning systems, and their surrounding political projects as valid responses to harm.
- The author believes that human-centered design (HCD) and the field of human-computer interaction (HCI) have failed to meaningfully address the harms caused by hegemonic algorithmic systems, and that more forceful action, including the active undermining and sabotage of these systems, is necessary.
2. What are the key reasons the author provides for this shift in perspective?
- The author feels that with the hegemonic power of algorithmic systems and the overwhelming power of capital pushing these technologies, human-centered design and HCI have reached a state of abject failure.
- The author argues that CHI and FAccT conferences have failed to meaningfully respond to the deskilling of creative labor, the environmental or humanitarian crises these systems cause, or the co-opting of these spaces by grifters and con artists.
- The author is no longer interested in encouraging the design of more human-centered versions of these "murderous technologies" or informing the more humane administration of complex algorithmic systems that facilitate violence.
[02] The author's call to action
1. What is the author's proposed course of action?
- The author believes we must forcefully put on the table the possibility that we will destroy systems that fail to make a compelling affirmative case for their existence, and that this threat must be credible.
- The author argues we should actively undermine and sabotage these systems, and recognize that work as a moral project, just as the Luddites did when they sabotaged machinery that tore people apart.
2. Who does the author direct this call to action towards?
- The author states this is not really a post or conversation for the HCD or HCI community, as the author has grown weary of the design community's ultimate commitment to designing systems, which the author sees as antithetical to this proposed course of action.
- The author nonetheless closes with challenging questions for those who identify as part of the "design" community, asking them to reflect on whether they work with systems or with people, and what they would do if those two paths diverge or come into conflict.