AI Missteps Could Unravel Global Peace and Security
🌈 Abstract
The article discusses how civilian advances in artificial intelligence (AI) could have serious consequences for international peace and security, and how AI practitioners can play critical roles in mitigating these risks. It highlights the need for AI education to include courses on the societal impact of technology, responsible innovation, AI ethics, and governance.
🙋 Q&A
[01] The Risks of Civilian AI Advances
1. What are some of the ways civilian AI advances could threaten peace and security?
- Direct threats, such as the use of AI-powered chatbots to create disinformation for political-influence operations, and the use of large language models to create code for cyberattacks and facilitate the development of biological weapons
- Indirect threats, such as AI companies' decisions about whether to open-source their software and on what terms. Such choices have geopolitical implications because they determine how states and non-state actors gain access to critical technology that could be used to develop military AI applications, potentially including autonomous weapons systems
2. Why is it important for AI practitioners to become more aware of these challenges and their capacity to address them?
- AI practitioners, whether researchers, engineers, product developers, or industry managers, can play critical roles in mitigating the risks of civilian AI advances through the decisions they make throughout the technology's life cycle.
[02] Responsible AI Education
1. What are some of the key elements that should be included in AI education programs to promote responsible innovation?
- Foundational knowledge about the societal impact of technology and the way technology governance works
- Mandatory courses on the societal impact of technology and responsible innovation
- Specific training on AI ethics and governance
- Insights from the social sciences and humanities, in addition to technical knowledge
2. What are some of the challenges in changing the AI education curriculum?
- Modifications to university curricula may require approval at the ministry level in some countries
- Proposed changes can face internal resistance due to cultural, bureaucratic, or financial reasons
- Existing instructors' expertise in the new topics might be limited
3. What are some examples of universities that already offer relevant courses as electives?
- Harvard, New York University, Sorbonne University, Umeå University, and the University of Helsinki
[03] Continuing Education and Stakeholder Engagement
1. Why is continuing education on the societal impact of AI research important for AI practitioners?
- AI is bound to evolve in unexpected ways, and identifying and mitigating its risks will require ongoing discussions involving not only researchers and developers but also people who might be directly or indirectly impacted by its use.
2. How can organizations like IEEE and ACM play a role in establishing continuing education courses on responsible AI?
- They are well-placed to pool information, facilitate dialogue, and establish ethical norms.
3. What are some examples of existing communities and organizations focused on responsible AI and its geopolitical and security implications?
- The AI Now Institute, the Centre for the Governance of AI, Data & Society, the Distributed AI Research Institute, the Montreal AI Ethics Institute, and the Partnership on AI.
4. What are some of the challenges with these existing communities?
- They are currently too small and insufficiently diverse: their most prominent members tend to share similar backgrounds, which could cause them to overlook risks that affect underrepresented populations.
- AI practitioners may also need guidance on how to engage with people outside the AI research community, especially policymakers, and on articulating problems and recommendations in terms that non-technical audiences can understand.
[04] Regulation and Responsible Innovation
1. What are some recent developments in the global efforts to regulate AI?
- The creation of the U.N. High-Level Advisory Body on Artificial Intelligence and the Global Commission on Responsible Artificial Intelligence in the Military Domain
- The G7 leaders' statement on the Hiroshima AI Process
- The British government's hosting of the first AI Safety Summit
2. What is the central question before regulators regarding AI development?
- Whether AI researchers and companies can be trusted to develop the technology responsibly.
3. What is the authors' view on the most effective and sustainable way to ensure AI developers take responsibility for the risks?
- Investing in education to ensure that AI practitioners of today and tomorrow have the basic knowledge and means to address the risks stemming from their work, so they can be effective designers and implementers of future AI regulations.