magic starSummarize by Aili

AI companies promised to self-regulate one year ago. What’s changed?

🌈 Abstract

The article discusses the progress made by seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) on the voluntary commitments they made with the White House a year ago to develop AI in a safe and trustworthy way.

🙋 Q&A

[01] The Voluntary Commitments

1. What were the key points of the voluntary commitments made by the AI companies?

  • The commitments included promises to improve testing and transparency around AI systems, and share information on potential harms and risks.
  • The commitments came at a time when generative AI was rapidly advancing, and there were growing concerns about issues like copyright, deepfakes, and existential risks posed by AI.
  • The voluntary commitments were some of the first prescriptive rules for the AI sector in the US, but they remain voluntary and unenforceable.

2. How have the companies performed on these commitments over the past year?

  • The companies have made some progress, such as implementing more technical fixes like red-teaming and watermarking AI-generated content.
  • However, there are concerns that the companies are still making unsubstantiated claims about their products, and that the commitments have not led to meaningful changes in governance or protection of rights.
  • Without comprehensive federal legislation, the best the US can do is demand that companies follow through on these voluntary commitments, but there are doubts about whether the companies are truly verifying themselves in a rigorous way.

[02] Internal and External Security Testing

1. How have the companies implemented internal and external security testing of their AI systems?

  • All the companies (except Inflection) say they conduct red-teaming exercises, where internal and external testers probe their models for flaws and risks.
  • OpenAI, Anthropic, and Google have also worked with external experts to test their models for a range of threats like cybersecurity, biological, and national security concerns.
  • However, there are concerns that simply reporting on the actions taken is not enough, and more evidence is needed to show that the interventions are actually reducing the intended risks.

[03] Sharing Information Across the Industry

1. How have the companies collaborated to share information on managing AI risks?

  • Anthropic, Google, Microsoft, and OpenAI founded the Frontier Model Forum, a nonprofit that aims to facilitate discussions and actions on AI safety and responsibility.
  • The companies are also part of the Artificial Intelligence Safety Institute Consortium (AISIC), which develops guidelines and standards for AI policy and evaluation.
  • Many of the companies have also contributed to guidance by the Partnership on AI on the deployment of foundation models.
  • However, it's unclear how much of this effort will lead to meaningful changes, and how much is just window dressing.

[04] Protecting Model Weights and Cybersecurity

1. What measures have the companies taken to protect their AI model weights and improve cybersecurity?

  • The companies have implemented various cybersecurity measures, such as encryption, access controls, and initiatives to address cyber threats specific to generative AI.
  • Microsoft, Google, and OpenAI have launched dedicated programs and initiatives to bolster their cybersecurity practices.
  • While the companies have taken steps in this area, there doesn't seem to be a clear consensus on the best way to protect AI models.

[05] Third-Party Vulnerability Reporting

1. How have the companies facilitated third-party discovery and reporting of vulnerabilities in their AI systems?

  • Many of the companies have implemented bug bounty programs, where they reward security researchers who find flaws in their AI systems.
  • Some companies also have forms on their websites where researchers can submit vulnerability reports.
  • However, experts note that third-party auditing of AI systems is a complex socio-technical challenge, and the first companies to implement such audits may set poor precedents.

[06] Watermarking and Provenance of AI-Generated Content

1. What progress have the companies made in developing mechanisms to identify AI-generated content?

  • Many of the companies have built watermarking tools to mark AI-generated content, such as images, audio, and text.
  • Several companies are also part of the Coalition for Content Provenance and Authenticity (C2PA), which embeds information about the origin and creation of content into its metadata.
  • While these technical fixes are a step in the right direction, there are questions about whether they meaningfully address the underlying social concerns around AI-generated content.

[07] Transparency and Reporting on AI Capabilities and Limitations

1. How have the companies fulfilled their commitment to publicly report on their AI systems' capabilities, limitations, and appropriate use cases?

  • The most common approach has been the use of "model cards" or similar product descriptions that provide information on the models' capabilities, limitations, fairness, and other characteristics.
  • Microsoft has also published an annual Responsible AI Transparency Report.
  • However, critics argue that the companies could be more transparent about their governance structures, financial relationships, data provenance, and safety incidents.

[08] Research on Societal Risks of AI

1. What research have the companies conducted on the societal risks of AI systems?

  • The companies have invested heavily in research on topics like avoiding harmful bias and discrimination, protecting privacy, and mitigating other societal risks.
  • They have embedded their research findings into their products, such as building guardrails and safety measures.
  • Critics argue that the focus on safety research takes attention and resources away from research on more immediate harms like discrimination and bias.

[09] Using AI to Address Societal Challenges

1. How have the companies used AI to tackle societal challenges?

  • The companies have deployed AI tools to help with scientific discovery, weather forecasting, climate change mitigation, and other societal challenges.
  • For example, Google DeepMind's AlphaFold 3 can predict the structure and interactions of all life's molecules, and Microsoft has used satellite imagery and AI to map climate-vulnerable populations.
  • While the companies have made progress in this area, the article notes that they have not yet used AI to prevent cancer, which was one of the ambitious goals mentioned in the commitments.
Shared by Daniel Chen ·
© 2024 NewMotor Inc.