Why Is Everyone Suddenly Furious About AI Regulation?—Asterisk

🌈 Abstract

The article examines California's Senate Bill 1047, which would require the companies behind the world's largest and most advanced AI models to take steps to ensure their safety before releasing them to the public. It addresses several common misconceptions about the bill and explains in detail what it actually requires.

🙋 Q&A

[01] Overview of Senate Bill 1047

1. What does Senate Bill 1047 aim to do?

  • The bill aims to require the companies behind the world's largest and most advanced AI models to take steps to ensure their safety before releasing them to the public.

2. What are the key requirements of the bill?

  • The bill applies to "covered models" - those trained on more than 10^26 floating-point operations (FLOPs, a measure of total training compute) or projected to have comparable performance (see the compute sketch after this list for a sense of scale).
  • For covered models, developers must:
    • Secure the model against unauthorized access while it is in their possession
    • Retain the ability to fully shut down the model
    • Follow guidance from NIST and the new Frontier Model Division
    • Implement a safety and security protocol to identify and address hazardous capabilities
  • Developers can apply for a "limited duty exemption" if they can show their model won't have hazardous capabilities.
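
To give a sense of how high the 10^26 FLOP threshold sits, here is a minimal back-of-the-envelope sketch. It uses the common ~6 × parameters × tokens approximation for dense transformer training compute; the approximation and the example model scale are illustrative assumptions, not figures from the bill or the article.

```python
# Back-of-the-envelope check of whether a training run would cross
# SB 1047's 10^26 FLOP "covered model" threshold. The ~6 * params * tokens
# rule for dense transformer training compute is a common heuristic,
# not language from the bill; the example model below is hypothetical.

COVERED_MODEL_THRESHOLD_FLOPS = 1e26  # compute threshold set by the bill


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * num_parameters * num_tokens


def is_covered_model(num_parameters: float, num_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(num_parameters, num_tokens) >= COVERED_MODEL_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> covered: {is_covered_model(70e9, 15e12)}")
# Prints: 6.30e+24 FLOPs -> covered: False
```

Even at that scale the estimate is more than an order of magnitude below 10^26, consistent with the article's point that most current publicly available models fall outside the bill's scope.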

3. What are considered "hazardous capabilities" under the bill?

  • The bill sets a very high bar for "hazardous capabilities": the ability to directly enable the creation or use of weapons of mass destruction, to cause at least $500 million in damage through cyberattacks on critical infrastructure, or to pose other threats to public safety of comparable severity.

[02] Misconceptions about the bill

1. Is this an existential threat to California's AI industry?

  • No, the bill has minimal impact on most of California's AI industry. It only applies to the most powerful new AI models, which excludes most current publicly available models.
  • The compliance costs are also relatively low compared to the compute costs of training such models.

2. Does the bill create a new regulatory agency?

  • Not a full regulatory agency - it creates the Frontier Model Division within the existing California Department of Technology, whose role is limited to issuing guidance and coordinating on safety procedures.

3. Are the burdens overly onerous for small developers, researchers, or academics?

  • No, the substantial burdens only apply if a developer trains a "covered model" from scratch that can't get a limited duty exemption. Derivative models and research on existing models are not affected.

4. Does the bill target open source AI?

  • No, the bill does not ban or effectively outlaw open source AI models. Existing open source models would not count as "covered models", and the shutdown requirement applies only to copies of a model under the developer's control, which accommodates open releases.

[03] Real problems with the bill

1. The definition of "derivative models" is too broad

  • Because a derivative of a non-covered model inherits the base model's exempt status, a third party could take a less capable base model and substantially improve it without the enhanced model ever being subject to the safety requirements.

2. The baseline for comparing AI-assisted harm is unrealistic

  • The bill compares an AI-assisted attacker to someone without access to any covered models, but as covered models become more widespread, that counterfactual grows increasingly unrealistic.

[04] Conclusion

  • Overall, the bill is an admirable effort to get ahead of the rapid advancements in AI and ensure companies investing heavily in new models are checking for potential catastrophic risks.
  • While the bill is not perfect, the author argues that with some changes it can serve as a good baseline for encouraging basic safety precautions as AI continues to evolve.