California’s Proposed AI Safety Law Puts Developers at Risk
🌈 Abstract
The article presents the author's concerns about California's proposed regulation SB 1047 and its potential impact on open source and AI innovation. The author argues that the regulation, which aims to ensure the safety of AI models, is flawed and would have unintended consequences that stifle open-source development.
🙋 Q&A
[01] Concerns about SB 1047
1. What are the author's main concerns about SB 1047?
- The author believes SB 1047 makes the fundamental mistake of regulating AI technology instead of AI applications, and thus would fail to make AI meaningfully safer.
- The author argues that the specific mechanisms of SB 1047 are pernicious to open source, as the complex reporting requirements and ambiguous compliance standards will paralyze many teams and lock out open-source contributors who don't have the resources to hire lawyers and consultants.
- The author is concerned that the Frontier Model Division (FMD) created by the bill will be a target for lobbying and regulatory capture, leading to shifting requirements that raise the cost of compliance and further disadvantage open-source projects.
2. What does the author suggest as better approaches to improve AI safety?
- The author would welcome outlawing nonconsensual deepfake pornography, standardizing watermarking and fingerprinting to identify generated content, and investing more in red teaming and other safety research.
[02] Ambiguity and Compliance Challenges
1. What are the key issues the author identifies with the compliance requirements in SB 1047?
- The requirements are vague and complex, making it very difficult for developers to know whether they are in compliance.
- Developers face significant personal risk, as the certification of compliance requires analysis of potential harms and appropriate protections, which even leading AI researchers disagree on.
- The "reasonableness" standard for compliance is ambiguous and could be interpreted differently by future juries, making it hard for developers to know if their actions today will be deemed reasonable later.
2. How does the author suggest developers might try to avoid perjury charges?
- One way to reduce the risk of a perjury charge is for the developer to show reliance on expert advice, demonstrating that any inaccurate certification was made without intent to lie.
[03] Concerns about Regulatory Capture
1. What are the author's concerns about the Frontier Model Division (FMD) created by SB 1047?
- The small, unelected five-person board of the FMD will be a great target for lobbying and regulatory capture.
- The FMD can arbitrarily change the computation threshold at which fine-tuning a model becomes subject to its oversight, meaning even small teams could be required to hire an auditor to verify compliance.
- These provisions create regulatory uncertainty and more opportunities for vested interests to lobby for shifts in the requirements that raise the cost of compliance, locking out many open-source contributors.
2. How does the author view the impact of SB 1047 on open source?
- The author believes a fight is underway in California right now over the future health of open source, and is dismayed at the concerted attacks on it.
- The author hopes readers will join in speaking out against SB 1047 and other laws that threaten to stifle open source, as open source is a key pillar of AI innovation.