
Bill Text - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

🌈 Abstract

The document summarizes the bill's definitions and requirements governing the development and use of advanced artificial intelligence (AI) models, particularly those with "hazardous capabilities" that could pose risks to public safety and security.

🙋 Q&A

[01] Definitions

1. What is an "advanced persistent threat"?

  • An adversary with sophisticated expertise and significant resources that can use multiple attack vectors (cyber, physical, and deception) to establish and extend its presence within an organization's information technology infrastructure, with the goal of exfiltrating information or of undermining or impeding critical aspects of a mission, program, or organization.

2. What is an "artificial intelligence model"?

  • An engineered or machine-based system that infers how to generate outputs that can influence physical or virtual environments, and that may operate with varying levels of autonomy.

3. What is an "artificial intelligence safety incident"?

  • Incidents such as:
    • A covered model engaging in unauthorized behavior that increases the risk of its hazardous capabilities being used
    • Theft, misappropriation, malicious use, or escape of a covered model's model weights
    • Failure of controls limiting the ability to modify a covered model
    • Unauthorized use of a covered model's hazardous capabilities

4. What is a "covered model"?

  • An AI model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, where the cost of that computing power exceeds $100 million.

5. What is a "hazardous capability"?

  • The capability of a covered model to be used to enable harms such as:
    • Creating or using weapons of mass destruction
    • Causing at least $500 million in damage through cyberattacks on critical infrastructure
    • Causing at least $500 million in damage through autonomous criminal conduct

[02] Developer Responsibilities

1. What must a developer do before initiating training of a covered model that is not subject to a limited duty exemption?

  • Implement cybersecurity protections, the capability to enact a full shutdown, all covered guidance, and a written safety and security protocol.

2. What must a developer do before initiating the commercial, public, or widespread use of a covered model that is not subject to a limited duty exemption?

  • Implement reasonable safeguards to prevent individuals from using the model's hazardous capabilities to cause critical harm, and provide requirements to developers of derivative models.

3. What must a developer do periodically for a nonderivative covered model that is not subject to a limited duty exemption?

  • Reevaluate the procedures, policies, protections, capabilities, and safeguards to ensure they cannot be removed or bypassed.

4. What must a developer report to the Frontier Model Division regarding a covered model?

  • Any artificial intelligence safety incidents affecting the covered model or its derivatives within the developer's custody, control, or possession.

[03] Computing Cluster Operator Responsibilities

1. What must a computing cluster operator do when a customer utilizes compute resources sufficient to train a covered model?

  • Obtain the customer's identifying information, assess if they intend to deploy a covered model, validate the information annually, maintain records, and implement the capability for full shutdown.

[04] Pricing and Access

1. What requirements are placed on developers and computing cluster operators regarding pricing and access?

  • They must provide transparent, uniform, publicly available price schedules and not engage in unlawful discrimination or noncompetitive activity.

[05] Enforcement

1. What remedies can the Attorney General seek for violations of this chapter?

  • Preventive relief, monetary damages, civil penalties, and full shutdown of a covered model.

2. What protections are provided for employees who disclose information about a developer's noncompliance?

  • Developers cannot prevent or retaliate against employees for disclosing noncompliance information to the Attorney General.
