California's AI bill SB 1047 attempts to avert AI disasters, but Silicon Valley believes it will produce one.

August 16, 2024
Harsh Gautam

Outside of science fiction, there is no precedent for AI systems killing people or being used in massive cyberattacks. But some lawmakers want safeguards in place before bad actors can turn that grim future into reality. SB 1047, a California measure, aims to prevent real-world disasters caused by AI systems before they happen, and it is scheduled for a final vote in the state senate later in August.

While that seems like a goal most people can agree on, SB 1047 has sparked outrage from Silicon Valley players of all sizes, including venture capitalists, big tech trade groups, researchers, and startup founders. Many AI bills are being debated across the country right now, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most contentious. Here's why.

What would SB 1047 accomplish?

SB 1047 aims to prevent large AI models from being used to cause "critical harms" to humanity.

The bill defines "critical harms" as a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyberattack causing more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have cost upward of $5 billion). The measure holds developers (the companies that create the models) accountable for following safety protocols adequate to prevent outcomes like these.

Which models and companies are subject to these rules?

SB 1047's rules would apply only to the world's largest AI models: those that cost at least $100 million to train and use 10^26 FLOPs during training, a significant amount of compute. For reference, OpenAI CEO Sam Altman has said GPT-4 cost roughly that much to train. These thresholds could be raised as needed.

Few companies today have built public AI products large enough to meet those thresholds, but tech giants such as OpenAI, Google, and Microsoft are expected to do so soon. AI models, essentially giant statistical engines that identify and predict patterns in data, have generally become more accurate as they have grown larger, and that trend is expected to continue. Mark Zuckerberg recently said that the next generation of Meta's Llama will require 10 times more compute, which would bring it within the scope of SB 1047.

When it comes to open source models and their derivatives, the bill states that the original developer is accountable unless another developer spends three times as much building a derivative of the original model.
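To make those thresholds concrete, here is a minimal Python sketch of the coverage and derivative-responsibility rules as this article describes them. The constants, function names, and example numbers are hypothetical illustrations, not the bill's legal text.

# Hypothetical sketch of the thresholds described above; not the bill's legal text.
TRAINING_COST_THRESHOLD_USD = 100_000_000   # at least $100 million to train
TRAINING_COMPUTE_THRESHOLD = 1e26           # at least 10^26 operations during training

def is_covered_model(training_cost_usd: float, training_ops: float) -> bool:
    """Check whether a model would meet both thresholds the article cites."""
    return (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
            and training_ops >= TRAINING_COMPUTE_THRESHOLD)

def responsible_developer(original_cost_usd: float, derivative_spend_usd: float) -> str:
    """For a derivative of an open source model, responsibility stays with the
    original developer unless the derivative's builder spends 3x as much."""
    if derivative_spend_usd >= 3 * original_cost_usd:
        return "derivative developer"
    return "original developer"

print(is_covered_model(120_000_000, 2e26))             # True: meets both thresholds
print(responsible_developer(100_000_000, 50_000_000))  # "original developer"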

The bill also mandates a safety protocol to prevent misuse of covered AI products, including an "emergency stop" button that shuts down the entire AI model. Developers must also create testing procedures that address the risks posed by their AI models, and they must hire third-party auditors annually to evaluate their AI safety practices.

The end result must be "reasonable assurance" that following these rules will avert critical harms, rather than absolute certainty, which is obviously impossible to offer.

Who will enforce it, and how?

The rules would be overseen by a new California agency called the Frontier Model Division (FMD). Every new public AI model that meets SB 1047's thresholds must be independently certified, and its developer must submit a written copy of its safety protocol.

The FMD would be led by a five-person board nominated by California's governor and legislature, with representation from the AI industry, the open source community, and academia. The board would advise California's attorney general on suspected SB 1047 violations and issue guidance to AI model developers on safety practices.

A developer's chief technology officer must submit an annual certification to the FMD assessing its AI model's potential risks, the effectiveness of its safety protocol, and how the company is complying with SB 1047. Similar to breach notifications, if an "AI safety incident" occurs, the developer must report it to the FMD within 72 hours of learning about the incident.

If a developer fails to comply with any of these provisions, SB 1047 authorizes California's attorney general to bring a civil action against the developer. For a model that cost $100 million to train, penalties could reach $10 million for a first violation and $30 million for subsequent violations, and they scale up as AI models become more expensive to train.
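As rough arithmetic, the $10 million and $30 million figures for a $100 million model imply penalty caps of about 10% and 30% of training cost. Treating those rates as an assumption inferred from the article's example, rather than a quotation of the bill, the scaling looks like this:

# Assumed rates inferred from the article's example (a $100 million model:
# up to $10M for a first violation, $30M for repeats); not quoted from the bill.
FIRST_VIOLATION_RATE = 0.10
REPEAT_VIOLATION_RATE = 0.30

def max_penalty_usd(training_cost_usd: float, first_violation: bool) -> float:
    """Illustrative penalty cap that scales with a model's training cost."""
    rate = FIRST_VIOLATION_RATE if first_violation else REPEAT_VIOLATION_RATE
    return rate * training_cost_usd

print(max_penalty_usd(100_000_000, True))    # 10000000.0
print(max_penalty_usd(100_000_000, False))   # 30000000.0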

Finally, the bill includes whistleblower protections for employees who try to disclose information about an unsafe AI model to California's attorney general.

What do the proponents say?

California State Senator Scott Wiener, who authored SB 1047 and represents San Francisco, says the bill aims to learn from past policy failures around social media and data privacy and to protect people before it's too late.

"We have a history with technology of waiting for harm to happen, and then wringing our hands," according to Wiener. "We shouldn't wait for anything horrible to happen. Let's simply get ahead of things."

Even if a company trains a $100 million model in Texas or France, it will be covered by SB 1047 as long as it does business in California. Wiener says Congress has done "remarkably little legislating around technology over the last quarter century," so he believes it's up to California to set a precedent.

When asked if he has met with OpenAI and Meta on SB 1047, Wiener responds, "We've met with all of the large labs."

The bill has the backing of two renowned AI researchers sometimes called the "godfathers of AI," Geoffrey Hinton and Yoshua Bengio. Both belong to a segment of the AI community that worries about the potentially catastrophic consequences of AI technology. These "AI doomers" have existed in the research world for some time, and SB 1047 could write some of their preferred safeguards into law. Another group backing the bill, the Center for AI Safety, published an open letter in May 2023 calling on the world to treat "mitigating the risk of extinction from AI" as seriously as pandemics or nuclear war.

"This is in the long-term interest of industry in California and the United States more generally because a major safety incident would most likely be the biggest roadblock to further advancement," said Dan Hendrycks, head of the Center for AI Safety, in an email.

Recently, Hendrycks' motives have been called into question. In July, he publicly launched Gray Swan, a startup that builds "tools to help companies assess the risks of their AI systems," according to a press release. Following criticism that his startup could stand to gain if the bill passes, perhaps as one of the auditors SB 1047 requires developers to hire, Hendrycks divested his equity stake in Gray Swan.