California's AI bill SB 1047 attempts to avert AI disasters, but Silicon Valley believes it will produce one.

August 14, 2024
Brian

Outside of science fiction films, there is no precedent for AI systems killing people or being used in massive cyberattacks. However, some lawmakers want to put safeguards in place before bad actors turn that grim future into reality. SB 1047, a California bill, aims to prevent real-world disasters caused by AI systems before they occur, and it is scheduled for a final vote in the state senate later in August.

While this seems like a goal everyone can agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers, and startup founders. Many AI bills are currently working their way through legislatures across the country, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most contentious. Here is why.

What does SB 1047 accomplish?

SB 1047 aims to prevent large AI models from being used to cause "critical harms" to humanity.

The bill defines "critical harms" as a bad actor using an AI model to create a weapon that results in mass casualties, or using one to orchestrate a cyberattack that causes more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have cost up to $5 billion). The bill holds developers (the companies that create the models) liable for implementing sufficient safety protocols to prevent outcomes like these.

Which models and companies are subject to these rules?

SB 1047's rules would only apply to the world's largest AI models: those that cost at least $100 million to train and use 10^26 FLOPS during training, an enormous amount of compute. For reference, OpenAI CEO Sam Altman has said GPT-4 cost roughly that much to train. These thresholds could be raised as needed.

Few companies today have developed public AI products large enough to meet those requirements, but tech giants such as OpenAI, Google, and Microsoft are likely to do so soon. AI models, which are essentially massive statistical engines that identify and predict patterns in data, have generally become more accurate as they have grown larger, and that trend is expected to continue. Mark Zuckerberg recently said the next generation of Meta's Llama will require 10 times more compute, which would put it under SB 1047's authority.

As for open source models and their derivatives, the bill states that if another party spends $25 million developing or fine-tuning a derivative model, that party becomes liable for the derivative rather than the original developer.

The bill also requires a safety protocol to prevent misuse of covered AI products, including an "emergency stop" button that shuts down the entire AI model. Developers must also create testing procedures that address the risks posed by their AI models, and must hire third-party auditors annually to evaluate their AI safety practices.

The result must be "reasonable assurance" that following these protocols will prevent critical harms, not absolute certainty, which would be impossible to provide.

Who will enforce it, and how?

The rules would be overseen by a new California agency, the Frontier Model Division (FMD). Every new public AI model that meets SB 1047's thresholds must be individually certified with a written copy of its safety protocol.

The FMD would be governed by a five-person board appointed by California's governor and legislature, with representation from the AI industry, the open source community, and academia. The board would advise California's attorney general on potential violations of SB 1047 and issue guidance to AI model developers on safety practices.

A developer's chief technology officer must submit an annual certification to the FMD assessing its AI model's potential risks, the effectiveness of its safety protocol, and how the company is complying with SB 1047. Similar to breach notifications, if an "AI safety incident" occurs, the developer must report it to the FMD within 72 hours of learning about the incident.

If a developer fails to comply with any of these provisions, SB 1047 allows California's attorney general to bring a civil action against the developer. For a model that cost $100 million to train, penalties could reach up to $10 million for the first violation and $30 million for subsequent violations. The penalty scale rises as AI models become more expensive.

Finally, the bill includes whistleblower protections for employees who try to disclose information about an unsafe AI model to California's attorney general.

What do the proponents say?

California State Senator Scott Wiener, who authored SB 1047 and represents San Francisco, tells TechCrunch that the bill aims to learn from past policy failures around social media and data privacy and to protect citizens before it is too late.

"We have a history with technology of waiting for harms to happen, and then wringing our hands," according to Wiener. "We shouldn't wait for anything horrible to happen. Let's simply get ahead of things."

Even if a company trains a $100 million model in Texas or France, it will be covered by SB 1047 as long as it does business in California. Wiener says Congress has done "remarkably little legislating around technology over the last quarter century," so he believes it falls to California to set a precedent.

When asked if he has met with OpenAI and Meta on SB 1047, Wiener responds, "We've met with all of the large labs."

The bill has the backing of two renowned AI researchers sometimes called the "godfathers of AI," Geoffrey Hinton and Yoshua Bengio. They belong to a segment of the AI community concerned about the potentially catastrophic consequences of AI technology. These "AI doomers" have existed in the research community for some time, and SB 1047 could codify some of their preferred safeguards into law. Another group backing SB 1047, the Center for AI Safety, published an open letter in May 2023 urging the world to treat "mitigating the risk of extinction from AI" as seriously as pandemics or nuclear war.

"This is in the long-term interest of industry in California and the United States more generally because a major safety incident would likely be the biggest roadblock to further advancement," said Dan Hendrycks, head of the Center for AI Safety, in an email.

Hendrycks' motives have recently been called into question. In July, he publicly launched a startup, Gray Swan, which builds "tools to help companies assess the risks of their AI systems," according to a press release. After criticism that his startup could stand to gain if the bill passes, potentially as one of the auditors SB 1047 requires developers to hire, Hendrycks divested his equity stake in Gray Swan.

"I divested in order to send a clear signal," Hendrycks explained in an email to TechCrunch. "If the billionaire VC opposition to common sense AI safety wants to show their motives are pure, let them follow suit."

What do the opponents say?

An increasing number of Silicon Valley players are opposing SB 1047.

Hendrycks' "billionaire VC opposition" likely refers to a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener arguing that the bill "will burden startups because of its arbitrary and shifting thresholds," creating a chilling effect on the AI industry. As AI technology advances, it will get more expensive, meaning more companies will cross the $100 million threshold and be covered by SB 1047; a16z says several of its startups already spend that much training models.

Fei-Fei Li, often called the "godmother of AI," broke her silence on SB 1047 in early August, writing in a Fortune op-ed that the bill will "harm our budding AI ecosystem." While Li is a well-regarded Stanford AI researcher, she also reportedly founded an AI startup called World Labs in April, valued at $1 billion and backed by a16z.

She joins other prominent AI researchers, such as fellow Stanford researcher Andrew Ng, who described the bill as "an assault on open source" during a Y Combinator event in July. Open source models may create additional risk for their developers, since, like all open software, they can be more easily modified and deployed for arbitrary and potentially malicious purposes.

Meta’s chief AI scientist, Yann LeCun, said SB 1047 would hurt research efforts, and is based on an “illusion of ‘existential risk’ pushed by a handful of delusional think-tanks,” in a post on X. Meta’s Llama LLM is one of the foremost examples of an open source LLM.

Startups are also not happy about the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a hub for AI startups in San Francisco, worries that SB 1047 will crush his ecosystem. He argues that bad actors should be punished for causing critical harm, not the AI labs that openly develop and distribute the technology.

“There is a deep confusion at the center of the bill, that LLMs can somehow differ in their levels of hazardous capability,” said Nixon. “It’s more than likely, in my mind, that all models have hazardous capabilities as defined by the bill.”

Big Tech, which the bill directly targets, is alarmed about SB 1047 as well. The Chamber of Progress, a trade group representing Google, Apple, Amazon, and other big tech companies, published an open letter opposing the bill, arguing that SB 1047 restricts free speech and "pushes tech innovation out of California." Last year, Google CEO Sundar Pichai and other tech executives endorsed the idea of federal AI regulation.

Congressman Ro Khanna, who represents Silicon Valley, released a statement on Tuesday opposing SB 1047. He expressed concern that the bill "would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California's spirit of innovation."

Silicon Valley has historically chafed at broad technology regulation from California. In 2019, Big Tech pulled from a similar playbook when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley lobbied against that bill, and months before it went into effect, Amazon CEO Jeff Bezos and 50 other executives signed an open letter calling for a federal privacy law instead.

What will happen next?

SB 1047 heads to the California Assembly floor on August 15 with whatever amendments are approved. That is where bills "live or die," according to Wiener. It is expected to pass, given the overwhelming support from lawmakers thus far.

Anthropic proposed a number of amendments to SB 1047 in late July, which Wiener says he and California's Senate policy committees are actively considering. Anthropic is the first developer of a state-of-the-art AI model to publicly signal a willingness to work with Wiener on SB 1047, even though it does not support the bill as currently written. This was largely seen as a win for the bill.

Anthropic's proposed changes include getting rid of the FMD, reducing the attorney general's power to sue AI developers before a harm occurs, and eliminating the whistleblower protection provision in SB 1047. Wiener says he is generally positive about the amendments, but they need approval from several Senate policy committees before they can be added to the bill.

If SB 1047 passes the Senate, it will be sent to the desk of California Governor Gavin Newsom, who will ultimately decide whether to sign it into law before the end of August. Wiener says he has not spoken with Newsom about the bill and does not know his position on it.

The bill would not take effect immediately, as the FMD is set to be formed in 2026. Further, even if the bill passes, it is very likely to face legal challenges before then, perhaps from some of the same groups speaking out against it now.