Editorial: Why California should lead on AI regulation

President Biden and California Gov. Gavin Newsom during a discussion about managing the risks of artificial intelligence at an event in San Francisco in June 2023.
(Susan Walsh / Associated Press)

The release of OpenAI’s ChatGPT in late 2022 was like the shot of a starter pistol, setting off a race among big tech companies to develop ever more powerful generative AI systems. Giants such as Microsoft, Google and Meta rushed to roll out new artificial intelligence tools as billions in venture capital poured into AI startups.

At the same time, a growing chorus of people working in and researching AI began to sound the alarm: The technology was evolving faster than anyone anticipated. There was fear that, in the rush to dominate the market, companies might release products before they were safe.

In the spring of 2023, more than 1,000 researchers and industry leaders called for a six-month pause in the development of the most advanced artificial intelligence systems, saying AI labs were racing to deploy “digital minds” that not even their creators could understand, predict or reliably control. The technology presents “profound risks to society and humanity,” they warned. Tech company leaders urged lawmakers to develop regulations to prevent harm.

It was in that environment that state Sen. Scott Wiener (D-San Francisco) began talking to industry experts about developing legislation that would become Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill is an important first step in responsible AI development.

While state lawmakers introduced dozens of bills targeting various AI concerns, including election misinformation and the protection of artists’ work, Wiener took a different approach. His bill focuses on trying to prevent catastrophic damage if AI systems are abused.

SB 1047 would require that developers of the most powerful AI models put testing procedures and safeguards in place to prevent the technology from being used to shut down the power grid, enable the development of biological weapons, carry out major cyberattacks or cause other grave harms. If developers fail to take reasonable care to prevent catastrophic harm, the state attorney general could sue them. The bill would also protect whistleblowers within AI companies and create CalCompute, a public cloud computing cluster that would be available to help startups, researchers and academics develop AI models.

The bill is supported by major AI safety groups, including some of the so-called godfathers of AI who wrote a letter to Gov. Gavin Newsom contending, “Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation.”

But that hasn’t stopped a tidal wave of opposition from tech companies, investors and researchers, who argue the bill wrongly holds model developers liable for anticipating harm that users might cause. They say that liability would make developers less willing to share their models, which would stifle innovation in California.

Last week, eight members of Congress from California chimed in with a letter to Newsom urging him to veto SB 1047 if it’s passed by the Legislature. The bill, they argued, is premature, with a “misplaced emphasis on hypothetical risks,” and lawmakers should instead focus on regulating uses of AI that are causing harm today, such as deepfakes in election ads and revenge porn.

There are plenty of good bills that address immediate and specific misuse of AI. That doesn’t negate the need to anticipate and try to prevent future harms — especially when experts in the field are calling for action. SB 1047 raises familiar questions for the tech sector and lawmakers. When is the right time to regulate an emerging technology? What is the right balance between encouraging innovation and protecting the public that has to live with its effects? And can the genie be put back in the bottle after the technology is rolled out?

There are risks to sitting on the sidelines for too long. Today, lawmakers are still playing catch-up on data privacy and attempting to curb harm on social media platforms. This isn’t the first time big tech leaders have publicly professed to welcome regulation of their products, only to lobby fiercely against specific proposals.

Ideally, the federal government would lead on AI regulation to avoid a patchwork of state policies. But Congress has proved unable — or unwilling — to regulate big tech. For years, proposed legislation to protect data privacy and reduce online risks to children has stalled, and House Republicans have already said they will not support any new AI regulations. In the absence of federal action, California, particularly because it is home to Silicon Valley, has chosen to lead with first-of-its-kind regulations on net neutrality, data privacy and online safety for children. AI is no different.

By passing SB 1047, California can pressure the federal government to set standards and regulations that would supersede state rules; until that happens, the law can serve as an important backstop.