California lawmakers are trying to regulate AI before it’s too late. Here’s how

Apple Intelligence prompts a user for permission to use ChatGPT in this illustration photo.
(NurPhoto via Getty Images)

For four years, Jacob Hilton worked for one of the most influential startups in the Bay Area — OpenAI. His research helped test and improve the truthfulness of AI models such as ChatGPT. He believes artificial intelligence can benefit society, but he also recognizes the serious risks if the technology is left unchecked.

Hilton was among 13 current and former OpenAI and Google employees who this month signed an open letter that called for more whistleblower protections, citing broad confidentiality agreements as problematic.

“The basic situation is that employees, the people closest to the technology, they’re also the ones with the most to lose from being retaliated against for speaking up,” says Hilton, 33, who lives in Berkeley and is now a researcher at the nonprofit Alignment Research Center.

California legislators are rushing to address such concerns through roughly 50 AI-related bills, many of which aim to place safeguards around the rapidly evolving technology, which lawmakers say could cause societal harm.

However, groups representing large tech companies argue that the proposed legislation could stifle innovation and creativity, causing California to lose its competitive edge and dramatically change how AI is developed in the state.

The effects of artificial intelligence on employment, society and culture are wide-reaching, and that’s reflected in the number of bills circulating in the Legislature. They cover a range of AI-related fears, including job replacement, data security and racial discrimination.

One bill, co-sponsored by the Teamsters, aims to mandate human oversight on driverless heavy-duty trucks. A bill backed by the Service Employees International Union attempts to ban the automation or replacement of jobs by AI systems at call centers that provide public benefit services, such as Medi-Cal. Another bill, written by Sen. Scott Wiener (D-San Francisco), would require companies developing large AI models to do safety testing.

An Assembly bill would give actors and artists a way to nullify provisions that allow studios to digitally clone their voices, faces and bodies with AI.

The plethora of bills comes after politicians were criticized for failing to crack down on social media companies until it was too late. During the Biden administration, federal and state Democrats have become more aggressive in going after big tech firms.

“We’ve seen with other technologies that we don’t do anything until well after there’s a big problem,” Wiener said. “Social media had contributed many good things to society ... but we know there have been significant downsides to social media, and we did nothing to reduce or to mitigate those harms. And now we’re playing catch-up. I prefer not to play catch-up.”

The push comes as AI tools are quickly progressing. They read bedtime stories to children, sort drive-through orders at fast-food locations and help make music videos. While some tech enthusiasts tout AI’s potential benefits, others fear job losses and safety issues.

“It caught almost everybody by surprise, including many of the experts, in how rapidly [the tech is] progressing,” said Dan Hendrycks, director of the San Francisco-based nonprofit Center for AI Safety. “If we just delay and don’t do anything for several years, then we may be waiting until it’s too late.”

Wiener’s bill, SB 1047, which is backed by the Center for AI Safety, calls for companies building large AI models to conduct safety testing and have the ability to turn off models that they directly control.

The bill’s proponents say it would protect against scenarios such as AI being used to create biological weapons or shut down the electrical grid. The bill also would require AI companies to implement ways for employees to file anonymous concerns. The state attorney general could sue to enforce safety rules.

“Very powerful technology brings both benefits and risks, and I want to make sure that the benefits of AI profoundly outweigh the risks,” Wiener said.

Opponents of the bill, including TechNet, a trade group whose members include Meta, Google and OpenAI, say policymakers should move cautiously. Meta and OpenAI did not return requests for comment. Google declined to comment.

“Moving too quickly has its own sort of consequences, potentially stifling and tamping down some of the benefits that can come with this technology,” said Dylan Hoffman, executive director for California and the Southwest for TechNet.

The bill passed the Assembly Privacy and Consumer Protection Committee on Tuesday and will next go to the Assembly Judiciary Committee and Assembly Appropriations Committee, and if it passes, to the Assembly floor.

Proponents of Wiener’s bill say they’re responding to the public’s wishes. In a poll of 800 potential voters in California commissioned by the Center for AI Safety Action Fund, 86% of participants said it was an important priority for the state to develop AI safety regulations. According to the poll, 77% of participants supported the proposal to subject AI systems to safety testing.

“The status quo right now is that, when it comes to safety and security, we’re relying on voluntary public commitments made by these companies,” said Hilton, the former OpenAI employee. “But part of the problem is that there isn’t a good accountability mechanism.”

Another bill with sweeping implications for workplaces is AB 2930, which seeks to prevent “algorithmic discrimination,” or when automated systems put certain people at a disadvantage based on their race, gender or sexual orientation when it comes to hiring, pay and termination.

“We see example after example in the AI space where outputs are biased,” said Assemblymember Rebecca Bauer-Kahan (D-Orinda).

The anti-discrimination bill failed in last year’s legislative session, with major opposition from tech companies. Reintroduced this year, the measure initially had backing from high-profile tech companies Workday and Microsoft, although they have wavered in their support, expressing concerns over amendments that would put more responsibility on firms developing AI products to curb bias.

“Usually, you don’t have industries saying, ‘Regulate me,’ but various communities don’t trust AI, and what this effort is trying to do is build trust in these AI systems, which I think is really beneficial for industry,” Bauer-Kahan said.

Some labor and data privacy advocates worry that language in the proposed anti-discrimination legislation is too weak. Opponents say it’s too broad.

Chandler Morse, head of public policy at Workday, said the company supports AB 2930 as introduced. “We are currently evaluating our position on the new amendments,” Morse said.

Microsoft declined to comment.

The threat of AI is also a rallying cry for Hollywood unions. The Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists negotiated AI protections for their members during last year’s strikes, but the risks of the tech go beyond the scope of union contracts, said actors guild National Executive Director Duncan Crabtree-Ireland.

“We need public policy to catch up and to start putting these norms in place so that there is less of a Wild West kind of environment going on with AI,” Crabtree-Ireland said.

SAG-AFTRA has helped draft three federal bills related to deepfakes (misleading images and videos often involving celebrity likenesses), along with two measures in California, including AB 2602, that would strengthen workers’ control over the use of their digital images. The legislation, if approved, would require that workers be represented by their union or legal counsel for agreements involving AI-generated likenesses to be legally binding.

Tech companies warn against overregulation. Todd O’Boyle, of the tech industry group Chamber of Progress, said California AI companies may opt to move elsewhere if government oversight becomes overbearing. It’s important for legislators to “not let fears of speculative harms drive policymaking when we’ve got this transformative, technological innovation that stands to create so much prosperity in its earliest days,” he said.

When regulations are put in place, it’s hard to roll them back, warned Aaron Levie, chief executive of the Redwood City-based cloud computing company Box, which is incorporating AI into its products.

“We need to actually have more powerful models that do even more and are more capable,” Levie said, “and then let’s start to assess the risk incrementally from there.”

But Crabtree-Ireland said tech companies are trying to slow-roll regulation by making the issues seem more complicated than they are and by saying they need to be solved in one comprehensive public policy proposal.

“We reject that completely,” Crabtree-Ireland said. “We don’t think everything about AI has to be solved all at once.”
