California governor vetoes sweeping AI legislation

Gov. Gavin Newsom on Sunday vetoed a California artificial intelligence safety bill, blocking the most ambitious proposal in the nation aimed at curtailing the growth of the new technology.

The first-of-its-kind bill, SB 1047, would have required safety testing of large AI systems, or models, before their release to the public. It also would have given the state's attorney general the right to sue companies over serious harm caused by their technologies, such as death or property damage. And it would have mandated a kill switch to turn off AI systems in case of potential biowarfare, mass casualties or property damage.

Newsom said that the bill was flawed because it focused too much on regulating the biggest AI systems, known as frontier models, without considering the actual risks and harms posed by the technology. He said legislators should go back and rewrite it for the next session.

"I do not believe this is the best approach to protecting the public from real threats posed by the technology," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it."

The decision to kill the bill is expected to set off fierce criticism from some tech experts and academics who have pushed for the legislation. Newsom, a Democrat, had faced strong pressure to veto the bill, which became embroiled in a heated national debate over how to regulate AI. A flurry of lobbyists descended on his office in recent weeks, some promoting the technology's potential for great benefits, others warning of its potential to cause irreparable harm to humanity.

California was poised to become a standard-bearer for regulating a technology that has exploded into public consciousness with the release of chatbots and realistic image and video generators in recent years. In the absence of federal legislation, California's Legislature took an aggressive approach to reining in the technology with its proposal, which both houses passed nearly unanimously.

While lawmakers and regulators globally have sounded the alarm over the technology, few have taken action. Congress has held hearings, but no legislation has made meaningful progress. The European Union passed the AI Act, which restricts the use of riskier technology like facial recognition software.

In the absence of federal legislation, Colorado, Maryland, Illinois and other states have enacted laws to require disclosures of AI-generated "deepfake" videos in political ads, ban the use of facial recognition and other AI tools in hiring and protect consumers from discrimination in AI models.

But California's AI bill garnered the most attention because it focused on regulating the most powerful and ambitious AI models, which can cost more than $100 million to develop.

"States and local governments are trying to step in and address the obvious harms of AI technology, and it's sad the federal government is stumped in regulating it," said Patrick Hall, an assistant professor of information systems at Georgetown University. "The American public has become a giant experimental population for the largest and richest companies in world."

California has led the nation on privacy, emissions and child safety regulations, which frequently affect the way companies do business nationwide because they prefer to avoid the challenge of complying with a state-by-state patchwork of laws.

State Sen. Scott Wiener of San Francisco said he had introduced California's AI bill after talking to local technologists and academics who warned about potential dangers of the technology and the lack of action by Congress. Last week, 120 Hollywood actors and celebrities, including Joseph Gordon-Levitt, Mark Ruffalo, Jane Fonda and Shonda Rhimes, signed a letter to Newsom, asking him to sign the bill.

Newsom said the bill needed more input from AI experts in academia and from business leaders to develop a deeper, science-backed analysis of frontier models' capabilities and risks.

The California governor said that the bill was "well-intentioned" but left out key ways of measuring risk and other consumer harms. He said that the bill "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data."

Newsom said he had asked several technology and legal scholars to help come up with regulatory guardrails for generative AI, including Fei-Fei Li, a professor of computer science at Stanford; Mariano-Florentino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at the University of California, Berkeley.

Li, whom Newsom referred to as the "godmother of AI," wrote in an opinion piece last month that the bill would "harm our budding AI ecosystem" and give the biggest AI companies an advantage by penalizing smaller developers and academic researchers who would have to meet testing standards.

OpenAI, Google, Meta and Microsoft opposed the legislation, saying it could stifle innovation and set back the United States in the global race to dominate AI. Venture capital investors, including Andreessen Horowitz, said the measure would hurt AI startups that didn't have the resources required to test their systems.

Several California representatives in Congress wrote to Newsom warning that the bill targeted hypothetical harms and unnecessarily imposed safety standards on a nascent technology. Rep. Nancy Pelosi, the former House speaker, also asked her fellow Democrat to veto the bill.

"While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit," Pelosi wrote in an open letter last month.

Other technologists and some business leaders, including Elon Musk, took the opposite position, saying the potential harms of AI are too great to postpone regulations. They warned that AI could be used to disrupt elections with widespread disinformation, facilitate biowarfare and create other catastrophic situations.

Musk posted last month on X, his social media site, that it was a "tough call" but that "all things considered," he supported the bill because of the technology's potential risks to the public. Last year, Musk founded the AI company xAI, and he is the CEO of Tesla, an electric vehicle manufacturer that uses AI for self-driving.

This month, 50 academics sent a letter to Newsom describing the bill as "reasonable" and an important deterrent to the rapid deployment of unsafe models.

"Decisions about whether to release future powerful AI models should not be taken lightly, and they should not be made purely by companies that don't face any accountability for their actions," wrote the academics, including Geoffrey Hinton, a University of Toronto professor known as the "godfather" of AI.

Amba Kak, president of the AI Now think tank and a former adviser on AI to the Federal Trade Commission, said, "When debates about regulating AI get reduced to Silicon Valley infighting, we lose sight of the broader stakes for the public."
