
Newsom vetoes US’s first bill aimed at regulating large-scale artificial intelligence

Governor Gavin Newsom vetoed California's controversial AI bill, SB 1047, which would have held companies liable for harm done by large artificial intelligence systems.

California Governor Gavin Newsom vetoed a first-of-its-kind state bill that would have enacted the most far-reaching artificial intelligence regulation in the country.

The measure, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have required safety measures from companies that spend more than $100 million to train AI models. It aimed to prevent catastrophic harms, such as mass-casualty events, and included a requirement for a “kill switch” to completely shut down a rogue model.

California is home to some of AI’s biggest players, including OpenAI, Anthropic, Google (GOOG), and Meta (META). However, in his veto message on Sunday afternoon, Newsom said SB 1047 is “well-intentioned,” but it “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Along with the veto, Newsom announced he’s working with leading experts — including “the godmother of AI” Fei-Fei Li — to set guardrails around the deployment of GenAI. He also ordered state agencies to expand their assessment of the risks associated with the technology.

Regulating the technology has been a flashpoint in Silicon Valley and beyond. OpenAI, Google and Meta publicly opposed the bill. Anthropic, backed by Amazon (AMZN), cautiously supported it after suggesting amendments to its original version.

Despite the pushback from big tech, more than 100 current and former employees from Google, Meta, OpenAI, and Anthropic called on Newsom to sign the legislation earlier this month, expressing concerns that “the most powerful AI models may soon pose severe risks.”

More than 125 Hollywood actors, directors and entertainment leaders had also urged Newsom to sign the bill, writing in a letter: “we fully believe in the dazzling potential of AI to be used for good. But we must also be realistic about the risks.”

SB 1047 had to thread the needle between encouraging innovation in a rapidly changing industry and ensuring the technology is used responsibly.

Newsom discussed his concerns over SB 1047 with Salesforce (CRM) CEO Marc Benioff at the annual Dreamforce conference earlier this month. “The impact of signing wrong bills over the course of a few years could have a profound impact,” Newsom said, referring to the state’s competitiveness.

“This is a space where we dominate and I want to maintain our dominance. I want to maintain our innovation. I want to maintain our ecosystem. I want to continue to lead. At the same time, you feel a deep sense of responsibility to address some of those more extreme concerns that I think many of us have, even the biggest and strongest promoters of this technology have.”

Supporters of the legislation include billionaire tech CEO Elon Musk, who runs the AI model company xAI, along with the so-called “godfathers of AI,” Yoshua Bengio and Geoffrey Hinton.

California’s artificial intelligence bill, SB 1047.

The main author of the bill, California state Senator Scott Wiener, has said it’s a reasonable framework for an under-regulated technology. Wiener has been vocal about the need for a strong federal law that would set nationwide guardrails for all developers.

However, Wiener isn’t hopeful that a national AI safety bill will become a reality anytime soon, calling Congress “completely paralyzed when it comes to technology policy” at a press conference last month.

“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom wrote, adding, “I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”

The bill was poised to reshape the future of AI, and numerous tech executives have weighed in.

“There are risks associated with AI, less to do with the models themselves and more to do with what the models are allowed to do in the real world if left completely unsupervised,” Affirm (AFRM) CEO Max Levchin told Yahoo Finance at the Goldman Sachs Communacopia and Tech Conference.

“So I’m not diminishing or dismissing the need for controls, and model governance and oversight and thoughtful rule making. I would just not want to ‘shut it all down’ to quote another AI doomerist.”

Although the bill passed the state Assembly 48-16 (seven Democrats voted no) and the Senate 30-9 (one Democrat voted no) in August, it has faced political opposition from California Democrats.

Critics of SB 1047 include eight California House members — Ro Khanna, Zoe Lofgren, Anna G. Eshoo, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Díaz Barragán and Lou Correa — as well as longtime Newsom ally, former House Speaker Nancy Pelosi.

Over the last month, Newsom has signed 17 AI-related bills aimed at combating deepfake election content, protecting actors’ and entertainers’ digital likenesses, and regulating sexually explicit content created by AI, among other measures.

The new set of laws will require developers and social media companies to prevent irresponsible uses of their platforms, including the spread of deceptive content.

While that legislation addresses the immediate dangers of AI, SB 1047 looked ahead to some of the most extreme risks posed by advanced models.

Speaking before the United Nations General Assembly on Tuesday, President Joe Biden called on world leaders to establish AI standards that protect human life.

“This is just the tip of the iceberg of what we need to do to manage this new technology,” President Biden said. “In the years ahead, there may well be no greater test of our leadership than how we deal with AI.”

Yasmin Khorram is a Senior Reporter at Yahoo Finance. Follow Yasmin on Twitter/X @YasminKhorram and on LinkedIn. Send newsworthy tips to Yasmin: [email protected]
