
California lawmakers vote on AI safety bill despite opposition from tech companies

A bill designed to reduce the risk of artificial intelligence being used for nefarious purposes, such as cyberattacks or the development of biological weapons, is set to be voted on in the California State Legislature this week.

California Senate Bill 1047, authored by State Senator Scott Wiener, would be the first law of its kind in the U.S. to require safety testing by AI companies building large models.

California lawmakers are considering dozens of AI-related bills this session. But Wiener’s proposal, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, has drawn national attention as it faces vocal opposition from Silicon Valley, the hotbed of U.S. AI development. Opponents say creating burdensome technical requirements and potential fines in California would effectively stifle the state’s innovation and global competitiveness.

OpenAI is the latest AI developer to voice its opposition. In a letter Wednesday, the company argued that AI regulation should be left to the federal government and claimed that companies would leave California if the bill passed.

The State Assembly is now set to vote on the bill, which Wiener recently amended in response to criticism from the tech industry. However, he says the new wording does not address all of the issues raised by the industry.

“This is a sensible, not too strict bill that will not hinder innovation in any way, but will help us anticipate the risks that any powerful technology brings,” Wiener told reporters during a press conference on Monday.

What would the bill achieve?

The bill would require companies developing large AI models that cost more than $100 million to train to limit any significant risks they discover in the system during safety testing. This includes building a "full shutdown" capability – a way to shut down a potentially unsafe model in an emergency.

Developers would also have to create a technical plan to address safety risks and keep a copy of that plan for as long as the model is available, plus five years. Companies like Google, Meta and OpenAI with large AI operations have already made voluntary commitments to the Biden administration to address AI risks, but the California legislation would impose legal obligations and enforcement options on them.

Each year, an outside auditor would assess whether the company is in compliance with the law. In addition, companies would be required to document compliance with the law and report any safety incidents to the California Attorney General. The Attorney General’s office could impose civil penalties of up to $50,000 for first-time violations and an additional $100,000 for subsequent violations.

What are the points of criticism?

Much of the tech industry criticized the bill as being too burdensome. Anthropic, an emerging AI company that markets itself as safety-focused, had argued that an earlier version of the bill would have created complex legal obligations that would hinder AI innovation. These included, for example, the ability for the California Attorney General to sue for negligence even if no safety disaster occurred.

OpenAI said that if the law passes, companies will leave California to avoid its requirements. OpenAI also insisted that AI regulation should be left to Congress to avoid a confusing patchwork of laws being passed in states.

Wiener said the idea that companies could move out of California was a “stupid argument,” noting that the law’s provisions would still apply to companies offering services to Californians even if they were not headquartered there.

Last week, eight members of the U.S. Congress urged Gov. Gavin Newsom to veto SB-1047 because it would create obligations for companies that make and use AI. Rep. Nancy Pelosi joined her colleagues in opposition, calling the measure "well-intentioned but ill-informed." (Wiener has his eye on the Speaker Emerita's House seat, according to Politico, which could set up a future confrontation with Pelosi's daughter Christine Pelosi.)

Pelosi and her congressional colleagues are siding with the “godmother of AI,” Dr. Fei-Fei Li, a Stanford University computer scientist and former Google researcher. In a recent opinion piece, Li said the legislation would “harm our burgeoning AI ecosystem,” particularly smaller developers who “are already at a disadvantage compared to today’s tech giants.”

What do supporters say?

The bill has received support from various AI startups, Notion co-founder Simon Last and the “godfathers” of AI, Yoshua Bengio and Geoffrey Hinton. Bengio said the legislation is “a positive and common sense step” to make AI safer while encouraging innovation.

Supporters of the bill fear that powerful AI deployed without adequate safeguards could have serious consequences and pose existential threats, such as heightened risks to critical infrastructure and the development of nuclear weapons.

Wiener defended his "sensible, light-touch" legislation, pointing out that it would only require safety measures from the largest AI companies. He also praised California's leadership in U.S. technology policy and expressed doubt that Congress would pass substantive AI legislation in the near future.

“California has repeatedly stepped in to protect our citizens and fill the void left by Congressional inaction,” Wiener responded, pointing to the federal government’s lack of action on data privacy and social media regulation.

What happens next?

The latest changes address many of the concerns raised by the AI industry, Wiener said in his recent statement on the bill. The current version imposes civil penalties for lying to the government rather than criminal penalties, as the bill originally proposed, and it also removed a proposal for a new government regulator that would oversee AI models.

Anthropic said in a letter to Newsom that the benefits of the amended legislation likely outweigh the potential harm to the AI industry. The main benefits are transparency to the public about the safety of AI and an incentive for companies to invest in risk mitigation. However, Anthropic is still concerned about the possibility of overly broad enforcement and reporting requirements.

“We believe it is critical to have a framework for managing frontier AI systems that roughly meets these three requirements,” Anthropic CEO Dario Amodei told the governor, whether that framework is SB-1047 or not.

California lawmakers have until the end of the legislative session on August 31 to pass the bill. If passed, it would go to Governor Gavin Newsom for final approval by the end of September. The governor has not indicated whether he plans to sign the bill.
