
California’s AI safety bill is under fire. Enshrining it in law is the best way to improve it

On August 29, the California State Legislature passed Senate Bill 1047 – the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – and sent it to Governor Gavin Newsom for his signature. Newsom’s decision, due by Sept. 30, is binary: veto the bill or sign it into law.

Given the potential harm that advanced AI can cause, SB 1047 requires developers of so-called “covered models” to adopt safeguards when developing and deploying them. The California Attorney General can enforce these requirements by filing civil lawsuits against developers who fail to “take reasonable care” to ensure that 1) their models do not cause catastrophic harm, or 2) their models can be shut down in an emergency.

Many prominent AI companies oppose the bill, either individually or through industry associations. Their objections include concerns that the definition of covered models is too inflexible to account for technological advances; that it is inappropriate to hold them responsible for harmful applications that others develop; and that the bill overall will stifle innovation and hinder small start-ups that do not have the resources to comply with the regulations.

These objections should not be taken lightly; they deserve consideration and, most likely, further amendments to the bill. But the governor should sign it anyway, because a veto would signal that no regulation of AI is acceptable now, and probably none will be until or unless catastrophic harm occurs. That is not the right stance for governments to take toward this technology.

The bill’s author, Senator Scott Wiener (D-San Francisco), engaged with the AI industry on several versions of the bill before it was finally passed. At least one major AI company – Anthropic – requested specific and significant changes to the text, many of which were incorporated into the final bill. Since the legislature passed the bill, Anthropic’s CEO has said that “its benefits are likely to outweigh its costs… (although) some aspects of the bill (still) appear concerning or ambiguous.” Public evidence to date suggests that most other AI companies have chosen to oppose the bill simply on principle, rather than making a concrete effort to change it.

What should we make of this resistance, especially since the executives of some of these companies have publicly expressed their concerns about the potential dangers of advanced AI? In 2023, for example, the CEOs of OpenAI and Google’s DeepMind signed an open letter comparing the risks of AI to pandemics and nuclear wars.

A reasonable conclusion is that, unlike Anthropic, they oppose any kind of binding regulation. They want to reserve the right to decide for themselves when the risks of an activity, a research project or a deployed model outweigh its benefits. More important, they want those who build applications on top of their covered models to bear full responsibility for risk mitigation. Recent court cases have held that parents who give their children guns bear some legal responsibility for the consequences. Why should AI companies be treated any differently?

The AI companies want the public to give them free rein despite an obvious conflict of interest: profit-driven companies should not be trusted to make decisions that might impede their own pursuit of profit.

We have already seen how this plays out. In November 2023, OpenAI’s board fired its CEO because it determined that the company was heading down a dangerous technological path under his leadership. Within days, various OpenAI stakeholders managed to reverse that decision, reinstating him and forcing out the board members who had called for his dismissal. Ironically, OpenAI was specifically structured to allow the board to act as it did – despite the company’s profit potential, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, opponents of regulation will declare a victory that proves the wisdom of their position, and they will have little incentive to work on alternative legislation. They benefit from the absence of meaningful regulation, and a veto would let them preserve that status quo.

Alternatively, the governor could sign SB 1047 into law and openly challenge its opponents to help fix its specific deficiencies. Faced with what they see as an imperfect law, opponents would have a significant incentive to work to improve it, and to do so in good faith. The right approach would be for industry, not government, to put forward its view of what constitutes reasonable care regarding the safety of its advanced models; the government’s role would be to ensure that industry does what industry itself says it should.

The consequences of rejecting SB 1047 and maintaining the status quo are significant: Companies would be able to continue developing their technologies without restrictions. Accepting an imperfect bill would be a significant step toward a better regulatory environment for all parties involved. It would be the beginning rather than the end of the AI regulatory game. This first step sets the tone for what is to come and establishes the legitimacy of AI regulation. The governor should sign SB 1047.

Herbert Lin is a senior fellow at Stanford University’s Center for International Security and Cooperation and a fellow of the Hoover Institution. He is the author of Cyber Threats and Nuclear Weapons.
