
AI companies in turmoil as California tackles regulation and growth in tech sector
There is disagreement in Silicon Valley over a groundbreaking bill in California that could radically change the pace of AI development everywhere.

California State Senator Scott Wiener introduced SB 1047 in February. Since then, the bill has gained support from lawmakers from both parties.

It passed the Senate Privacy and Consumer Protection Committee in June and the Assembly Budget Committee last week. The full Assembly is expected to vote on the bill by the end of the month.

California has established itself as a pioneer in the global AI arms race: According to a report by the Brookings Institution, 35 of the 50 largest AI companies are based in the state.

California Governor Gavin Newsom has worked to advance his state’s status as a global AI pioneer. Earlier this year, California unveiled a training program for state employees, hosted a generative AI summit and launched pilot projects to explore how the technology can address challenges like traffic congestion and language accessibility.

This month, the state announced a partnership with chipmaker Nvidia to train state residents on how to use the technology to create jobs, spur innovation and use AI to solve everyday problems.

But for all the excitement about AI’s potential, there is also some concern about its danger to humanity. This means California must walk a fine line between regulating the AI industry and stifling its hoped-for growth.

At an AI event in San Francisco in May, Newsom said, “If we regulate too much, if we indulge too much, if we chase the shiny object, we could put ourselves in a dangerous position.”

Newsom signed an executive order in September that included several provisions for the responsible use of the technology and directed state agencies to examine its optimal use.

Newsom has not commented publicly on SB 1047. The governor’s office did not respond to a request for comment from Business Insider. But if the bill becomes law, it would be the most comprehensive attempt yet to regulate the AI industry.

What SB 1047 would change

The bill’s authors say its goal is to “ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, and reasonable safety standards for developers of the largest and most powerful AI systems.”

The bill would apply to companies that develop models that cost over $100 million to train or that require high computing power. These companies would have to test new technologies for safety before releasing them publicly. They would also have to build the ability to “completely shut down” into their models and could be held liable for applications of the technology.

The bill also sets legal standards for AI developers: It outlines “unlawful acts” for which the California Attorney General could sue companies, regulates whistleblower protections, and establishes a panel to set computational limits for AI models and issue regulatory guidelines.

What Big Tech thinks

The developers who will most likely be affected by the bill were not exactly happy about it.

Companies like Meta, OpenAI and Anthropic, which invest millions in building and training large language models, have been lobbying state legislators for changes. Meta said the original legislation would hamper innovation and make it less likely that companies would release their models as open source.

“The bill also actively discourages the release of open source AI, as vendors would have no way to open source models without exposing themselves to untenable legal liability,” Meta wrote in a letter to Wiener in June. The company argued this would likely harm the small-business ecosystem by reducing the likelihood that startups use “free, readily available, and fine-tuned models to create new jobs, businesses, tools, and services that are often used by other companies, governments, and civil society groups.”

Anthropic, which has positioned itself as a safety-conscious AI company, was not happy with early versions of the bill and lobbied lawmakers for changes. The company called for a greater focus on deterring companies from building unsafe models, rather than enforcing strict rules before any catastrophic incident occurs. It also proposed that companies crossing the $100 million threshold be allowed to set their own standards for safety testing.

The bill has also drawn criticism from venture capitalists, executives and other members of the technology industry. Anjney Midha, a general partner at Andreessen Horowitz, called it “one of the most anti-competitive proposals I’ve seen in a long time.” He believes lawmakers should focus on “regulating certain high-risk applications and malicious end users.”

California lawmakers adopted a handful of the proposed changes, and all of them made it into the latest version of the bill. The updated version now prevents the California Attorney General’s Office from suing AI developers before a catastrophic event occurs. And while the bill originally called for a new government agency to oversee the enforcement of new rules, that has since been scaled back to a panel within the state’s Government Operations Agency.

A spokesperson for Anthropic told BI that they would “review the new wording of the bill as soon as it is available.” Meta and OpenAI did not respond to a request for comment.

Smaller founders are worried, but also somewhat more optimistic.

Arun Subramaniyam, founder and CEO of enterprise-focused generative AI company Articul8, told BI that it remains unclear what “broad powers” the new panel would have or how its members would be appointed. He also believes the bill will affect more than just Big Tech companies, since even well-funded startups could reach the $100 million training threshold.

At the same time, he said he supported the bill’s introduction of a public cloud computing cluster, CalCompute, dedicated to researching the safe deployment of AI models at scale. The cluster could provide a more level playing field for researchers and groups that don’t have the means to evaluate AI models themselves. He also believes the bill’s reporting requirements for developers will increase transparency, which would benefit Articul8’s work in the long run.

He said the future of the industry depends on how general-purpose models are regulated. “It’s good that the government is looking at regulation so early, but the wording needs to be a little more balanced,” he said.
