Senate Bill 1047 will crush AI innovation in California
There are many bad AI bills moving through legislative bodies, but few are more harmful or ill-considered than California’s Senate Bill 1047. Titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, the bill passed the state Senate in May and is scheduled for consideration by the Assembly in August. Unless it is substantially amended, it will crush AI innovation, create a rapacious new state regulator, and usurp Congress’ role in regulating interstate commerce.
AI has already kicked off a revolution that may prove as significant for our species as the Industrial or Digital Revolutions, if it is allowed to thrive. In the healthcare sector alone, AI can monitor health and fitness more effectively and power medical devices such as artificial pancreases that manage diabetes. A 2018 McKinsey study found that AI could add $13 trillion to global GDP by 2030.
SB 1047 threatens all of this potential. A key provision would impose liability on the largest AI developers (those whose systems use vast amounts of computing power and cost more than $100 million to train) for what users do with their products. Developers would have to certify that their models could not cause more than $500 million in damage to critical infrastructure. How could any developer make such a certification when the technology has so many novel uses? And what, exactly, does “cause” mean?
Developers and venture capitalists have made it clear that this bill would pose a threat to the burgeoning industry’s ability to innovate.
“It’s hard to understate just how blindsided startups, founders, and the investor community feel about this bill,” said a16z General Partner Anjney Midha on the company’s podcast.
Meta, one of the most well-capitalized AI developers, agreed. In a letter to the bill’s sponsor, state Sen. Scott Wiener, Deputy Chief Privacy Officer Rob Sherman charged legislators with fundamentally misunderstanding how AI systems are built, saying the bill “therefore would deter AI innovation in California at a time when we should be promoting it.” Making those who built the system liable for the actions of deployers, rather than focusing on the deployers themselves, will chill major investment in these expensive technologies.
Crushing innovation in California, currently the center of the new revolution, will have a ripple effect that will harm innovation elsewhere. If Congress does not prevent regulators from overstepping, it will effectively cede the right to regulate AI to California, a move which will be felt across the nation.
To enforce the new regime, SB 1047 creates a new Frontier Model Division within the state Department of Technology. This regulatory body would be funded by the fees and fines it collects while enforcing the proposed law. That arrangement comes at a time when California’s tax revenue is already more than spoken for, with deficits running in excess of $30 billion.
The Division would thus have no choice but to levy heavy fines and find creative applications of the law to raise revenue. Much as Alexander the Great abolished taxes to secure his throne and then had to conquer ever more territory to fund the Macedonian state, the Frontier Model Division would need to find new violators to cover its operating costs, whether violations exist or not. No government agency should operate under such an incentive.
Then there is the constitutional concern. Though AI models may be developed in Silicon Valley, they do not stay there, and they will increasingly be built across multiple states. AI development is plainly interstate commerce, which states have no right to regulate. And without access to classified information, is the Frontier Model Division even equipped to assess threats to critical infrastructure? California would be claiming national security authorities that states were never intended to have and are not positioned to exercise.
California should not take Congress’s inaction on AI regulation as an invitation to regulate the market in its place. It is not as if Congress is doing nothing; it is merely proceeding with the caution and prudence the California Senate seems to lack. Rather than stand humbly by, Congress should pass a preemption bill, as it did with the Internet Tax Freedom Act in 1998, to prevent bad state laws like this one from strangling AI innovation before it can get started. Otherwise, California will effectively set the law for all 50 states.
Unless substantial changes are made to the structure of the proposed Frontier Model Division and to the liability imposed on developers for the actions of users, SB 1047 will significantly delay AI development. Lifesaving treatments could be postponed, and innovation could stall across sectors. California legislators should think long and hard before rubber-stamping this destructive proposal.
James Erwin is a Young Voices contributor who works on tech and telecom policy in Washington, D.C.