
Why California’s AI bill could hurt more than it helps

While the goal of safe AI is crucial, onerous demands and the creation of government bureaucracies are not the solution.

California’s proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act attempts to improve safety by requiring developers to certify that their AI models are not dangerous. In truth, the law would slow down critical AI advancements in health care, education, and other fields by discouraging innovation and reducing competition.

Over the past few years, AI has revolutionized diagnostics with algorithms that are increasingly capable of detecting diseases like cancer and heart conditions with unprecedented accuracy. AI-driven tools have streamlined the drug discovery process, reducing the time and cost of bringing new treatments to market. In education, AI-powered platforms have personalized learning experiences, adapting to individual students’ needs and improving engagement and outcomes.

Freedom to develop has allowed for rapid experimentation and implementation of AI technologies, leading to remarkable advancements benefiting society. However, many people are concerned about the long-term impacts AI could have.

California Senate Bill 1047, introduced by Sen. Scott Wiener, D-San Francisco, aims to prohibit worst-case harmful uses of AI, such as creating or deploying weapons of mass destruction or using AI to launch cyberattacks on critical infrastructure that cause hundreds of millions of dollars in damage.

To prevent these doomsday scenarios, the bill would require developers to provide a newly created government agency with an annual certification affirming that their AI models do not pose a danger. This certification would be required even before training of the AI model begins. However, it is difficult to accurately predict all potential risks of a model at such an early stage. Moreover, the responsibility for causing harm should fall on the actor who committed the wrongdoing, not the developer of the model. Holding developers responsible for all possible outcomes discourages innovation and unfairly burdens those who may have no control over how their models are used. This extensive compliance burden is also costly, especially for small startups that don’t have legal teams. Developers of AI models are instead likely to leave California for friendlier jurisdictions to conduct their training activities and other operations.

Violations of the law could lead to penalties of up to 30% of the cost of creating an AI model. For small businesses, this could mean devastating financial losses. The bill also introduces the threat of criminal liability under perjury laws if a developer, in bad faith, falsely certifies an AI model as safe. That may sound straightforward, but the law’s ambiguous framework and unclear definitions leave developers at the mercy of how state regulators perceive any glitches in their AI models. In an industry where experimentation and iteration are crucial to progress, such severe penalties could chill creativity and slow down advancements.

While the bill intends to target only large and powerful AI models, it uses vague language that could also apply to smaller AI developers. The bill focuses on models that meet a high threshold of computing power typically accessible only to major corporations with significant resources. However, it also applies to models with “similar capabilities,” broad phrasing that could extend the bill’s reach to almost all future AI models.

The bill would also require all covered AI models to include a “kill switch” that can shut them down to prevent imminent threats, and it would authorize the state to force developers to delete their models if they fail to meet state safety standards, potentially erasing years of research and investment. While the shutdown requirement might make sense in dangerous situations, it is not foolproof. For instance, forcing a shutdown switch onto an AI system managing the electricity grid could create a vulnerability that hackers might exploit to cause widespread power outages. Thus, while mitigating certain risks, this solution simultaneously exposes critical infrastructure to new potential cyberattacks.

While the goal of safe AI is crucial, onerous demands and the creation of government bureaucracies are not the solution. Instead, policymakers should work with AI experts to create environments conducive to AI’s safe growth.

Jen Sidorova is a policy analyst, and Nicole Shekhovstova is a technology policy intern at Reason Foundation.
