Newsom Intelligently Wields Veto Pen

The news about Gov. Gavin Newsom’s veto of a far-reaching artificial intelligence bill was overheated and somewhat entertaining. “How California politics killed a nationally important AI bill,” blared the headline in a story this week in Politico. The subhead was even more overwrought: “The demise of California’s big swing at artificial intelligence safety underscored the powerful forces arrayed against regulation that’s seen as going too far.”

Actually, the governor detailed a rationale in a long veto message. For starters, he sensibly considered the impact on one of his state’s emergent industries:

California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history. We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom.

I’m not sure that our regulation-crazy state fosters free-spirited pursuits of innovation, but I’ll grant him some creative license. Nevertheless, Senate Bill 1047 would have become the de facto national standard. Instead of trying to protect the public from the misuse or ill side effects of AI technology, it would have inserted the state government into dictating the specifics of model development. It also would have created another bureaucracy.

The legislation suffered from the conceit of all California “landmark” legislating, as it empowered lawmakers to dictate how a company developed a model, even though it’s unlikely any of the regulators would have sufficient expertise to do so. In my newspaper column, I quoted Assemblymember Jim Wood, D-Healdsburg: “I’ll admit I don’t know a lot about AI … very little as a matter of fact … I like the way I may be doing this wrong, better than nobody else is doing anything at all.”

I don’t mean to pick on Wood, who at least copped to his limited understanding, but arguing that the Legislature should do something — even if it does it wrong — because no one else is doing anything seems like a misguided approach to rule-making. That’s especially true given that if California gets it wrong, it could derail a burgeoning technology that offers many potential upsides in addition to posing some obvious and reasonable concerns.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would “require that a developer, before beginning to initially train a covered model, as defined, comply with various requirements, including implementing the capability to promptly enact a full shutdown.” Lawmakers’ effort to stop a catastrophic event seems rather far-fetched, and they downplayed the many upsides of AI technology.

“SB 1047 is designed to limit the potential for ‘critical harm’ which includes ‘the creation or use of a chemical, biological, radiological or nuclear weapon in a manner that results in mass casualties.’ These harms are theoretical,” according to a letter from prominent tech groups urging a veto. “In contrast, the damage to California’s innovation economy is all too real. SB 1047 would introduce burdensome compliance costs and broad regulatory uncertainty as to which models are in scope.”

The specific problem relates to the Legislature’s fundamental approach. “At root, SB 1047 violated a core tenet of smart technology policy: Regulation should not bottle up underlying system capabilities; instead, it should address real-world outputs and system performance,” argues my R Street Institute colleague Adam Thierer, an expert in AI technology. 

In other words, regulations should focus on demonstrable harmful effects rather than preemptive design rules. They should, as U.S. Rep. Jay Obernolte, R-Calif., explained, focus on likely problems rather than neurotic fears from science fiction: “Rather than an army of robots with red laser eyes rising up to take over the world, the real risks of AI include hazards such as deep fakes, unlimited government surveillance and manipulation of public opinion by malign foreign actors.” The rules also need to prioritize freedom and innovation.

The governor seemed to recognize these points, although he vowed to continue searching for other forms of AI regulation. So we’ll have to wait and see what comes up next year. He also signed some dubious AI-related bills this session, but at least, as a Southern California News Group editorial explained, those laws focus on actual problems rather than imaginary ones. 

Other media outlets were aghast at Newsom’s veto, but many of those outlets have an almost magical faith in government intervention. Similar to Politico, the Washington Post portrayed the veto as “a major win for tech companies and venture capitalists who had lobbied fiercely against the legislation.” It was a win for them, but this isn’t so much a political story about lobbying and special interests as it is about the way a free society ought to approach promising new innovations. 

Do we follow the lead of the European Union and stifle new technologies in their infancy or do we let them grow and prosper and try to limit the occasional ill effects that might result? Do we allow the nation’s most progressive Legislature to create a standard for everyone, or do we let the market work, more or less? Whatever Newsom’s reasoning, we should all be relieved by his veto.

Steven Greenhut is Western region director for the R Street Institute. Write to him at sgreenhut@rstreet.org.


The post Newsom Intelligently Wields Veto Pen appeared first on The American Spectator | USA News and Politics.