How Will A.I. Regulation Move Forward in the US After California’s SB 1047 Veto?

On Sept. 29, California Governor Gavin Newsom (D) vetoed a controversial piece of A.I. legislation. The bill, formally the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act and better known as SB 1047, sought to require companies developing sophisticated A.I. models to comply with a range of safety guidelines, even authorizing potential criminal or civil action against A.I. developers. Critics took issue with the bill's focus on catastrophic harms, arguing its requirements could disproportionately burden small developers.

“While well-intentioned, SB 1047 does not take into account whether an A.I. system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it,” Newsom wrote in his veto letter. “I do not believe this is the best approach to protecting the public from real threats posed by the technology.” 

However, the veto doesn't mean A.I. regulation has ground to a halt in California or the broader U.S. In fact, in the same breath that Newsom vetoed SB 1047, he signed into law AB 2013, which requires transparency around what data a developer uses to train a generative A.I. system or service.

Meanwhile, the 2024 legislative session was busy on the A.I. front, with 45 states (plus Washington, D.C., Puerto Rico and the U.S. Virgin Islands) introducing A.I. bills. For example, the Colorado AI Act requires developers of high-risk A.I. systems to avoid algorithmic discrimination; it was the first law of its kind in the U.S., putting it ahead of even the European Union's Artificial Intelligence Act (AI Act), which took effect in August.

“I am sure that this particular piece of legislation is going to come back at the next legislative session,” Tatiana Rice, deputy director for U.S. Legislation at Future of Privacy Forum, a nonprofit, non-partisan think tank, said of SB 1047. “Next year, perhaps the idea of regulating frontier foundation models will be more ripe when there is more standardization on the federal or international level.”

On the federal level, any regulatory movement will depend largely on who becomes the next U.S. president. "Who wins the election will determine where A.I. policy is headed on the federal level, which will, of course, impact where the states want to fill in gaps," Rice said.

Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals (IAPP), anticipates more sector-specific regulation to come in California and beyond. “What is acceptable for A.I. being used to diagnose a health condition should be different from A.I. being used to power driverless cars,” Casovan told Observer. 

Many organizations with which the IAPP works agree that regulation should allow for innovation but provide more clarity on oversight. Casovan said this is of particular interest for “those who will be the ones using and deploying A.I. as opposed to building them,” a camp that many companies fall into.

As for enforcement, attorneys general in states with A.I. laws are each playing a different game. Craig Smith, an intellectual property attorney at Lando & Anastasi, LLP, worries about how a patchwork of A.I. legislation without federal guidance might play out in implementation and enforcement. "Individual states could impose different and potentially inconsistent obligations on A.I. development and use," Smith told Observer.

With many tech companies operating internationally, the U.S. could learn from the E.U.’s AI Act and how it pans out in the coming months, according to Rice. “That doesn’t mean that they have to identically mimic what the E.U. is doing,” she said, adding that having some level of standardization regarding definitions, interpretations and intentions could make a world of difference.