Beware Calls for AI Regulation

Late last May, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee on the ascendant technology of generative AI. The putative motivation for Altman’s attendance at the Senate hearing was to allay congressmen’s concerns emanating from a combination of ignorance and Matrix-Terminator-Robocop dystopian fiction. The actual motivation was a combination of protectionist concern about the technology’s negative distributional effects on domestic labor markets, worry over the dissemination of misinformation, and outrage that the technology is outpacing the administrative state’s regulatory apparatus.

Though congressional treatment of tech firms is often antagonistic, as exemplified by Senator Josh Hawley’s (R-Mo.) belligerent questioning of Google CEO Sundar Pichai, the relationship between corporations and the state is usually one of mutual parasitism—with consumers as the host organism. Given Altman’s supplications for regulation—including a recent call for an international AI regulatory agency—private meetings with senators, and dinner with House members, it should surprise precisely nobody that he found “a friendly audience in the members of the subcommittee” on privacy, technology, and the law.

Altman was practically courting policymakers for protection, albeit under the guise of concern for the commonweal. The former president of Y Combinator has no excuse for vague hand-waving such as “if this technology goes wrong, it can go quite wrong.” If Altman has specific concerns that genuinely affect the public, he should have articulated them straightforwardly.

Though OpenAI was founded as a nonprofit in 2015, it became a capped-profit company in 2019. There is nothing ipso facto wrong with changing to a for-profit model. OpenAI required massive amounts of capital to afford tens of thousands of H100 GPUs (~$40k per GPU), to attract talent, and to cover the tens of millions of dollars it costs to train the large language models behind ChatGPT. To meet these expenses, OpenAI needed to attract shareholders, talent—including from startups like Inflection and Adept—and strategic investments from corporate competitors the way all firms do: with the promise of higher discounted future returns.

The result? A particularly user-friendly generative AI, accessible to the public for free.

Nevertheless, since OpenAI is in the (constrained) profit-maximizing business, it is subject to the perverse incentive to seek rents—and returns for its shareholders—through regulatory capture.

Instead of maintaining high profit margins through costly, relentless innovation and iterative improvements to ChatGPT, OpenAI can reduce the number of firms entering the market by government fiat.

The proposal advocated by Altman at the hearing? Per New York Times reporting, “an agency that issues licenses for the development of large-scale A.I. models, safety regulations and tests that A.I. models must pass before being released to the public.”

Read: Hurdles, obstacles, and barriers to entry. 

The capital requirements alone make market entry difficult; regulatory capture makes it virtually impossible.

As Don Lavoie avers in National Economic Planning: What Is Left? (1985), central planning was “nothing more nor less than governmentally sanctioned moves by leaders of the major industries to insulate themselves from risk and the vicissitudes of market competition.”

Regulation is merely central planning’s less ambitious corollary: in the words of Lavoie, a means for the corporate elite “to use government power to protect their profits from the threat of rivals.” For more on Don Lavoie and an incisive analysis of his contributions to the Knowledge Problem, we refer the reader to Cory Massimino’s piece for EconLib.

In reality, OpenAI is pursuing several strategies at once: it is profiting from the first-mover advantage conferred by its research and development efforts, and, more recently, it has entered a partnership with Apple to bundle ChatGPT with services like Siri, leveraging the incumbent firm’s pre-existing network of devices and apps. At the same time, OpenAI is attempting to maximize rents through regulatory capture. Though all three strategies reduce allocative efficiency, the first two are dynamically efficient while the third is not; the first two increase total surplus, and the third destroys it.

One would think that the current Neo-Brandeisian FTC regime would sound the alarm about such an obvious bid to restrict market entry and facilitate collusion—staff in the Bureau of Competition and the Office of Technology even released a statement, “Generative AI Raises Competition Concerns.” Unsurprisingly, though unfortunately, the regulators do not articulate a single concern about collusion aided and abetted by government intervention.

Go figure!

As AI proceeds apace, Luddism increases, and Congress holds more hearings on regulation, we should regard these purported public lashings with a wary eye toward the all-but-inevitable regulatory capture to follow.

 


Samuel Crombie is the co-founder of actionbase.co and a former Product Manager at Microsoft AI.

Jack Nicastro is an Executive Producer with the Foundation for Economic Education and a research intern at the Cato Institute.
