At the end of May last year, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee about the emerging technology of generative AI. The stated motivation for Altman's presence at the Senate hearing was to assuage congressional members' concerns, which stemmed from a combination of ignorance and dystopian Matrix-Terminator-Robocop fiction. The actual motivation was a combination of protectionist worries about the technology's disruptive effects on domestic labor markets, the spread of disinformation, and outrage that the technology is outpacing the regulatory apparatus of the administrative state.
Although Congress's treatment of tech companies is often hostile (as evidenced by Senator Josh Hawley's (R-Mo.) belligerent treatment of Google CEO Sundar Pichai), the relationship between companies and the state is usually one of mutual parasitism, with consumers as the host organisms. Given Altman's pleas for regulation, including a recent call for an international AI regulator, private meetings with senators, and a dinner with members of the House of Representatives, it should surprise no one that he "found a friendly audience in the members of the subcommittee" on Privacy, Technology, and the Law.
Altman all but begged policymakers for protection, albeit under the guise of concern for the common good. The former president of Y Combinator has no excuse for such vague hand-wringing as "if this technology goes wrong, it could go completely wrong." If Altman has specific concerns that genuinely affect the public, he should have articulated them clearly.
Although OpenAI was founded as a non-profit organization in 2015, it became a capped-profit company in 2019. There is nothing ipso facto wrong with the switch to a for-profit model. OpenAI needed huge amounts of capital to fund tens of thousands of H100 GPUs (~$40,000 per GPU), to attract talent, and to cover the tens of millions of dollars required to train its large language model, ChatGPT. To pay for these costs, OpenAI had to attract shareholders, talent (including from startups like Inflection and Adept), and strategic investments from corporate competitors, as all companies do: with the promise of higher future returns.
The result? A particularly user-friendly generative AI accessible free to the public.
Nevertheless, because OpenAI is now engaged in (capped) profit maximization, it faces the perverse incentive to secure revenues (and returns for its shareholders) through regulation.
Rather than maintaining high profit margins through costly, relentless innovation and iterative improvements to ChatGPT, OpenAI can reduce the number of companies entering the market by government fiat.
The proposal Altman advocated at the hearing? As The New York Times reports: "an agency that licenses the development of large-scale AI models, safety regulations, and tests that AI models must pass before being released to the public."
Read: hurdles, obstacles, and barriers to entry.
The required capital investment already makes market entry difficult; regulatory capture makes it virtually impossible.
As Don Lavoie argued in National Economic Planning: What Is Left? (1985), central planning was "nothing more or less than government-sanctioned steps by leaders of major industries to protect themselves from the risks and vicissitudes of market competition."
Regulation is merely a less ambitious form of central planning: in Lavoie's words, a means for corporate elites "to use the power of government to protect their profits from the threat of rivals." For more on Don Lavoie and a sharp analysis of his contributions to the knowledge problem, we refer the reader to Cory Massimino's piece for EconLib.
In reality, OpenAI has pursued all of these strategies: it benefits from the first-mover advantage earned by its research and development efforts, and more recently it has partnered with Apple to bundle ChatGPT with services like Siri, leveraging the incumbent's existing network of devices and apps. At the same time, OpenAI seeks rents through regulation. Although all three strategies reduce allocative efficiency, the first two are dynamically efficient while the third is not; the first two increase total surplus, and the third destroys it.
You would think that the current Neo-Brandeisian FTC regime would sound the alarm about such an obvious attempt to restrict market entry and facilitate collusion; the staff of the Bureau of Competition & Office of Technology has even issued a statement warning that "Generative AI raises competition concerns." Unsurprisingly, but unfortunately, regulators express no concern about collusion aided and abetted by government intervention.
Go figure!
As AI advances at an ever-increasing pace, Luddism spreads, and Congress holds more regulatory hearings, we should view these apparent public floggings with suspicion and brace for the all-but-inevitable regulation that will follow.
Samuel Crombie is the co-founder of actionbase.co and a former product manager at Microsoft AI.
Jack Nicastro is an executive producer at the Foundation for Economic Education and a research intern at the Cato Institute.