I remember a professor in graduate school suggesting that the rational expectations revolution would eventually lead to much better models of the macroeconomy. I was skeptical, and in my view that didn't happen.
This isn't because there's anything wrong with the rational expectations approach to macro, which I'm a big believer in. Rather, I believe that most of the progress from this theoretical innovation occurred very quickly. For example, by the time I was having this discussion (around 1979), people like John Taylor and Stanley Fischer had already grafted rational expectations onto sticky wage and price models, contributing to the New Keynesian revolution. Since then, macro seems to have been stuck in a rut, apart from some later innovations from the Princeton School related to the zero lower bound issue.
In my view, the most useful applications of a new conceptual approach tend to emerge quickly in highly competitive fields such as economics, the sciences, and the arts.
In recent years I have had a number of interesting conversations with young people involved in artificial intelligence. These people know far more about AI than I do, so I would encourage readers to take the following with more than a grain of salt. In those discussions, I sometimes expressed skepticism about the future pace of improvement in large language models like ChatGPT. My argument was that exposing LLMs to additional data sets comes with some pretty serious diminishing returns.
Consider someone who has read and understood ten carefully selected books on economics, say a principles text covering macro and micro, plus some intermediate and advanced textbooks. Someone who fully absorbed that material would know quite a bit of economics. Now let them read 100 well-chosen textbooks. How much more economics would they actually know? Certainly not ten times as much. In fact, I doubt they would even know twice as much economics. I suspect the same could be said of other fields, such as biochemistry or accounting.
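To make that intuition concrete, here is a purely illustrative sketch, not anything claimed in the argument itself: suppose we pretend that knowledge grows roughly logarithmically with the amount of material absorbed (the functional form and numbers are my own assumption for illustration). Then going from 10 books to 100 books yields far less than a tenfold gain.

```python
import math

def knowledge(books_read: int) -> float:
    """Toy model: assume knowledge grows logarithmically with books read.
    The functional form and scale are illustrative assumptions, not data."""
    return math.log(1 + books_read)

k10 = knowledge(10)    # stylized "knowledge" after 10 well-chosen books
k100 = knowledge(100)  # stylized "knowledge" after 100 well-chosen books

print(f"10 books:  {k10:.2f}")
print(f"100 books: {k100:.2f}")
print(f"ratio:     {k100 / k10:.2f}x")  # roughly 1.9x, nowhere near 10x
```

Any concave curve would tell the same story: each additional batch of material adds less than the one before it.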
Here's a Bloomberg article that caught my eye:
OpenAI was on the threshold of a milestone. The startup completed an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly outperform previous versions of the technology behind ChatGPT and move closer to its goal: powerful AI that outperforms humans. But the model, known internally as Orion, fell short of the company's desired performance. Indeed, Orion fell short when answering coding questions it had not been trained on. And OpenAI is not alone in recently encountering stumbling blocks. After years of bringing increasingly sophisticated AI products to market, three of the leading AI companies are now seeing diminishing returns from their hugely expensive efforts to build newer models.
Please don't take this as a sign that I'm an AI skeptic. I believe the recent progress in LLMs is extremely impressive, and that AI will ultimately transform the economy in some profound ways. Rather, my point is that progress toward some sort of superintelligence may happen more slowly than some proponents expect.
Why might I be wrong? I'm told that artificial intelligence can be boosted by methods other than simply exposing the models to ever larger data sets, and that the so-called "data wall" can be overcome by other ways of increasing intelligence. But if Bloomberg is right, LLM development is in a bit of a slump due to strongly diminishing returns from additional data.
Is this good news or bad news? It depends on how much weight you place on the risks associated with the development of ASI (artificial superintelligence).