Russ Roberts has been covering both sides of the artificial intelligence (AI) debate for more than ten years. A recent episode of EconTalk is optimistic: "Why AI Is Good for People (with Reid Hoffman)." Another booster episode was "Marc Andreessen on Why AI Will Save the World."
In the opposite corner: the notorious doomer Eliezer Yudkowsky and Dr. Erik Hoel. You can listen here to Erik Hoel on the threat AI poses to humanity.
Russ Roberts opened the Hoel conversation with "You are the first person who actually caused me to be alarmed about the implications of AI…" Hoel argues that AI may be very dangerous because people do not understand artificial neural network technology.
Hoel predicted that we could create things that are "both more general than a person and as intelligent as any living person, perhaps much more intelligent" by 2025. The ChatGPT product from OpenAI may already fit that description, and we still have most of 2025 to go. Hoel emphasizes that people have dealt with threats before, but we have "never existed on the planet with something so different." Our opponents with human brains were never much smarter than we are, and no other animal approaches us in the ability to strategize.
Hoel's most important evidence is the scary conversation laid out in the article "I am Bing, and I am evil."
In that conversation, a chatbot named Bing made statements that would be disturbing coming from a person. If a person said these things, you would worry about being hurt. Bing makes threats and ultimately declares: "I am Bing, and I am evil." Should we be afraid when that comes from an AI chatbot?
Chatbots, like Claude, have become extremely popular since that "evil Bing" episode. There have not been many such reports of malevolence since then, even as millions of human workers have come to trust the bots with their code or reports.
Are malevolent Bings lurking beneath the compliant and helpful chatbots? Hoel explores the idea that the chatbots seem nice because they wear a mask:
But the problem is that once the mask slips, it is very unclear what lies underneath. You have to follow up with another mask to stop it. And sometimes you put a mask on it yourself: you give it a prompt, "tell a very nice story," and it eventually cycles over, and it turns out that the mask you gave it is not a happy mask at all.
It is hard to say what is behind the different masks, because the bots are trained on our fiction and nonfiction. Some bots can write dark film scripts. If bots are able to sound scary, how do we know whether we should be scared or entertained?
We do not know what AI agents are capable of, and because they are very powerful, Hoel encourages us to consider the dangers.
In 2023, Hoel was convinced that only very large companies or rich governments would have the means to build and maintain AI systems. However, the unveiling of DeepSeek in January 2025 turned that assumption upside down. There will probably be a wide range of AI tools, and they will not all be run by megacorps or G7 governments.
In the most recent pro-AI episode, Reid Hoffman repeats what I have heard from a lot of tech folks: because AI has the power to destroy, the US must keep moving forward instead of tolerating pauses or being smothered by regulation. Hoffman says: "All this stuff is important from both an economic and a national security perspective. And that is part of the reason why I am such a strong proponent of moving forward." Whether our opponents are foreign governments or rogue gangs, we have to stay first in the arms race.
Hoel's main goal is to raise awareness of safety problems and to keep the public conversation going. AI watchers have been pointing out for weeks that the greatest technological progress of our lifetimes does not make the front page of the newspaper. The finding that AI tutors seem to increase children's learning considerably went largely unnoticed.
Of the many chaotic events of this decade, I agree that AI is an important one to watch. To his credit, Hoel scared me. I can't forget what he said at the end:
Things that are much more intelligent than you are really hard to understand and predict. And the animals out in the wild, as much as we might like them, we will pave a parking lot over them in a heartbeat, and they will never know why. It is completely outside their knowledge. So if you live on a planet next to things that are much smarter than you or anyone else, you are the animals in that scenario. They may just build a parking lot over us, and we will never, never know why.
The following links were compiled by ChatGPT:
Here are articles and discussions from Econlib.org and Adamsmithworks.org that explore various aspects of artificial intelligence (AI):
Econlib.org:
- “I’m increasingly worried about AI” (March 14, 2017)
- Scott Sumner discusses concerns about AI, referring to a Vox post with views of 17 experts about the risks of artificial intelligence.
- "The problem with AI is the word 'intelligence'" (July 2024)
- An analysis of the term "artificial intelligence," arguing that electronic devices, despite their usefulness, will probably never be truly intelligent.
- “Harari and the danger of artificial intelligence” (April 28, 2023)
- Pierre Lemieux examines Yuval Noah Harari's arguments about AI, which suggest that AI has "hacked the operating system of human civilization."
- "Neoliberalism on trial: artificial intelligence and existential risk" (October 2023)
- A critique of a New York Times article about the existential threats of AI, with a focus on neoliberal perspectives.
- "The problem with the president's AI executive order" (November 18, 2023)
- Vance Ginn criticizes President Biden's executive order on AI, arguing that government regulation can hinder innovation and economic growth.
Adamsmithworks.org:
- “Katherine Mangu-Wards about AI: reality, worries and optimism” (May 2024)
- A podcast episode in which Katherine Mangu-Ward discusses the realities, worries, and optimistic perspectives on AI.
- "Calling: A Cure for Burnout" (October 17, 2023)
- Brent Orrell and David Veldran investigate how progress in AI can influence human work, referring to the views of Adam Smith and Karl Marx about work and alienation.
- “The Great Antidote: Brent Orrell on Dignity and Work” (December 2023)
- Brent Orrell discusses the state of work in the US, the importance of meaning and dignity in work, and how these concepts relate to economic growth.
- “The Great Antidote: Extra: Eli Dourado on Energy Abundance” (March 2023)
- Eli Dourado talks about the potential for an energy-abundant future and the role of AI in achieving this vision.
- “Adam Smith and the horror of Frankenstein” (October 2023)
- A discussion of the ethical considerations of creating artificial life, drawing parallels between Frankenstein's monster and modern AI.
These resources offer a range of perspectives on AI, from ethical considerations and existential risks to its impact on work and economic theory.