OpenAI has launched its latest artificial intelligence model, the o1 series, which the company claims has human-like reasoning capabilities.
In a recent blog post, ChatGPT’s creator explained that the new model spends more time thinking before responding to questions, allowing it to tackle complex tasks and solve more difficult problems in areas like science, coding, and math.
The o1 series is designed to simulate a more deliberate thought process, refining its strategies and recognizing mistakes much as a human would. OpenAI Chief Technology Officer Mira Murati described the new model as a significant leap in AI capabilities and predicted it will fundamentally change the way people interact with these systems. “We will see a deeper form of collaboration with technology, similar to a back-and-forth conversation that supports reasoning,” Murati said.
While existing AI models are known for their fast, intuitive responses, the o1 series introduces a slower, more thoughtful approach to reasoning, resembling human cognitive processes. Murati expects the model will spur progress in areas such as science, healthcare and education, where it can help explore complex ethical and philosophical dilemmas, as well as abstract reasoning.
Mark Chen, vice president of research at OpenAI, noted that early testing by programmers, economists, hospital researchers and quantum physicists has shown that the o1 series performs better at problem solving than previous AI models. According to Chen, an economics professor noted that the model could probably solve a PhD-level exam question “better than any student.”
However, the new model does have limitations: its knowledge base extends only to October 2023, and it cannot yet browse the web or accept file and image uploads.
The launch comes amid reports that OpenAI is in talks to raise $6.5 billion at a staggering $150 billion valuation, with potential backing from major players like Apple, Nvidia and Microsoft, according to Bloomberg News. This valuation would put OpenAI well ahead of its competitors, including Anthropic, recently valued at $18 billion, and Elon Musk’s xAI at $24 billion.
The rapid development of advanced generative AI has raised concerns among governments and technologists about its broader societal implications. OpenAI itself has faced internal criticism for prioritizing commercial interests over its original mission to develop AI for the benefit of humanity. Last year, CEO Sam Altman was temporarily ousted by the board over concerns that the company was drifting from its original goals, an event internally referred to as “the blip.”
In addition, several safety executives, including Jan Leike, have left the company, citing a shift in focus from safety to commercialization. Leike warned that “building smarter-than-human machines is an inherently dangerous endeavor,” and expressed concern that the safety culture at OpenAI had been sidelined.
In response to these criticisms, OpenAI has announced a new safety training approach for the o1 series, using the model’s enhanced reasoning capabilities to improve compliance with its safety and alignment guidelines. The company has also signed agreements with AI safety institutes in the US and Britain, giving them early access to research versions of the model to strengthen joint work on safe AI development.
As OpenAI moves forward with its latest innovations, the company aims to balance the pursuit of technological advancement with a renewed commitment to safety and ethical considerations in the deployment of AI.