OpenAI has always been great at grabbing attention in the news. Its announcements often come with big, bold claims. In 2019, for example, it announced GPT-2 but initially withheld the full model, saying it was too dangerous to release. Or consider last month’s “12 Days of OpenAI” campaign, in which it showcased a new product or feature every day for 12 days.
Now, Sam Altman has shared his reflections on the past year: “We are now confident we know how to build AGI as we have traditionally understood it.”
Artificial general intelligence (AGI) means an AI that’s as smart and as general as a human. Unlike narrow AI, which is built for specific tasks such as translating languages, playing chess or recognizing faces, AGI can handle any intellectual task and adapt across different domains.
Is AGI Near? No, At Least Not the AGI We (or Sam Altman) Imagine
I don’t think AGI is near. Today’s AI, like ChatGPT, works by recognizing patterns and making predictions, not by truly understanding. For example, completing the phrase “Life is like a box of…” with “chocolates” relies on probabilities, not reasoning. Ask an image model to draw a group of watches showing 3 minutes after 12, and every watch will show 10 minutes after 10. Why? Because that’s how watches look in marketing photos, which dominate the training data. AGI is different. (Thanks, Ned Block!)
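To see what “relies on probabilities” means in practice, consider this deliberately simplified sketch. The vocabulary and the numbers are invented for illustration and do not come from any real model:

```python
# Toy illustration (not a real language model): next-token prediction picks
# the statistically most likely continuation. Probabilities are made up.
next_token_probs = {
    "chocolates": 0.92,  # dominates the training data (Forrest Gump quotes)
    "surprises": 0.05,
    "puzzles": 0.02,
    "watches": 0.01,
}

prompt = "Life is like a box of"
completion = max(next_token_probs, key=next_token_probs.get)
print(prompt, completion)  # -> Life is like a box of chocolates
```

The model lands on “chocolates” because that continuation is overwhelmingly common, not because it understands anything about life or boxes.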
I don’t believe AGI will arrive in 2025, and many experts agree. Demis Hassabis, with whom I overlapped during my time at Google, predicts AGI could arrive around 2035. Ray Kurzweil estimates 2029, and Jürgen Schmidhuber, scientific director of IDSIA, suggests closer to 2050. The skeptics are many, and the timeline remains uncertain.
The AI Effect: AGI and the Moving Goalpost
Altman recently downplayed the “G” in AGI, saying, “My guess is we will hit AGI sooner than most people think, and it will matter much less.” Just this past Sunday, he added that AGI has “become a very sloppy term.” This sounds like moving the goalposts: if we don’t require the AI to be fully general, then yes, it might arrive sooner.
Some of my colleagues joke that AGI is “what we haven’t yet built,” because the definition of AGI shifts whenever AI systems master tasks once thought to require general intelligence. When TikTok’s algorithms recognized someone’s sexual orientation before they did, many were amazed. Today, we shrug that off as machine learning and pattern recognition.
Sam Altman and Reasoning
When Altman says, “We now know how to build AGI,” he is probably referring to OpenAI’s o1, a model designed to reason through an iterative, self-calling process. The approach has two dedicated steps:
- Iteration and Reflection: The model generates an output, evaluates or critiques it and refines it in a new round of reasoning.
- Feedback Loop: This creates a feedback loop in which the model revisits its outputs, critiques them and improves them further.
In essence, GPT with o1 doesn’t just provide answers: it plans, critiques the plan and continuously improves it, as the sketch below illustrates.
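To make the pattern concrete, here is a minimal sketch of such an iterate-and-critique loop. To be clear, this is an outside reading of the two steps above, not OpenAI’s published o1 implementation, and `ask_model` is a hypothetical placeholder for whatever LLM API you use:

```python
# Minimal sketch of an iterate-and-critique loop (an assumption about the
# pattern described above, not OpenAI's actual o1 internals).

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call."""
    raise NotImplementedError("wire this up to your model provider")

def reason_with_reflection(task: str, max_rounds: int = 3) -> str:
    draft = ask_model(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        # Iteration and Reflection: the model critiques its own output.
        critique = ask_model(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "List any errors or weaknesses in this draft."
        )
        # Feedback Loop: the critique feeds a new round of reasoning.
        draft = ask_model(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            f"Critique:\n{critique}\n\nWrite an improved answer."
        )
    return draft
```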
Is this enough? So far, a well-designed prompt, meaning one where a human writes the instructions, can still outpace OpenAI’s o1. That will certainly change and improve over time, and it will also allow OpenAI to push the next generation of models. Note the paradigm shift: OpenAI previously focused on ever-bigger models, and many of us wondered how far that scaling could go; now it is focused on “thinking longer,” meaning running this type of inference loop.
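Under the same assumptions as the sketch above, “thinking longer” amounts to raising the iteration budget, so answer quality scales with compute spent at inference time rather than with model size:

```python
# "Thinking longer" with the hypothetical loop above: a larger inference
# budget, not a bigger model. (Task text and round counts are illustrative.)
quick_answer = reason_with_reflection("Draft a product launch plan", max_rounds=1)
slow_answer = reason_with_reflection("Draft a product launch plan", max_rounds=8)
```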
Altman vs. Salesforce, Microsoft, Google and Amazon
OpenAI will face serious competition in 2025 from Salesforce, Microsoft, Google and Amazon, all of which would like to dominate the AI agent market. These competitors have access to closed datasets and established customer bases. Thus, OpenAI’s biggest hope is to be “best in class”: it is betting on its ability to outperform everyone else with superior technology. That’s why Altman confidently claims we’ll see AGI.