AI Experts Say Current Tech Still Can’t Match Human Intelligence
Recent comments from Demis Hassabis, CEO of Google DeepMind, challenge the hype around AI reaching human-like intelligence. Hassabis, who shared the 2024 Nobel Prize in Chemistry for his AI work on protein-structure prediction, spoke at the "All-In" podcast summit and pushed back against claims that current AI systems have achieved "PhD-level" smarts. He explained that while AI has made impressive strides, it still lacks the core reasoning skills that top human scientists possess: the ability to draw connections across different fields and apply knowledge in creative ways, something AI has not yet mastered.
Why the AI Hype Might Be Overblown
Hassabis pointed out that many companies and industry leaders overestimate what AI can do today. Some competitors, most notably OpenAI, have claimed that their latest models, such as GPT-5, can match or even surpass human intelligence in specific areas. OpenAI's CEO, Sam Altman, said GPT-5 could hold conversations with "PhD-level" expertise on any subject, pointing to benchmark results and the model's usefulness on real-world questions. But Hassabis warns these claims are exaggerated: the technology, he says, remains far from being truly intelligent in the way humans are, especially in reasoning and understanding.
The Limitations of Today’s AI Systems
Hassabis emphasized that current AI models, including Google DeepMind's Gemini 2.5, are still prone to mistakes. These models often hallucinate facts or get basic math wrong, errors that genuinely human-level reasoning would rule out. For example, AI chatbots may confidently give false answers or stumble on high-school-level math problems. Hassabis argues that this reveals a significant gap in AI development, and he predicts it will take another five to ten years, along with a few key breakthroughs, to build systems that can truly reason like humans.
Why AI Still Has a Long Way to Go
Other experts agree that large language models cannot yet match top human researchers. Andreas Vlachos, a machine learning professor at the University of Cambridge, explained that these models are trained primarily to predict the next word in a sentence, not to reason or solve complex problems, and this core mismatch limits their ability to think critically or creatively. Hassabis believes that true artificial general intelligence (AGI), an AI that can think and learn like a human, remains a distant goal that will likely require further breakthroughs.
In the meantime, the AI industry remains optimistic but cautious. While models continue to improve and impress with their language skills, experts like Hassabis remind us that we are still far from AI that can match human reasoning, understanding, and adaptability. As AI advances, it’s important to keep expectations grounded and recognize the significant hurdles still ahead before machines can truly think like us.