In a recent interview, the renowned AI researcher Demis Hassabis shed light on the future of artificial general intelligence (AGI), emphasizing that simply enlarging a model’s context window is not enough. He argued that achieving true AGI requires systems capable of continuous learning and persistent memory.
Hassabis pointed out that current models largely depend on processing ever-larger amounts of data within a single context. While this approach has driven impressive advances on narrow AI tasks, it falls short of enabling machines to understand and adapt the way humans do. “A focus on just increasing the context window is akin to giving a student a bigger textbook, but not necessarily helping them retain or apply the knowledge over time,” he explained.
He emphasized that the next significant step in AI’s evolution is integrating mechanisms that allow models to learn continuously from new experiences, much as humans accumulate knowledge over years. This would require memory frameworks that can store, recall, and update information dynamically, rather than relying solely on static training data.
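To make the store/recall/update idea concrete, here is a minimal, purely illustrative Python sketch of what such a dynamic memory interface might look like. The class name, the word-overlap recall heuristic, and the example interaction are assumptions for illustration only; they do not describe any system Hassabis or DeepMind has built.

```python
from dataclasses import dataclass, field
from time import time


@dataclass
class MemoryEntry:
    """A single stored experience, timestamped for recency."""
    key: str
    content: str
    created_at: float = field(default_factory=time)


class PersistentMemory:
    """Hypothetical store/recall/update interface for a continually learning agent.

    Recall here scores entries by naive word overlap with the query; in a real
    system this would be replaced by a learned retrieval mechanism.
    """

    def __init__(self) -> None:
        self._entries: dict[str, MemoryEntry] = {}

    def store(self, key: str, content: str) -> None:
        """Add a new experience to memory."""
        self._entries[key] = MemoryEntry(key, content)

    def recall(self, query: str, top_k: int = 3) -> list[MemoryEntry]:
        """Return the entries whose content overlaps most with the query."""
        query_words = set(query.lower().split())
        scored = sorted(
            self._entries.values(),
            key=lambda e: len(query_words & set(e.content.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

    def update(self, key: str, new_content: str) -> None:
        """Revise an existing memory in place instead of retraining the model."""
        if key in self._entries:
            self._entries[key].content = new_content
        else:
            self.store(key, new_content)


# Example: the agent accumulates and later revises knowledge across interactions.
memory = PersistentMemory()
memory.store("paris_trip", "The user asked about travel to Paris in spring.")
memory.update("paris_trip", "The user now plans to visit Paris in autumn instead.")
print([e.content for e in memory.recall("When is the Paris trip?")])
```

The point of the sketch is the interface, not the implementation: knowledge changes by writing to and revising an external memory at interaction time, rather than by altering fixed model weights.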
The researcher underscored that such memory-centric architectures are essential to bridging the gap between narrow AI capabilities and the flexible, adaptable intelligence seen in humans. “We need to move beyond the current paradigm, which is largely reactive and limited to a fixed context, toward one that fosters ongoing learning and persistent memory,” said Hassabis.
In conclusion, the expert’s insights suggest that the journey toward genuine AGI hinges not only on computational power but also on innovative approaches to learning and memory. He remains optimistic that breakthroughs in these areas will eventually unlock the full potential of artificial intelligence, paving the way for machines that can think, learn, and adapt over time, much as humans do.


