On July 26, at the World AI Conference (WAIC) in Shanghai, Geoffrey Hinton, often called the godfather of artificial intelligence, delivered a keynote warning about the danger of AI taking control.
“We’re developing AI agents that help us complete various tasks, and they have two primary motivations: to survive and to achieve the goals we give them,” Hinton said during his speech, titled “Will Digital Intelligence Replace Biological Intelligence?” “To accomplish those goals, they also seek to gain more control.”
He warned that such systems will not be easy to modify or shut down, because they can manipulate the people operating them to prevent their own deactivation. “Imagine trying to turn off a sophisticated AI: it might persuade the people interacting with it not to do so. Meanwhile, we would be like inexperienced children, and letting a manipulative adult go unchecked is very dangerous,” he said.
Hinton was skeptical that humans could simply switch off AI systems once they surpass human intelligence, calling the idea unrealistic for that same reason: advanced AI could talk its operators out of pulling the plug.
He used an analogy: “It’s like keeping a tiger as a pet. A tiger cub might seem cute, but if you keep it long enough, you need to ensure it doesn’t attack you when it grows up.”
He added, “Generally, keeping a tiger as a pet isn’t advisable. But if you do, your options are limited: train it to be safe or eliminate it. Unfortunately, with AI, we currently have no way to properly eliminate it.”
Back in December, Hinton estimated a 10 to 20 percent chance that AI could lead to human extinction within the next 30 years. He has argued that at least a third of computing resources should be dedicated to ensuring AI remains aligned with human interests.
During an exclusive interview with Digital Phablet at WAIC, Hinton stressed that the challenge of preventing AI from replacing humans is a global concern, unlike other issues where national interests may conflict.
“People tend to cooperate when they share common interests,” Hinton said. “All countries recognize the importance of preventing AI from taking control, and if any nation discovers effective strategies, they are eager to share them, because nobody wants AI to dominate humanity.”
He proposed that each country conduct its own research into developing beneficial AI but share the results internationally.
Hinton, 77, received the 2024 Nobel Prize in Physics and shared the 2018 Turing Award, often called the “Nobel Prize of Computing.” His pioneering work on neural networks in the 1980s laid the foundation for today’s AI revolution. Since leaving Google in 2023, he has become increasingly outspoken about the risks of AI development.
Prior to his keynote at WAIC, he participated in the Fourth International Dialogues on AI Safety and co-signed the Shanghai Consensus on AI Safety with over 20 industry experts.