Developers of artificial intelligence models should prioritize safety over racing to develop increasingly opaque, advanced technologies, according to Stuart Jonathan Russell, a prominent British computer scientist.
“I believe that focusing on AI safety is critically important,” the University of California, Berkeley professor stated during an interview at the three-day World Artificial Intelligence Conference in Shanghai, which wrapped up today.
“If we can move away from the arms race mentality—where everyone is racing to be the first to create superintelligent machines—we can shift our focus toward safety. It’s essential to maintain control over our AI systems so we can trust them to act securely for the long-term benefit of humanity. That’s the only technology with genuine value,” he emphasized.
Russell, who co-authored the textbook Artificial Intelligence: A Modern Approach, has long warned about the potential dangers of AI. In 2023, he joined thousands of other tech figures, including Tesla CEO Elon Musk and Turing Award winner Yoshua Bengio, in signing an open letter urging all AI labs to pause the development of systems more powerful than GPT-4 for at least six months, so that safety protocols could be developed to guard against catastrophic AI-driven risks.
At the conference, Russell also expressed concern over the vast scale of current investment in AI, warning that if the technology fails to deliver rapid economic returns, the bubble could burst. He pointed out that if artificial general intelligence (AGI) reaches the point where it can replace most forms of intellectual labor, many well-educated workers could face unemployment.
In the near term, society could struggle to adapt if AI erodes the incentives that have underpinned social stability for centuries. In the long run, however, humanity may develop new social frameworks and educational systems to cope with these changes.