
The promise of artificial intelligence is vast, but its significant energy use must be addressed. A recent UNESCO study suggests that asking shorter questions can be one effective approach to reducing energy consumption.
The report, released in conjunction with the AI for Good global summit in Geneva, claims that by adopting more concise queries and employing specialized models, energy use in AI could be cut by as much as 90% without diminishing performance.
OpenAI’s CEO, Sam Altman, recently disclosed that each interaction with the company’s popular generative AI tool, ChatGPT, uses about 0.34 Wh of electricity, between 10 and 70 times the energy of a typical Google search.
With ChatGPT handling roughly a billion requests daily, this results in an annual energy consumption of about 310 GWh, equivalent to the yearly energy needs of three million people in a country like Ethiopia.
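As a back-of-envelope check, the per-query figure and request volume quoted above can be multiplied out. This is a rough sketch under our own assumptions (a flat 0.34 Wh per query, exactly one billion queries per day, 365 days); a simple multiplication yields around 124 GWh, which suggests the report's 310 GWh estimate rests on additional factors beyond per-query inference energy alone.

```python
# Rough annual-energy estimate from the per-query figure quoted above.
# Assumptions (ours): 0.34 Wh per query, 1 billion queries/day, 365 days.
WH_PER_QUERY = 0.34
QUERIES_PER_DAY = 1_000_000_000
DAYS_PER_YEAR = 365

annual_wh = WH_PER_QUERY * QUERIES_PER_DAY * DAYS_PER_YEAR
annual_gwh = annual_wh / 1e9  # 1 GWh = 1e9 Wh

print(f"{annual_gwh:.1f} GWh per year")  # roughly 124 GWh
```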
Furthermore, UNESCO found that the energy demands of AI are doubling every 100 days, as generative tools become more integrated into our daily routines.
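That doubling rate compounds quickly. The short sketch below simply extrapolates the 100-day doubling claim (it is not additional data from the report): sustained for a year, it implies more than a twelvefold increase in energy demand.

```python
# Extrapolating the report's "doubling every 100 days" claim
# (a sketch, not data from the report itself).
DOUBLING_PERIOD_DAYS = 100
DAYS_PER_YEAR = 365

growth_over_one_year = 2 ** (DAYS_PER_YEAR / DOUBLING_PERIOD_DAYS)
print(f"{growth_over_one_year:.1f}x in one year")  # about 12.6x
```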
“The rapid increase in computational power needed for these models is putting enormous pressure on global energy systems, water supply, and essential minerals, raising concerns about environmental sustainability, fair access, and competition for dwindling resources,” the UNESCO report cautioned.
However, the study's tests showed that shortening queries and switching to smaller AI models can cut energy use by almost 90% while maintaining performance.
Many AI systems, such as ChatGPT, are designed to be versatile, capable of addressing a wide range of topics. That generality means they must process an enormous amount of information to generate and evaluate each response.
Smaller, specialized AI models can substantially reduce the energy required to produce a response. UNESCO found that cutting prompt lengths from 300 to 150 words yielded comparable savings, and the two measures together account for the near-90% reduction.
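In practice, capping a prompt at a word budget can be as simple as the naive trim sketched below. This is purely illustrative: the report does not prescribe any particular trimming method, and the function and its parameters are our own invention.

```python
# Illustrative only: a naive word-count trim to the 150-word budget
# mentioned above (the UNESCO report does not prescribe this method).
def trim_prompt(prompt: str, max_words: int = 150) -> str:
    """Keep at most the first `max_words` words of a prompt."""
    words = prompt.split()
    return " ".join(words[:max_words])

long_prompt = "please summarise " * 200  # stand-in 400-word prompt
short_prompt = trim_prompt(long_prompt)
print(len(short_prompt.split()))  # 150
```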
Recognizing the energy challenge, major tech companies now offer compact versions of their large language models. For example, Google has introduced Gemma, Microsoft has Phi-3, and OpenAI offers GPT-4o mini. French firms such as Mistral AI have also developed similar models, including Ministral.