Artificial intelligence produces emissions based on how much it “thinks”

We know that whatever we ask artificial intelligence (AI), it will give us an answer. But some answers carry a heavier environmental footprint and cause higher carbon dioxide emissions than others. That is the finding reported today by a team of researchers led by the Hochschule München University of Applied Sciences in Germany, who measured and compared the carbon dioxide emissions of several already-trained Large Language Models (LLMs) using a set of standardized questions. The study was published in the journal Frontiers in Communication.
“Thinking” Artificial Intelligence

Although many of us are not fully aware of it, these technologies come with a substantial environmental cost. To produce answers, artificial intelligence works with tokens, words or fragments of words that are converted into strings of numbers that LLMs, AI models designed specifically to generate human language, can process. This conversion, along with the rest of the processing, produces carbon dioxide emissions. “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said the paper's first author, Maximilian Dauner. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise-response models.”
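As a rough illustration of what tokenization means in practice, the sketch below uses the open-source tiktoken tokenizer to count the tokens in a short prompt, a concise answer, and a longer “reasoning-style” answer. This tokenizer is not one of the models examined in the study, and the example texts are invented; it only shows how text becomes numeric tokens and how a reasoning trace inflates the token count.

```python
# Hypothetical illustration of tokenization and token counts.
# Assumes the open-source "tiktoken" library (pip install tiktoken);
# the models tested in the study use their own tokenizers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "In which year did the French Revolution begin?"
concise_answer = "1789."
reasoning_answer = (
    "Let me think step by step. The French Revolution is usually dated "
    "from the storming of the Bastille, which took place in July 1789, "
    "so the answer is 1789."
)

for label, text in [("prompt", prompt),
                    ("concise answer", concise_answer),
                    ("reasoning answer", reasoning_answer)]:
    tokens = enc.encode(text)  # text -> list of integer token IDs
    print(f"{label}: {len(tokens)} tokens -> {tokens[:8]}...")

# A longer chain of "thinking" tokens means more forward passes through
# the model, and therefore more energy use and more CO2 per question.
```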
The cause of most emissions

To reach this conclusion, the team tested 14 LLMs, with between 7 billion and 72 billion parameters (which determine how they learn and process information), on 1,000 standardized questions spanning a variety of subjects. The reasoning models generated, on average, 543.5 “thinking” tokens (additional tokens that reasoning LLMs produce before giving an answer) per question, while the concise models needed only 37.7. The most accurate was the Cogito model, with 70 billion parameters, which reached 84.9% accuracy but also produced three times more CO2 emissions than similarly sized models that generated concise answers.
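A back-of-envelope calculation using the averages quoted above gives a sense of the gap. The sketch assumes, purely for illustration, that per-question generation cost scales roughly with the number of output tokens; actual emissions also depend on model size, hardware, and the data center's energy mix, which is why the study's worst case reaches a 50-fold difference.

```python
# Back-of-envelope comparison using the averages reported in the article.
# Illustrative assumption: generation cost scales roughly with output tokens.
thinking_tokens_reasoning = 543.5   # avg extra "thinking" tokens per question
answer_tokens_concise = 37.7        # avg tokens per question, concise models

ratio = thinking_tokens_reasoning / answer_tokens_concise
print(f"Reasoning models generate about {ratio:.1f}x more tokens per question")
# -> roughly 14x more generated tokens on average; differences in model size
#    and answer length push the CO2 gap even higher in the worst cases.
```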
“Currently, we see a clear trade-off between accuracy and sustainability inherent in LLM technologies,” Dauner commented. “None of the models that kept emissions below 500 grams of CO2 equivalent (a unit measuring the climate impact of different greenhouse gases, ed.) achieved better than 80% accuracy in answering the thousand questions correctly.” In addition, across subjects, questions that required lengthy reasoning, such as algebra or philosophy, led to emissions up to six times higher than simpler subjects, such as high school history.
A more conscious use

The findings of the new study therefore point to the need for more informed decisions about how we use AI. “Users can significantly reduce emissions by prompting AI to generate concise responses, or by reserving high-capacity models for tasks that genuinely require that power,” the expert advised, concluding that if we knew the true emissions cost of each use of artificial intelligence, such as generating a personalized action figure, we could be more selective and deliberate about when and how to use these technologies.
La Repubblica