Misinformation by Omission

In recent years, AI models have grown in size and complexity, driving greater demand for computational power and natural resources. The escalating demand for figures quantifying AI's environmental impacts has allowed inaccurate or de-contextualized best-effort estimates of greenhouse gas emissions to evolve into widespread misinformation.


In this article, the authors explore pervasive myths and misconceptions shaping public understanding of AI’s environmental impacts, tracing their origins and their spread in both the media and scientific publications.

They find that frequently cited and misrepresented metrics, such as the claim that a single ChatGPT request uses approximately 3 watt-hours (Wh) of energy, "ten times more than a Google search", circulate widely in the press, in industry reports, and even in scientific articles.
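To see how a single de-contextualized per-request figure can balloon into headline numbers, here is a back-of-envelope sketch. Both inputs are assumptions for illustration: the 3 Wh figure is the widely quoted (and, per the authors, questionable) estimate discussed above, and the daily query volume is purely hypothetical, not a figure from the paper.

```python
# Back-of-envelope extrapolation from a single per-request estimate.
# Both values are illustrative assumptions: 3 Wh is the disputed
# estimate discussed in the article; the query volume is hypothetical.
WH_PER_REQUEST = 3.0       # disputed per-request energy estimate (Wh)
REQUESTS_PER_DAY = 100e6   # hypothetical daily query volume

daily_kwh = WH_PER_REQUEST * REQUESTS_PER_DAY / 1000   # Wh -> kWh
annual_gwh = daily_kwh * 365 / 1e6                     # kWh -> GWh

print(f"{daily_kwh:,.0f} kWh/day, {annual_gwh:.1f} GWh/year")
```

The arithmetic is trivial, which is exactly the point: a small, uncertain per-request estimate multiplied by an assumed usage figure yields an impressive-sounding annual total whose error bars are rarely reported alongside it.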


They also propose ways to improve environmental impact disclosures in AI: comprehensive measurement and disclosure by AI developers, integration of AI's environmental impacts into sustainability accounting frameworks and corporate sustainability disclosures, and clear regulatory requirements set by policymakers.

Check out the full paper for more details.

By: Sasha Luccioni, Boris Gamazaychikov, Theo Alves da Costa, and Emma Strubell