AI’s Growing Role in the Climate Crisis
The numbers are hard to ignore. Generative AI models require massive amounts of electricity and computing power. A joint study by Carnegie Mellon University and Hugging Face estimates that data center energy use has jumped by 20% to 40% in recent years and now accounts for up to 1.3% of global electricity demand. Even BLOOM, a language model designed to be “energy-efficient,” generated 24.7 tons of CO₂ during its final training phase. That’s roughly equivalent to 25 round-trip flights between Paris and New York.
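As a quick sanity check on that comparison, here is a back-of-envelope calculation; the figure of roughly one ton of CO₂ per passenger for a Paris–New York round trip is an assumption for illustration, not a number from the studies cited above.

```python
# Back-of-envelope check of the "25 round-trip flights" comparison.
# Assumption (not from the cited study): ~1 ton of CO2 per passenger
# for a Paris <-> New York round trip.

BLOOM_TRAINING_CO2_TONS = 24.7      # reported emissions from BLOOM's final training phase
CO2_PER_ROUND_TRIP_TONS = 1.0       # assumed per-passenger round-trip footprint

equivalent_flights = BLOOM_TRAINING_CO2_TONS / CO2_PER_ROUND_TRIP_TONS
print(f"~{equivalent_flights:.0f} Paris-New York round trips")  # -> ~25
```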
And according to data scientist Alex de Vries, by 2027, AI servers could consume as much electricity as an entire country like Sweden.
In its August 2024 report, Impact AI warned that AI is placing increasing strain on power grids, data centers, and natural resources—especially water. Researchers at Cornell University project that AI-related water consumption could reach between 4.2 and 6.6 billion cubic meters by 2027.
Carbon Offsetting: Necessary Solution or Convenient Excuse?
In response, many companies are turning to carbon offsetting: investing in environmental projects to compensate for their emissions. These projects might include reforestation, ecosystem preservation, or carbon capture technologies.
Collective initiatives like Impact AI promote a more responsible model: track emissions, reduce them, and only then offset what remains. But relying solely on offsets has become increasingly controversial. As Deloitte puts it, planting trees should never be a license to pollute.
The problem is, not all offset projects are created equal. There are ongoing concerns about durability, double-counting, and the accuracy of emission measurements.
Structural Limits and the Need for More Accountability
Experts at Impact AI are clear: carbon offsetting should be a last resort, after every possible reduction effort has been made. And those reductions need to be planned early—ideally during the initial project design phase. According to the Responsible AI Briefs, companies should take a holistic approach that incorporates environmental concerns into governance, use case development, and team training.
Their roadmap, published in August 2024, includes several recommendations:
- Reuse existing models rather than training new ones from scratch.
- Explore non-AI alternatives when appropriate.
- Educate teams on low-impact, “frugal” AI practices.
- Measure environmental impact across all three emission scopes (a minimal tally sketch follows this list):
  - Scope 1: Direct emissions
  - Scope 2: Indirect emissions from purchased energy
  - Scope 3: Extended indirect emissions, including hardware manufacturing and end-of-life disposal
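To make the three scopes concrete, here is a minimal sketch of how a team might tally an AI project’s footprint scope by scope. The category names and figures are illustrative placeholders, not values from any report cited in this article.

```python
# Minimal sketch of a scope-by-scope carbon tally for an AI project.
# All figures are illustrative placeholders (tons of CO2-equivalent),
# not measurements from the reports cited in this article.

emissions_tco2e = {
    "scope_1": {"on_site_generators": 0.4},                      # direct emissions
    "scope_2": {"purchased_electricity_training": 12.0,          # indirect emissions from purchased energy
                "purchased_electricity_inference": 6.5},
    "scope_3": {"hardware_manufacturing": 9.0,                   # extended indirect emissions
                "hardware_end_of_life": 1.2},
}

for scope, items in emissions_tco2e.items():
    print(f"{scope}: {sum(items.values()):.1f} tCO2e")

total = sum(sum(items.values()) for items in emissions_tco2e.values())
print(f"total: {total:.1f} tCO2e")
```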
Standards are beginning to take shape. AFNOR SPEC 2314, for example, developed collaboratively by ADEME, ARCEP, HUB France IA, and Impact AI members, provides guidance on designing frugal, energy-efficient AI.
Companies Putting Principles into Practice
Some companies are already walking the talk. Schneider Electric, for instance, uses a rigorous methodology to compare carbon savings versus consumption in its AI deployments. Their results? On average, for every 100 units of carbon avoided, just 5 are consumed by AI.
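To illustrate what an avoided-versus-consumed comparison of that kind can look like, here is a hypothetical sketch; the numbers and the simple ratio are illustrative and do not reproduce Schneider Electric’s actual methodology.

```python
# Illustrative comparison of carbon avoided vs. carbon consumed by an AI deployment.
# The figures are hypothetical and do not come from Schneider Electric's methodology.

carbon_avoided_tco2e = 100.0   # e.g. savings from AI-optimized energy management
carbon_consumed_tco2e = 5.0    # e.g. footprint of training and running the model

net_benefit = carbon_avoided_tco2e - carbon_consumed_tco2e
ratio = carbon_avoided_tco2e / carbon_consumed_tco2e

print(f"net benefit: {net_benefit:.0f} tCO2e, avoided/consumed ratio: {ratio:.0f}:1")
```

In a real assessment, both sides of the ratio would need to cover all three emission scopes discussed above, not just the electricity used at run time.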
Crédit Agricole has earned both the LNE certification and the LabelIA Labs seal by integrating environmental and social responsibility into the design phase of its AI projects. Their AI team filters projects based on both business potential and environmental impact, deliberately limiting dataset sizes, optimizing inference, and avoiding oversized models when simpler solutions will do.
Beyond Offsetting: Building a Leaner AI Model
Real change doesn’t come from offsetting; it comes from reducing environmental impact at the source and rethinking how AI is used. At Diabolocom, this principle is already shaping the way AI is designed and deployed. The company focuses on developing lean, purpose-driven models rather than training oversized ones with unnecessary capacity. It prioritizes partners that run on renewable energy, minimizes redundant processing, and optimizes every stage of the AI project lifecycle.
This commitment isn’t just about green branding—it’s about proving that operational performance and environmental responsibility can go hand in hand.
Conclusion: Ethics, Accountability, and Transparency
Carbon offsetting can play a helpful role—but it must not distract from the real priority: reducing AI’s footprint at its source. That requires tracking, measuring, and sometimes saying no to energy-intensive approaches, even if they promise short-term gains.
Responsible AI isn’t about appearing sustainable—it’s about building ethical, transparent, and collectively accountable strategies. Organizations like Impact AI, Schneider Electric, Crédit Agricole, and Diabolocom are showing that it’s possible to lead with innovation and responsibility.
Discover our AI for call centers