Sustainability and AI

Have you ever heard that each LLM prompt you write uses a bottle of water?

Well, thankfully for the planet, it doesn’t really. But environmental impact is still an important part of your organization’s overall footprint. In this post, we will review the environmental harms of LLM use, how to reduce them, and how LLMs can help make your organization more environmentally friendly.

Environmental Impact of LLMs

Day-to-day LLM use isn’t the main culprit. In terms of electricity or water consumption, a single LLM query is in the ballpark of a Google search or printing a page—and is dwarfed by routine office activities like video calls or making coffee. Forget traveling into the office: a 10-mile commute in a hybrid car blows all of these activities out of the water.

Beware viral myths. The “one query = one bottle of water” meme misreads the source; a more accurate average is ~29.6 queries per bottle, and newer models are notably more efficient than the GPT-3 era the paper analyzed. (Math available here).
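
For a rough sense of what that figure implies, here is a back-of-the-envelope sketch (not the original post’s math). The ~29.6 queries-per-bottle average is from the linked post; the 500 ml bottle size is an illustrative assumption, and real per-query water use varies by model, data center, and season.

```python
# Back-of-the-envelope check of the "queries per bottle" figure.
# Assumptions (illustrative only): a 500 ml bottle and the ~29.6
# queries-per-bottle average cited in the linked post (a GPT-3-era estimate).
BOTTLE_ML = 500.0
QUERIES_PER_BOTTLE = 29.6

ml_per_query = BOTTLE_ML / QUERIES_PER_BOTTLE
print(f"Implied water use: ~{ml_per_query:.0f} ml per query")  # ~17 ml: a sip, not a bottle
print(f"Queries per 'bottle': ~{QUERIES_PER_BOTTLE:.0f}")
```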

Where the real impact hides:

  • Training. Although it is spread out over many, many queries, the bulk of the estimated water and electricity use for LLMs comes from training. Reducing the number of queries you run will not change this number.

  • Non-LLM AI and other cloud technologies. Resource use is a problem of data centers, which are not exclusive to LLMs. If we are going to try to reduce the environmental footprint of our technology, we need to zoom way out. Speaking of Zoom, your video calls probably use more environmental resources than your LLM use does!

  • Always-on systems (AI systems that analyze in real time) consume energy around the clock. If you have fleet tracking, security monitoring, investment dashboards, or any other always-on system, consider whether less frequent updating would meet your needs (e.g., updating hourly during work hours, or only when someone makes a request); see the sketch below for one way to do this.
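
One low-effort pattern, sketched below under stated assumptions (a hypothetical refresh_dashboard() job and a 9-to-5, weekdays-only window), is to refresh on a schedule instead of continuously:

```python
# Minimal sketch: turn an "always-on" refresh into an hourly, work-hours-only job.
# refresh_dashboard() is a hypothetical stand-in for your own update logic.
import time
from datetime import datetime

WORK_START, WORK_END = 9, 17  # assumed 9am-5pm window; adjust to your needs

def refresh_dashboard():
    print(f"Refreshed at {datetime.now():%H:%M}")  # replace with real work

while True:
    now = datetime.now()
    if now.weekday() < 5 and WORK_START <= now.hour < WORK_END:  # weekdays, work hours only
        refresh_dashboard()
    time.sleep(60 * 60)  # wake roughly once an hour instead of streaming continuously
```

In practice you would likely hand this to your platform's scheduler (cron or its cloud equivalent) rather than running a loop, but the principle is the same: compute on a schedule or on demand, not continuously.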

How to reduce the environmental harm of your LLM use

Start systemic:

  • Ask current and prospective vendors about their environmental practices. Data centers can be powered by renewable energy sources and cooled by re-used or non-potable water.

  • Advocate with elected officials: improved environmental reporting and stricter regulations could reduce the negative impacts of data centers.

 

Your AI use:

  • Start new chats early and often. Chat interfaces generally send the entire conversation back through the model each time you add a prompt, so long chats cost more with every turn. Starting a new chat with a summary of the previous context, rather than its entirety, can get you most or all of the benefit while using fewer resources (see the sketch after this list).

  • Use the smallest model that works for your purposes. Models with “mini” or “nano” in the name are smaller and use less computing power than full-sized models, while “thinking” or “deep research” modes use more. Smaller models are not as good at complex tasks, though; a “thinking” model (e.g. ChatGPT5 Thinking or Claude Opus) may save you several prompts and false starts if your task is complicated.

  • Consider your comparison. If a particularly long LLM prompt used more resources than a Google search but replaced a dozen searches’ worth of work, you’ve still saved resources.
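
As a concrete illustration of the “summarize, then start fresh” tip above, here is a minimal sketch. It assumes the OpenAI Python SDK and an example small model name (gpt-4o-mini); the same pattern works with any provider, or by hand in a chat interface.

```python
# Minimal sketch: compress a long chat into a summary, then start a new chat
# that carries only the summary instead of the full history.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is an example, not a recommendation.
from openai import OpenAI

client = OpenAI()

long_chat = [
    {"role": "user", "content": "…earlier prompts…"},
    {"role": "assistant", "content": "…earlier answers…"},
]

# 1) Ask a small model to compress the old thread once.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=long_chat + [{
        "role": "user",
        "content": "Summarize the key facts and decisions above in one short paragraph.",
    }],
).choices[0].message.content

# 2) Start a fresh chat that carries only the summary, not the whole history.
fresh = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Context from a previous conversation: {summary}"},
        {"role": "user", "content": "…your next question…"},
    ],
)
print(fresh.choices[0].message.content)
```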

How AI can lower your footprint while advancing the mission

  • Operational efficiency: Smart building AI can trim facility energy by ~10–20% via predictive controls and automation.

  • Replace higher-emission activities: Better remote collaboration and equipment diagnostics can reduce travel; document AI reduces paper and physical handling; predictive maintenance extends equipment life.

  • Lightweight environmental inventory: You might not have the resources for a thorough audit of your organization’s environmental practices, but you may be able to give an LLM enough context about your activities, mission, priorities, and staff capacity for it to suggest high-impact changes that would reduce your footprint.

Quick tip: Have an LLM interview you about your job, department, or organization, for example by asking, “I work in [context] and I am interested in reducing the carbon footprint of our work. Please ask me questions one at a time that will help you identify high-impact changes we can make to improve our environmental impact.” This will help you get out of your own perspective and might give you some new ideas for activities you can eliminate or change to reduce your impact with less effort.

Practical guardrails for mission-driven teams

  1. Do consider environmental impact before implementing new AI systems
    Consider lifecycle (hardware, storage), usage patterns (on-demand vs. continuous), and how impact scales across the organization. Compare the AI approach against the non-AI alternative you’d otherwise use, and weigh how much value it actually adds.

  2. Right-size and de-duplicate
    Avoid over-engineering (using heavy models when smaller models or simpler automation would do), consolidate overlapping tools, and prefer intermittent updates when “always-on” isn’t essential.

  3. Pick vendors like you pick partners
    Ask about renewable energy use, data center cooling, offsets, and transparency. When considering a new AI implementation, consider solutions from vendors you already work with: fewer vendors often means less redundant infrastructure.

  4. Mind the optics and be transparent
    Stakeholders will notice visible waste, mission misalignment, and silence. Communicate about your decisions, how they were made, and their impact.

  5. Budget correctly
    Include environmental impact in your total cost of ownership; review quarterly for “environmental backfires” if use creeps up over time.

Strategy note: aim your effort where it counts

For most orgs, AI is a small slice of the total environmental pie; prioritize your largest impacts first and keep mission alignment front-and-center. The goal isn’t “no AI”—it’s net positive impact.

Want the numbers and citations?

I break down the energy/water math and the “bottle of water” meme here: Ethics & LLMs: Sustainability

For educators and ed-tech leaders, I discuss classroom-specific trade-offs in my chapter for The Educators’ 2026 AI Guide.

Disclosure: For this post, I fed a “Thinking” LLM the detailed post from my blog and the URL for this blog, and requested a high-level look at LLMs and sustainability that cites the original, more detailed post :)
