Feature

Q&A with SEI’s William Babis: the promise and peril of AI

part of AI and SEI
On 4 June, the National Academy of Sciences hosted the Artificial Intelligence for Sustainability workshop, where SEI Scientist William Babis presented on the impact of AI on supply chains. Here, he discusses the potential environmental benefits and risks of AI.

William Babis / Published on 23 July 2025

UN Secretary-General António Guterres called for lower-carbon data centre development in his major 22 July address on climate change as the 2025 United Nations Climate Change Conference draws closer.

As AI technologies continue to grow and improve, they have immense potential to drive critical innovations in global decarbonization efforts. However, these technologies also introduce significant social, environmental, and climate-related risks. From 2020 to 2024, the total carbon footprint of AI systems reached up to 100 million tonnes of carbon dioxide equivalent per year, or roughly the annual emissions of the Czech Republic, a country of 10.7 million people. Meanwhile, the energy intensity of individual queries continues to increase, as does the water use of the data centres that power AI.

What are some of the aspects of AI’s supply chain that we need to be concerned about?

I would say the conversation must evolve from, “Is AI good or bad?” to, “Where are the risks and how can we mitigate them?” because there are certainly risks. AI’s carbon footprint is small relative to global greenhouse gas (GHG) emissions, but it is growing at an alarming rate. Users, governments, academia, and civil society are not really doing enough to hold companies accountable. Major AI companies are withholding emissions and energy-use data. Many are no longer on track to meet their decarbonization targets. We are all well aware of the significant water risks, and yet massive AI data centres continue to be built in water-scarce regions. The mining of materials, the manufacturing of AI infrastructure, and the disposal of e-waste all impose social and environmental impacts on lower-income countries. We can and should develop an ethical framework for AI companies and refuse to use tools that don’t comply. There are significant changes AI companies can make to reduce their carbon, ecological, and social impact while still innovating rapidly and providing meaningful benefit to people around the world.

At the Artificial Intelligence for Sustainability workshop held by the National Academy of Sciences, you spoke about how AI converges with supply chain issues. How can AI help address these issues and what relevant tools has SEI built?

Generally, AI is being used to build a global map of every step of the supply chain for every agricultural, consumer and industrial product. In many cases, there is a dearth of accessible data; AI can be used to build predictive algorithms that can identify where materials are being sourced from along with the social and environmental conditions of these locations.
In some cases, the relevant supply chain data exists, but it is difficult to access. Furthermore, building a global map of supply chains requires data in a consistent and comparable format. Given the wide variety of products and geographies sourcing those products, the data is rarely cleaned and ready to be queried as a whole. AI can help aggregate these diverse data sources into collective knowledge platforms to make this information accessible and useful to governments, companies, and consumers who want to improve the sustainability of their supply chains. These global maps can also help stakeholders identify more sustainable sourcing options.
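As a toy illustration of the harmonization step described above, the sketch below merges two hypothetical supply chain records with different field names and units into one comparable schema. Every dataset name, field, and number here is invented for illustration; real pipelines (AI-assisted or otherwise) deal with far messier data.

```python
# Toy sketch: harmonizing supply chain records from two hypothetical
# sources into one comparable schema. All names and values are invented.

RECORDS_SOURCE_A = [  # e.g. a national export registry, volumes in tonnes
    {"commodity": "Soy", "origin": "BR-MT", "volume_t": "1200"},
]
RECORDS_SOURCE_B = [  # e.g. a company disclosure, volumes in kilograms
    {"product": "soybeans", "region": "BR-MT", "weight_kg": 350_000},
]

# Map source-specific commodity names onto one shared vocabulary.
COMMODITY_ALIASES = {"soy": "soy", "soybeans": "soy"}

def normalize_a(rec):
    return {
        "commodity": COMMODITY_ALIASES[rec["commodity"].lower()],
        "region": rec["origin"],
        "tonnes": float(rec["volume_t"]),
    }

def normalize_b(rec):
    return {
        "commodity": COMMODITY_ALIASES[rec["product"].lower()],
        "region": rec["region"],
        "tonnes": rec["weight_kg"] / 1000,  # kg -> tonnes
    }

def merge(records):
    """Aggregate tonnes per (commodity, region) across all sources."""
    totals = {}
    for rec in records:
        key = (rec["commodity"], rec["region"])
        totals[key] = totals.get(key, 0.0) + rec["tonnes"]
    return totals

records = [normalize_a(r) for r in RECORDS_SOURCE_A] + \
          [normalize_b(r) for r in RECORDS_SOURCE_B]
print(merge(records))  # {('soy', 'BR-MT'): 1550.0}
```

The normalization functions stand in for what is, in practice, the hard part: inferring which fields and units correspond across thousands of inconsistent sources.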

There are several AI-powered projects at SEI contributing to these efforts. One project, Trase, maps the quantity of major agricultural commodities produced around the world at the national, subnational and municipal level while tracking their contributions to deforestation. The underlying data is increasingly derived with the help of AI. For example, AI uses satellite imagery to identify new soy production facilities or recently deforested lands. AI can also read recent company announcements to determine the location of new large-scale corn production.

Another SEI project, LeadIT, tracks green steel projects around the world and other innovations in sustainable industrial activities. Their data platforms now incorporate an AI-powered newsfeed processing tool to find new relevant green steel projects and feed the necessary information into their database.


As a research organization, how is SEI’s approach to AI unique?

Although we use commercial AI engines like Google’s Gemini and OpenAI’s ChatGPT, our tools are particularly focused on mitigating the inherent biases and inaccuracies that plague these models. These AI tools present a great opportunity to enhance the productivity of our researchers and amplify our positive impact.

However, AI tools, when used improperly, can also compromise the integrity of our work. We are very intentional about how these tools are used to ensure the proper amount of human oversight and sufficient accuracy. In many cases, we’ve found that our AI tools have been more accurate with long and tedious research tasks than researchers have been.

Aside from the AI-powered tools that SEI has built, our organization has developed several publications exploring the ethical implications of AI and how it can be used responsibly to advance our mission.

How can companies and governments be held accountable for the social and environmental impact of AI?

It’s worth emphasizing that there’s so much uncertainty right now about the impact of AI. However, there doesn’t need to be. Companies are almost certainly already tracking AI’s energy, water, carbon and hardware footprint. The fact that they aren’t sharing this data in their annual sustainability reporting is quite alarming to me.

It’s clear, nonetheless, that the energy demand from AI data centres is rapidly increasing. One of the most commonly cited estimates suggests that data centres accounted for around 4.4% of total US electricity consumption in 2023 – which, in AI years, is quite long ago – and that this share could roughly triple to 12% by 2030. For the first time in decades, US electricity demand is rising, and much of this added demand comes from new data centres. This risks making the already challenging task of decarbonizing the energy grid that much harder.

That said, estimates suggest that data centres generally – including AI use – still account for only about 1.5% of global electricity consumption and even less of the world’s GHG emissions. Many proponents of AI presume that it can drive innovation and efficiency gains across the economy that would produce a far greater decrease in global GHG emissions.

However, the growth rates are hard to ignore. This industry is in its infancy: ChatGPT has been a household name for less than three years, yet AI already accounts for around 1% of global electricity consumption. All of the estimates are based, at least in part, on historical data, and we’ve already seen the energy footprint of AI evolve rapidly. Estimates of the share of AI’s greenhouse gas emissions that comes from sourcing and manufacturing its hardware shifted from around half of AI’s overall emissions last year to closer to 10% this year. This is due, in large part, to the exponentially increasing complexity of training models and, in more recent innovations, to the time an AI model spends “thinking” about a problem when it is used (known as inference).

Meanwhile, the relative energy use of this inference phase appears to be rapidly increasing. The unpredictable nature of AI innovation makes it very likely that existing forecasts severely underestimate the energy and carbon footprint of AI. This only reinforces the importance of establishing data and algorithmic transparency as the norm across the AI industry rather than the exception.

Aside from data transparency, there are many meaningful actions AI companies can take to minimize their negative social and environmental impacts. First, data centres should curb their demand for electricity when local energy grids are at peak demand. If data centres reduce their electricity demand during just the 1% of hours each year when the grid approaches peak capacity, they can significantly reduce their strain on local grids with negligible impact on the service they provide to users.
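A back-of-envelope sketch of that 1%-of-hours idea, using entirely made-up numbers: the grid profile below is random noise, and the 500 MW data-centre draw is hypothetical. The point is only that curtailing during the top 1% of load hours forgoes about 1% of annual energy.

```python
# Toy sketch: identify the top 1% of hours by grid load in a year and
# compute how much data-centre energy curtailing during only those
# hours would forgo. All numbers are illustrative, not real grid data.
import random

random.seed(0)
HOURS_PER_YEAR = 8760
# Hypothetical hourly grid load (MW); a real profile has daily and
# seasonal structure, but any profile works for this arithmetic.
grid_load_mw = [random.uniform(20_000, 40_000) for _ in range(HOURS_PER_YEAR)]
dc_load_mw = 500  # hypothetical constant data-centre draw (MW)

# The curtailment window is the top 1% of hours by grid load.
n_peak = HOURS_PER_YEAR // 100  # 87 hours
peak_hours = sorted(range(HOURS_PER_YEAR),
                    key=lambda h: grid_load_mw[h],
                    reverse=True)[:n_peak]

curtailed_mwh = dc_load_mw * len(peak_hours)
annual_mwh = dc_load_mw * HOURS_PER_YEAR
print(f"Curtailment window: {len(peak_hours)} hours")
print(f"Energy forgone: {curtailed_mwh / annual_mwh:.2%} of annual use")
```

In practice the data centre would shift work (e.g. deferrable training jobs) rather than shut down, so even this small fraction overstates the service impact.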

As discussed before, data centres are water-intensive facilities, requiring water to keep servers and surrounding equipment cool. Nonetheless, they continue to be placed in arid regions where water scarcity risks are high. This threatens water accessibility for neighboring households and farmers. In the absence of political or consumer pressure, AI companies will likely continue to be negligent in the siting of new data centres.

Additionally, the more general an AI model is, the less energy-efficient it tends to be; smaller, more specialized models are more efficient. Replacing the current general-purpose AIs with a paradigm of more specialized AI tools catered to specific user groups would improve energy efficiency.


AI also influences international relations and global trade relationships. Can you tell us a little bit about how it can disrupt the international order?

As with any revolutionary technology, the geopolitical implications are significant. There is undeniably immense potential for AI to improve livelihoods in lower-income countries. However, AI also risks exacerbating existing inequalities, entrenching our reliance on fossil fuels, and perpetuating existing power structures.

Data centres are concentrated in high-income countries, and AI models are largely trained on data from high-income countries. Meanwhile, the mining of hardware materials and the disposal of e-waste typically take place in lower-income countries.

The profits from these technologies are, of course, geographically concentrated, as well. According to the International Energy Agency, developing and emerging economies, excluding China, are home to 50% of internet users today but only 10% of the global data centre capacity. In 2024, 85% of data centre electricity consumption came from the US (45%), China (25%), and Europe (15%) alone. These considerations compound the risk that AI can increase global income inequality and concentrate geopolitical power.

This disparity can deepen at the local level if data centres drive up local electricity costs or strain water supplies for nearby communities. Furthermore, if AI becomes a major source of GHG emissions, its corresponding contribution to climate change can compound these effects – almost all of which disproportionately harm lower-income communities and countries.

While AI is commonly viewed as a global force for increasing international rivalries, it also introduces opportunities for unprecedented levels of international collaboration, accessibility of data and information, and transparency in innovative technologies. Some major AI developers have published their underlying algorithms as open source. Now is the time to set norms for the industry that ensure AI offers the true benefit to humanity that it so often proclaims. With enough intentionality, these norms can be just as revolutionary as the technology itself.

Overall, do you feel optimistic or pessimistic about the potential impacts of AI?

Optimistic. That is precisely why I am so adamant that we ensure these technologies are used for good and mitigate the inevitable risks. Generally, if someone is so worried about AI that they think we should stop using it, I would humbly argue that they don’t see the multitude of positive impacts AI can have on technology innovation to advance economy-wide decarbonization efforts.

At the same time, if someone is not worried about the potential social and environmental impacts of AI, I would argue that they are either incredibly risk-tolerant or perhaps uninformed about the threats of an incredibly lucrative industry proceeding to maximize profit with no regard for human well-being.

Many AI proponents argue that energy efficiency will continue to improve, so we don’t need to worry about AI’s energy footprint. While energy efficiency has improved significantly and will continue to do so, AI’s overall energy footprint has grown alongside it. The Jevons paradox, identified over a century ago, points out that efficiency gains, which also reduce the price of inputs, can have the counterintuitive effect of increasing overall consumption of those inputs – in this case, electricity. In the absence of some reasonable guardrails imposed by consumers, researchers or policymakers, AI can have significant negative social and environmental impacts around the world.
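The rebound dynamic can be made concrete with invented numbers: suppose per-query energy falls tenfold while usage, spurred by cheaper and better AI, grows fiftyfold. Nothing below reflects real measurements; it only illustrates the arithmetic behind the paradox.

```python
# Back-of-envelope illustration of a Jevons-style rebound, with
# invented numbers: per-query energy falls sharply, but if usage
# grows faster, total electricity demand still rises.

energy_per_query_wh = 3.0      # hypothetical baseline energy per query (Wh)
queries_per_day = 1_000_000    # hypothetical baseline usage

# Suppose efficiency improves 10x ...
new_energy_per_query_wh = energy_per_query_wh / 10
# ... but cheaper, more capable AI drives usage up 50x.
new_queries_per_day = queries_per_day * 50

baseline_kwh = energy_per_query_wh * queries_per_day / 1000
new_kwh = new_energy_per_query_wh * new_queries_per_day / 1000
print(f"Daily demand before: {baseline_kwh:,.0f} kWh")
print(f"Daily demand after:  {new_kwh:,.0f} kWh")
# Demand ends up 5x higher despite the 10x efficiency gain.
```

Whether real-world AI usage growth outpaces efficiency gains to this degree is exactly the empirical question that better data transparency would help answer.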

Nonetheless, these are great technologies with immense potential for positively impacting the world. In these early stages, it is key to set norms that optimize for social welfare and exert some degree of consumer power to influence the practices of these AI companies.

Featuring

William Babis

Associate Scientist

SEI US