Perspective

Technological disruption: Will artificial intelligence solve global problems or widen equity gaps?

A Fourth Industrial Revolution has begun, created by technological advances that have radically altered how we live, work and interact. Artificial intelligence (AI) is a key driver of this disruption, and is forging ahead, largely in the absence of regulation. (Indeed, one could wonder whether this was written by a human or ChatGPT.) What are the implications for society? Will humanity wield AI in ways that help solve climate challenges? Or will this tool widen existing global disparities?

Ulrika Lamberth / Published on 20 December 2022

This perspective is part of SEI’s “Currents 2023” project examining key global issues on the horizon. Join us for the online event on 11 January.


A self-driving vehicle. Photo: Bernd Dittrich / Unsplash.

Advances in machine learning and sheer computing power have changed how science is conducted and how we live our lives. At this frontier, technologies are making it possible to harness information in ways that would previously have been impossible, to wrestle with data at speeds that were once unimaginable, and to upend the status quo in ways that are still not completely understood.

As these new tools come into wider use, questions are growing. How will they be used? Who will control them? Will their enormous power help solve global sustainability and climate issues? Or will new technologies exacerbate existing economic and social disparities – and potentially generate new problems and risks?

AI has the potential to personalize learning, to accelerate medical advances, and to help solve the climate crisis. Indeed, SEI’s own research has begun to tap AI to answer climate questions that would otherwise be impossible to address.

A digital divide at the digital frontier

At the same time, machine learning poses unique dangers – with the power to turn governments and private industries into surveillance states that weaponize social media. Artificial “intelligence” is only as smart as the quality of the underpinning data. There is enormous potential for AI learning itself to be skewed, and for “black boxes” to arise when the types of information needed to draw a complete and accurate picture are never collected.

At the digital frontier, a digital divide is already evident. Most AI research is being conducted in countries with certain shared characteristics: high incomes, high levels of education, large populations, good infrastructure and internet access, and conditions that foster entrepreneurship. Most data are also concentrated among a handful of countries and corporations. These pioneers have the “first-mover advantage”, giving them an edge when it comes to data, digital infrastructure, research agendas, capital and their ability to set the terms on which others engage in related governance and ethical debates.

Inequalities – evident in so many aspects of life – are already an issue in AI. The most recent Government AI Readiness Index by Oxford Insights lays bare the divide: the average readiness score for North America and western Europe is more than twice that of sub-Saharan Africa and central and south Asia, the two lowest-ranked regions. Despite calls for inclusive AI practices, power dynamics around new technologies resemble those of the old ones – raising concerns about the exclusion of Global South researchers from governance debates, and about the potential for extractive and exploitative practices to arise.

Anticipating and confronting superintelligence

Even now, some experts warn that we must begin to confront the prospect of “superintelligence” that will one day exceed humans’ cognitive skills – making the technology difficult to control and even posing existential risks. The technological disruption has already begun in the workplace, with some low-skilled work increasingly automated and some high-skilled work commanding ever-higher salaries. Down the line, will machines and robots make human beings yet another endangered species?

Governing AI

Despite these questions, AI is rushing ahead – largely free of regulations, and powered by algorithms and the data we freely hand over when we use our high-tech devices. These technologies may even be altering our own collective psychology in the process.

Worldwide, social and economic disparities are expanding. Will technology simply be another force that exacerbates growing divides? What is needed to make these technological tools more open, transparent and democratic so that they can help those in need? Is a global ethical framework needed, and would such a thing have any force?

What are the implications – for equality, data protection, privacy, and transparency in decision-making? What can ensure that technological power is used as a force for good rather than as a tool that further divides people?

Written by

Ulrika Lamberth

Senior Press Officer

Communications

SEI Headquarters
