Hollywood’s worst-case scenario involving artificial intelligence (AI) is familiar as a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieve sentience, and inevitably turn into evil overlords that attempt to destroy the human race. This narrative capitalizes on our innate fear of technology, a reflection of the profound change that often accompanies new technological developments.
However, as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, “AI doesn’t have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem.”
In interviews with AI experts, IEEE Spectrum has uncovered six real-world AI worst-case scenarios that are far more mundane than those depicted in the movies, but no less dystopian. Most don’t require a malevolent dictator to bring them to fruition; they could simply happen by default, unfolding organically if nothing is done to stop them. To prevent these worst-case scenarios, we must abandon our pop-culture notions of AI and get serious about its unintended consequences.
1. When Fiction Defines Our Reality…
Unnecessary tragedy may strike if we allow fiction to define our reality. But what choice is there when we can’t tell the difference between what is real and what is false in the digital world?
In a terrifying scenario, the rise of deepfakes—fake images, video, audio, and text generated with advanced machine-learning tools—may someday lead national-security decision-makers to take real-world action based on false information, leading to a major crisis, or worse yet, a war.
Andrew Lohn, senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), says that “AI-enabled systems are now capable of generating disinformation at [large scales].” By producing greater volumes and variety of fake messages, these systems can obfuscate their true nature and optimize for success, improving their desired impact over time.
The mere notion of deepfakes amid a crisis might also cause leaders to hesitate to act if the validity of information cannot be confirmed in a timely manner.
Marina Favaro, research fellow at the Institute for Peace Research and Security Policy in Hamburg, Germany, notes that “deepfakes compromise our trust in information streams by default.” Both action and inaction caused by deepfakes have the potential to produce disastrous consequences for the world.
2. A Dangerous Race to the Bottom
When it comes to AI and national security, speed is both the point and the problem. Because AI-enabled systems give their users a speed advantage, the first countries to develop military applications will gain a strategic edge. But what design principles might be sacrificed in the process?
Things could unravel quickly if even the tiniest flaws in these systems are found and exploited by hackers.
Helen Toner, director of strategy at CSET, suggests a crisis could “start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control.”
Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI), in Sweden, warns that major catastrophes can occur “when major powers cut corners in order to win the advantage of getting there first. If one country prioritizes speed over safety, testing, or human oversight, it will be a dangerous race to the bottom.”
For example, national-security leaders may be tempted to delegate decisions of command and control, removing human oversight of machine-learning models that we don’t fully understand, in order to gain a speed advantage. In such a scenario, even an automated launch of missile-defense systems initiated without human authorization could produce unintended escalation and lead to nuclear war.
3. The End of Privacy and Free Will
With every digital action, we produce new data—emails, texts, downloads, purchases, posts, selfies, and GPS locations. By allowing companies and governments to have unrestricted access to this data, we are handing over the tools of surveillance and control.
With the addition of facial recognition, biometrics, genomic data, and AI-enabled predictive analysis, Lohn of CSET worries that “we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”
Michael C. Horowitz, director of Perry World House, at the University of Pennsylvania, warns “about the logic of AI and what it means for domestic repression. In the past, the ability of autocrats to repress their populations relied upon a large group of soldiers, some of whom may side with society and carry out a coup d’état. AI could reduce these kinds of constraints.”
The power of data, once collected and analyzed, extends far beyond the functions of monitoring and surveillance to allow for predictive control. Today, AI-enabled systems predict what products we’ll purchase, what entertainment we’ll watch, and what links we’ll click. When these platforms know us far better than we know ourselves, we may not notice the slow creep that robs us of our free will and subjects us to the control of external forces.
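To make the mechanism concrete, here is a minimal sketch in Python of the kind of click-prediction model such platforms run at vast scale. The features, data, and numbers are invented for illustration; real systems use far richer signals and far larger models.

```python
# Toy click-prediction model; all features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one (user, link) pair: [hour_of_day, topic_match, past_clicks]
X = np.array([
    [9,  1, 12],
    [22, 1, 30],
    [14, 0,  2],
    [23, 0,  1],
    [20, 1, 25],
    [8,  0,  3],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = the user clicked the link

model = LogisticRegression().fit(X, y)

# Predict how likely a given user is to click a candidate link.
candidate = np.array([[21, 1, 28]])
print(model.predict_proba(candidate)[0, 1])  # estimated probability of a click
```

Multiply this by billions of interactions a day and the “slow creep” becomes clear: each prediction nudges what we see next.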
4. A Human Skinner Box
The ability of children to delay immediate gratification, to wait for the second marshmallow, was once considered a major predictor of success in life. Soon even the second-marshmallow kids will succumb to the tantalizing conditioning of engagement-based algorithms.
Social media users have become rats in lab experiments, living in human Skinner boxes, glued to the screens of their smartphones, compelled to sacrifice more precious time and attention to platforms that profit from it at their expense.
Helen Toner of CSET says that “algorithms are optimized to keep users on the platform as long as possible.” By offering rewards in the form of likes, comments, and follows, Malcolm Murdock explains, “the algorithms short-circuit the way our brain works, making our next bit of engagement irresistible.”
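Toner’s description maps onto a textbook reinforcement-learning pattern. The sketch below is a simple epsilon-greedy multi-armed bandit, with invented content categories and simulated engagement numbers (our illustration, not any platform’s actual system), showing how a feed can “learn” which kind of post keeps a user hooked longest.

```python
import random

# Toy epsilon-greedy bandit: each "arm" is a content type the feed can show.
# Rewards are simulated minutes of engagement; all numbers are invented.
ARMS = ["news", "outrage", "cute-animals", "friends"]
TRUE_MEAN_MINUTES = {"news": 1.0, "outrage": 3.5, "cute-animals": 2.0, "friends": 1.5}

counts = {arm: 0 for arm in ARMS}
means = {arm: 0.0 for arm in ARMS}
EPSILON = 0.1  # fraction of the time we explore a random arm

for _ in range(10_000):
    if random.random() < EPSILON:
        arm = random.choice(ARMS)        # explore
    else:
        arm = max(ARMS, key=means.get)   # exploit the best arm seen so far
    reward = random.gauss(TRUE_MEAN_MINUTES[arm], 1.0)  # simulated engagement
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # running average

print(max(ARMS, key=means.get))  # typically converges on "outrage"
```

The algorithm never needs to understand content; it only needs a feedback signal that rewards whatever keeps us scrolling.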
To maximize advertising profit, companies steal our attention away from our jobs, families and friends, responsibilities, and even our hobbies. To make matters worse, the content often makes us feel miserable and worse off than before. Toner warns that “the more time we spend on these platforms, the less time we spend in the pursuit of positive, productive, and fulfilling lives.”
5. The Tyranny of AI Design
Every day, we turn over more of our daily lives to AI-enabled machines. This is problematic since, as Horowitz observes, “we have yet to fully wrap our heads around the problem of bias in AI. Even with the best intentions, the design of AI-enabled systems, both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them. And we all have our biases.”
As a result, Lydia Kostopoulos, senior vice president of emerging tech insights at the Clearwater, Fla.–based IT security company KnowBe4, argues that “many AI-enabled systems fail to take into account the diverse experiences and characteristics of different people.” Since AI solves problems based on biased perspectives and data rather than the unique needs of every individual, such systems produce a level of conformity that doesn’t exist in human society.
Even before the rise of AI, the design of common objects in our daily lives has often catered to a particular type of person. For example, studies have shown that cars, hand-held tools including cellphones, and even the temperature settings in office environments have been designed to suit the average-size man, putting people of varying sizes and body types, including women, at a major disadvantage and sometimes at greater risk to their lives.
When individuals who fall outside of the biased norm are neglected, marginalized, and excluded, AI turns into a Kafkaesque gatekeeper, denying access to customer service, jobs, health care, and much more. AI design decisions can restrain people rather than liberate them from day-to-day concerns. And these choices can also transform some of the worst human prejudices into racist and sexist hiring and mortgage practices, as well as deeply flawed and biased sentencing outcomes.
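One way to grasp how such bias surfaces is to measure it. The sketch below computes a disparate-impact ratio (the “four-fifths rule” used in U.S. employment analysis) on the decisions of a hypothetical screening model; every number in it is invented for illustration.

```python
# Measuring disparate impact in a hypothetical model's hiring decisions.
# All decisions below are invented for illustration.
decisions = [
    # (group, model_said_hire)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in `group` the model selected."""
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = selection_rate("group_b") / selection_rate("group_a")
print(f"disparate-impact ratio: {ratio:.2f}")  # below 0.8 fails the four-fifths rule
```

A model can fail this simple check even when no one intended it to discriminate; the skew was already in the training data.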
6. Fear of AI Robs Humanity of Its Benefits
Since today’s AI runs on data sets, advanced statistical models, and predictive algorithms, the process of building machine intelligence ultimately centers around mathematics. In that spirit, says Murdock, “linear algebra can do insanely powerful things if we’re not careful.” But what if people become so afraid of AI that governments regulate it in ways that rob humanity of AI’s many benefits? For example, DeepMind’s AlphaFold program achieved a major breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. This milestone will provide a fruitful foundation for the rapid advancement of the life sciences. Consider the benefits of improved communication and cross-cultural understanding made possible by seamlessly translating across any combination of human languages, or the use of AI-enabled systems to identify new treatments and cures for disease. Knee-jerk regulatory actions by governments to protect against AI’s worst-case scenarios could backfire and produce their own unintended negative consequences: we could become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world.
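Those AlphaFold predictions are openly available for download. As a hedged example, the sketch below assumes the public AlphaFold Protein Structure Database REST API at alphafold.ebi.ac.uk (the endpoint and field names are as documented at the time of writing and may change) and fetches the predicted structure of human hemoglobin subunit alpha, UniProt ID P69905.

```python
# Fetch a predicted protein structure from the public AlphaFold database.
# Assumes the EBI AlphaFold DB REST endpoint; field names may change.
import requests

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}"

entry = requests.get(url, timeout=30).json()[0]  # one prediction per entry
pdb_url = entry["pdbUrl"]                        # link to the structure file

pdb_text = requests.get(pdb_url, timeout=30).text
with open(f"{UNIPROT_ID}.pdb", "w") as f:
    f.write(pdb_text)
print(f"saved {UNIPROT_ID}.pdb ({len(pdb_text)} bytes)")
```

That a researcher anywhere can pull a predicted structure with a few lines of code is exactly the kind of benefit that overbroad regulation could put out of reach.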
This article appears in the January 2022 print issue as “AI’s Real Worst-Case Scenarios.”
https://spectrum.ieee.org/ai-worst-case-scenarios