The AI Debate Is Happening in a Cocoon

The risks posed by new technologies are not science fiction. They are real.

Much of the time, discussions about artificial intelligence are far removed from the realities of how it’s used in today’s world. Earlier this year, executives at Anthropic, Google DeepMind, OpenAI, and other AI companies declared in a joint letter that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” In the lead-up to the AI summit that he recently convened, British Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” Existential risks—or x-risks, as they’re sometimes known in AI circles—evoke blockbuster science-fiction movies and play to many people’s deepest fears.

But AI already poses economic and physical threats—ones that disproportionately harm society’s most vulnerable people. Some individuals have been incorrectly denied health-care coverage, or kept in custody based on algorithms that purport to predict criminality. Human life is explicitly at stake in certain applications of artificial intelligence, such as AI-enabled target-selection systems like those the Israeli military has used in Gaza. In other cases, governments and corporations have used artificial intelligence to disempower members of the public and conceal their own motivations in subtle ways: in unemployment systems designed to embed austerity politics; in worker-surveillance systems meant to erode autonomy; in emotion-recognition systems that, despite being based on flawed science, guide decisions about whom to recruit and hire.

Our organization, the AI Now Institute, was among a small number of watchdog groups present at Sunak’s summit. We sat at tables where world leaders and technology executives pontificated over threats to hypothetical (disembodied, raceless, genderless) “humans” on the uncertain horizon. The event underscored how most debates about the direction of AI happen in a cocoon.

The term artificial intelligence has meant different things over the past seven decades, but the current version of AI is a product of the enormous economic power that major tech firms have amassed in recent years. The resources needed to build AI at scale—massive data sets, access to computational power to process them, highly skilled labor—are profoundly concentrated among a small handful of firms. And the field’s incentive structures are shaped by the business needs of industry players, not by the public at large.

“In Battle With Microsoft, Google Bets on Medical AI Program to Crack Healthcare Industry,” a Wall Street Journal headline declared this summer. The two tech giants are racing each other, and smaller competitors, to develop chatbots intended to help doctors—particularly those working in under-resourced clinical settings—retrieve data quickly and find answers to medical questions. Google has tested a large language model called Med-PaLM 2 in several hospitals, including within the Mayo Clinic system. The model has been trained on questions and answers from medical-licensing exams.

The tech giants excel at rolling out products that work reasonably well for most people but that fail entirely for others, almost always people structurally disadvantaged in society. The industry’s tolerance for such failures is an endemic problem, but the danger they pose is greatest in health-care applications, which must operate at a high standard of safety. Google’s own research raises significant doubts. According to a July article in Nature by company researchers, clinicians found that 18.7 percent of answers produced by a predecessor AI system, Med-PaLM, contained “inappropriate or incorrect content”—in some instances, errors of great clinical significance—and 5.9 percent of answers were likely to contribute to some level of harm, including “death or severe harm” in a few cases. A preprint study, not yet peer reviewed, suggests that Med-PaLM 2 performs better on a number of measures, but many aspects of the model, including the extent to which doctors are using it in discussions with real-life patients, remain mysterious.

“I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey,” Greg Corrado, a senior research director at Google who worked on the system, told The Wall Street Journal. The danger is that such tools will become enmeshed in medical practice without any formal, independent evaluation of their performance or their consequences.

The policy advocacy of industry players is expressly designed to evade scrutiny for the technology they’re already releasing for public use. Big AI companies wave off concerns about their own market power, their enormous incentives to engage in rampant data surveillance, and the potential impact of their technologies on the labor force, especially workers in creative industries. The industry instead attends to hypothetical dangers posed by “frontier AI” and shows great enthusiasm for voluntary measures such as “red-teaming,” in which companies deploy groups of hackers to simulate hostile attacks on their own AI systems, on their own terms.

Fortunately, the Biden administration is focusing more intently than Sunak’s government on immediate risks. Last week, the White House released a landmark executive order with wide-ranging provisions addressing AI’s effects on competition, labor, civil rights, the environment, privacy, and security. In a speech at the U.K. summit, Vice President Kamala Harris emphasized urgent threats, such as disinformation and discrimination, that are evident right now. Regulators elsewhere are taking the problem seriously too. The European Union is finalizing a law that would, among other things, impose far-reaching controls on AI technologies that it deems to be high risk and force companies to disclose summaries of the copyrighted data they use to train AI tools. Such measures annoy the tech industry—earlier this year, OpenAI’s CEO, Sam Altman, accused the EU of “overregulating” and briefly threatened to pull out of the bloc—but they are well within the proper reach of democratic lawmaking.

The United States needs a regulatory regime that scrutinizes the many applications of AI systems that have already come into wide use in cars, schools, workplaces, and elsewhere. At present, AI companies that flout the law have little to fear. (When the Federal Trade Commission fined Facebook $5 billion in 2019 for data-privacy violations, it was one of the largest penalties the government had ever assessed—and a minor hindrance to a highly profitable company.) The most significant AI development is taking place on top of infrastructure owned and operated by a few Big Tech firms. A major risk in this environment is that executives at the biggest firms will successfully present themselves as the only real experts in artificial intelligence and expect regulators and lawmakers to stand aside.

Americans shouldn’t let the same firms that built the broken surveillance business model for the internet also set self-serving terms for the future trajectory of AI. Citizens and their democratically elected representatives need to reclaim the debate about whether (not just how or when) AI systems should be used. Notably, many of the biggest advances in tech regulation in the United States, such as bans by individual cities on police use of facial recognition and state limits on worker surveillance, began with organizers in communities of color and labor-rights movements that are typically underrepresented in policy conversations and in Silicon Valley. Society should feel comfortable drawing red lines to prohibit certain kinds of activities: using AI to predict criminal behavior, making workplace decisions based on pseudoscientific emotion-recognition systems.

The public has every right to demand independent evaluation of new technologies and to deliberate publicly on the findings, to seek access to the data sets used to train AI systems, and to define and prohibit categories of AI that should never be built at all—not just because they might someday start enriching uranium or engineering deadly pathogens on their own initiative but because they violate citizens’ rights or endanger human health in the near term. The well-funded campaign to reset the AI-policy agenda around threats on the frontier gives a free pass to companies with stakes in the present. The first step in asserting public control over AI is to seriously rethink who is leading the conversation on AI-regulation policy and whose interests such conversations serve.

Amba Kak, the executive director of the AI Now Institute, is a former global-policy adviser at Mozilla and a former senior adviser on artificial intelligence at the Federal Trade Commission.
Sarah Myers West, the managing director of the AI Now Institute, is a visiting research scientist at the Network Science Institute at Northeastern University and a former senior adviser on artificial intelligence at the Federal Trade Commission.