The White House Already Knows How to Make AI Safer

The US already has a road map for the deployment of AI systems. Biden's promised executive order just needs to put these guidelines into practice.

Ever since the White House released the Blueprint for an AI Bill of Rights last fall (a document that I helped develop during my time at the Office of Science and Technology Policy), there’s been a steady drip of announcements from the executive branch, including requests for information, strategic plan drafts, and regulatory guidance. The latest entry in this policy pageant, announced last week, is that the White House got the CEOs of the most prominent AI-focused companies to voluntarily commit to being a little more careful about checking the systems they roll out.

There are a few sound practices within these commitments: We should carefully test AI systems for potential harms before deploying them; the results should be evaluated independently; and companies should focus on designing AI systems that are safe to begin with, rather than bolting safety features on after the fact. The problem is that these commitments are vague and voluntary. “Don’t be evil,” anyone?

Legislation is needed to ensure that private companies live up to their commitments. But we should not overlook the federal government’s outsize influence on AI practices. As a large employer and user of AI technology, a major customer for AI systems, a regulator, and a funder of many state and local programs, the federal government can make a real difference by changing how it acts, even in the absence of legislation.

If the government actually wants to make AI safer, it must issue the executive order promised at last week’s meeting, alongside specific guidance that the Office of Management and Budget—the most powerful office you’ve never heard of—will give to agencies. We don’t need innumerable hearings, forums, requests for information, or task forces to figure out what this executive order should say. Between the Blueprint and the AI risk management framework developed by the National Institute of Standards and Technology (NIST), we already have a road map for how the government should oversee the deployment of AI systems in order to maximize their ability to help people and minimize the likelihood that they cause harm.

The Blueprint and the NIST framework are detailed and extensive, together adding up to more than 130 pages. They lay out important practices for every stage of developing these systems: how to involve all stakeholders (including the public and its representatives) in the design process; how to evaluate whether a system as designed will serve the needs of all, and whether it should be deployed at all; and how to test systems and independently evaluate them for safety, effectiveness, and bias prior to deployment. These frameworks also outline how to continually monitor systems after deployment to ensure that their behavior has not deteriorated. They stipulate that entities using AI systems must fully disclose where those systems are in use and provide clear, intelligible explanations of why a system produces a particular prediction, outcome, or recommendation for an individual. The guidelines also describe mechanisms for individuals to appeal and seek timely recourse when systems fail or produce unfavorable outcomes, and what an overarching governance structure for these systems should look like. All of these recommendations are backed by concrete implementation guidelines and reflect more than a decade of research and development in responsible AI.

An executive order can enshrine these best practices in at least four ways. First, it could require all government agencies developing, using, or deploying AI systems that affect people’s lives and livelihoods to ensure that these systems comply with best practices. For example, the federal government might make use of AI to determine eligibility for public benefits and identify irregularities that might trigger an investigation. A recent study showed that IRS auditing algorithms might be implicated in disproportionately high audit rates for Black taxpayers. If the IRS were required to comply with these guidelines, it would have to address this issue promptly.

Second, it could instruct any federal agency procuring an AI system that has the potential to “meaningfully impact [our] rights, opportunities, or access to critical resources or services” to require that the system comply with these practices and that vendors provide evidence of this compliance. This recognizes the federal government’s power as a customer to shape business practices. After all, it is the biggest employer in the country and could use its buying power to dictate best practices for the algorithms that are used to, for instance, screen and select candidates for jobs.

Third, the executive order could demand that anyone taking federal dollars (including state and local entities) ensure that the AI systems they use comply with these practices. This recognizes the important role of federal investment in states and localities. For example, AI has been implicated in many components of the criminal justice system, including predictive policing, surveillance, pre-trial incarceration, sentencing, and parole. Although most law enforcement practices are local, the Department of Justice offers federal grants to state and local law enforcement and could attach conditions to these funds stipulating how to use the technology.

Finally, this executive order could direct agencies with regulatory authority to update and expand their rulemaking to cover processes within their jurisdiction that involve AI. Some initial efforts to regulate the use of AI in medical devices, hiring algorithms, and credit scoring are already underway, and these initiatives could be expanded further. Worker surveillance and property valuation systems are just two examples of areas that would benefit from this kind of regulatory action.

Of course, the testing and monitoring regime for AI systems that I’ve outlined here is likely to provoke a range of concerns. Some may argue, for example, that other countries will overtake us if we slow down to implement such guardrails. But other countries are busy passing their own laws that place extensive restrictions on AI systems, and any American businesses seeking to operate in these countries will have to comply with their rules. The EU is about to pass an expansive AI Act that includes many of the provisions I described above, and even China is placing limits on commercially deployed AI systems that go far beyond what we are currently willing to consider.

Others may worry that this expansive set of requirements would be hard for a small business to comply with. This could be addressed by linking the requirements to the degree of impact: A piece of software that can affect the livelihoods of millions should be thoroughly vetted, regardless of how big or small the developer is. An AI system that individuals use for recreational purposes shouldn’t be subject to the same strictures.

There are also likely to be concerns about whether these requirements are practical. Here again, it’s important not to underestimate the federal government’s power as a market maker. An executive order that calls for testing and validation frameworks will give businesses an incentive to translate best practices into viable commercial testing regimes. The responsible AI sector is already filling with firms that provide algorithmic auditing and evaluation services, industry consortia that issue detailed guidelines vendors are expected to comply with, and large consulting firms that offer guidance to their clients. And nonprofit, independent entities like Data & Society (disclaimer: I sit on its board) have set up entire labs to develop tools that assess how AI systems will affect different populations.

We’ve done the research, we’ve built the systems, and we’ve identified the harms. There are established ways to make sure that the technology we build and deploy can benefit all of us while reducing harms for those who are already buffeted by a deeply unequal society. The time for studying is over—now the White House needs to issue an executive order and take action.

