The AI Crackdown Is Coming

Five ways for Washington to hold Silicon Valley accountable

Illustration by Joanne Imperio / The Atlantic

In April, lawyers for the airline Avianca noticed something strange. A passenger, Robert Mata, had sued the airline, alleging that a serving cart on a flight had struck and severely injured his left knee, but several cases cited in Mata’s lawsuit didn’t appear to exist. The judge couldn’t verify them, either. It turned out that ChatGPT had made them all up, fabricating names and decisions. One of Mata’s lawyers, Steven A. Schwartz, had used the chatbot as an assistant—his first time using the program for legal research—and, as Schwartz wrote in an affidavit, “was unaware of the possibility that its content could be false.”

The incident was only one in a litany of instances of generative AI spreading falsehoods, not to mention financial scams, nonconsensual porn, and more. Tech companies are marketing their AI products and potentially reaping enormous profits, with little accountability or legal oversight for the real-world damage those products can cause. The federal government is now trying to catch up.

Late last month, the Biden administration announced that seven tech companies at the forefront of AI development had agreed to a set of voluntary commitments to ensure that their products are “safe, secure, and trustworthy.” Those commitments follow a flurry of White House summits on AI, congressional testimonies on regulating the technology, and declarations from various government agencies that they are taking AI seriously. In the announcement, OpenAI, Microsoft, Google, Meta, and others pledged to subject their products to third-party testing, invest in bias reduction, and be more transparent about their AI systems’ capabilities and limitations.

The language is promising but also just a promise, lacking enforcement mechanisms and details about next steps. Regulating AI requires a lumbering bureaucracy to take on notoriously secretive companies and rapidly evolving technologies. Much of the Biden administration’s language apes tech luminaries’ PR lines about their products’ world-ending capacities, such as engineering bioweapons and building machines that “self-replicate.” Government action will be essential for safeguarding people’s lives and livelihoods—not just from the supposed long-term threat of evil, superintelligent machines, but also from everyday threats. Generative AI has already exhibited gross biases and potential for misuse. And for more than a decade, less advanced but similarly opaque and often discriminatory algorithms have been used to screen résumés and determine credit scores, as well as in diagnostic software and facial-recognition tools.

I spoke with a number of experts and walked away with a list of five of the most effective ways the government could regulate AI to protect the country against the tech’s quotidian risks, as well as its more hypothetical, apocalyptic dangers.

1. Don’t take AI companies’ word on anything.

A drug advertised for chemotherapy has to demonstrably benefit cancer patients in clinical trials, such as by shrinking tumors, and then get FDA approval. Then its manufacturer has to disclose side effects patients might experience. But no such accountability exists for AI products. “Companies are making claims about AI being able to do X or Y thing, but then not substantiating that they can,” Sarah Myers West, the managing director of the AI Now Institute and a former senior FTC adviser on AI, told me. Numerous tech firms have been criticized for misrepresenting how biased or effective their algorithms are, or providing almost no evidence with which to evaluate them.

Mandating that AI tools undergo third-party testing to ensure that they meet agreed-upon metrics of bias, accuracy, and interpretability “is a really important first step,” Alexandra Givens, the president of the Center for Democracy and Technology, a nonprofit that advocates for privacy and human rights on the internet and receives some funding from the tech industry, told me. Companies could be compelled to disclose information about how their programs were trained, the software’s limitations, and how they mitigated potential harms. “Right now, there’s extraordinary information asymmetry,” she said—tech companies tend to reveal very little about how they train and validate their software. An audit could involve testing how often, say, a computer-vision program misrecognizes Black versus white faces or whether chatbots associate certain jobs with stereotypical gender roles (ChatGPT once stated that attorneys cannot be pregnant, because attorneys must be men).
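What such a test could look like in miniature: the sketch below is a hypothetical illustration, not any agency’s actual benchmark; the sample data, group labels, and one-percentage-point threshold are all placeholders. It compares a computer-vision system’s misidentification rates across demographic groups and flags the gap between them.

```python
# A minimal, hypothetical sketch of one audit metric: comparing a vision model's
# misidentification rates across demographic groups. The data, labels, and
# acceptable gap are placeholders, not an official standard.
from collections import defaultdict

def misidentification_rates(records):
    """records: list of dicts with 'group', 'predicted_id', and 'true_id' keys."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted_id"] != r["true_id"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

def audit_disparity(records, max_gap=0.01):
    """Flag the model if error rates differ across groups by more than max_gap."""
    rates = misidentification_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "passes": gap <= max_gap}

# Example with made-up evaluation records:
sample = [
    {"group": "Black", "predicted_id": "A", "true_id": "B"},
    {"group": "Black", "predicted_id": "C", "true_id": "C"},
    {"group": "white", "predicted_id": "D", "true_id": "D"},
    {"group": "white", "predicted_id": "E", "true_id": "E"},
]
print(audit_disparity(sample))
```

A real audit would run such checks on far larger, carefully sampled test sets, but the principle is the same: report the gap between groups, not just the overall accuracy.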

All of the experts I spoke with agreed that the tech companies themselves shouldn’t be able to declare their own products safe. Otherwise, there is a substantial risk of “audit washing”—in which a dangerous product gains legitimacy from a meaningless stamp of approval, Ellen Goodman, a law professor at Rutgers, told me. Although numerous proposals currently call for after-the-fact audits, others have called for safety assessments to start much earlier. The potentially high-stakes applications of AI mean that these companies should “have to prove their products are not harmful before they can release them into the marketplace,” Safiya Noble, an internet-studies scholar at UCLA, told me.

Clear benchmarks and licenses are also crucial: A government standard would not be effective if watered down, and a hodgepodge of safety labels would breed confusion to the point of illegibility, much like the muddled distinctions among free-range, cage-free, and pasture-raised eggs.

2. We don’t need a Department of AI.

Establishing basic assessments of and disclosures about AI systems wouldn’t require a new government agency, even though that’s what some tech executives have called for. Existing laws apply to many uses of AI: therapy bots, automated financial assistants, search engines promising truthful responses. In turn, the relevant federal agencies have the subject expertise to enforce those laws; for instance, the FDA might have to assess and approve a therapy bot much as it would a medical device. “In naming a central AI agency that’s going to do all the things, you lose the most important aspect of algorithmic assessment,” Givens said, “which is, what is the context in which it is being deployed, and what is the impact on that particular set of communities?”

A new AI department could run the risk of creating regulatory capture, with major AI companies staffing, advising, and lobbying the agency. Instead, experts told me, they’d like to see more funding for existing agencies to hire staff and develop expertise on AI, which might require action from Congress. “There could be a very aggressive way in which existing enforcement agencies could be more empowered to do this if you provided them more resources,” Alex Hanna, the director of research at the Distributed AI Research Institute, told me.

3. The White House can lead by example.

Far-reaching legislation to regulate AI could take years and face challenges from tech companies in court. Another, possibly faster approach would be for the federal government to lead by example in the AI models it uses, the research it supports, and the funding it disburses. For instance, earlier this year, a federal task force recommended that the government commit $2.6 billion to funding AI research and development. Any company hoping to access those resources could be required to meet a number of standards, which could lead to industry-wide adoption—somewhat akin to the tax incentives and subsidies encouraging green energy in the Inflation Reduction Act.

The government is also a major purchaser and user of AI itself, and could require its vendors to subject themselves to audits and release transparency reports. “The biggest thing the Biden administration can do is make it binding administration policy that AI can only be purchased, developed, used if it goes through meaningful testing for safety, efficacy, nondiscrimination, and protecting people’s privacy,” Givens told me.

4. AI needs a tamper-proof seal.

Deepfakes and other synthetic media—images, videos, and audio clips that an AI system can whip up in seconds—have already spread misinformation and been used in nonconsensual pornography. Last month’s voluntary commitments include developing a watermark to tell users they are interacting with AI-generated content, but the language is vague and the path forward unclear. Many existing methods of watermarking, such as the block of rainbow pixels at the bottom of any image generated by DALL-E 2, are easy to manipulate or remove. A more robust method would involve logging where, when, and how a piece of media was created—like a digital stamp from a camera—as well as every edit it undergoes. Companies including Adobe, Microsoft, and Sony are already working to implement one such standard, although such approaches might be difficult for the public to understand.
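To make the “digital stamp” idea concrete, here is a minimal, hypothetical sketch (not the actual standard Adobe, Microsoft, and Sony are building): every capture or edit event is appended to a log whose entries are chained together with cryptographic hashes, so altering any earlier entry breaks the chain.

```python
# A hypothetical sketch of tamper-evident provenance, not an existing standard:
# each event (capture, generation, edit) is hashed together with the previous
# entry, so rewriting the history is detectable.
import hashlib
import json
import time

def _entry_hash(prev_hash, event):
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_event(log, event):
    """Add a capture/edit event (a dict) to the provenance log."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    event = dict(event, timestamp=time.time())
    log.append({"event": event, "hash": _entry_hash(prev_hash, event)})
    return log

def verify(log):
    """Recompute the chain; any altered entry changes every later hash."""
    prev_hash = "genesis"
    for entry in log:
        if entry["hash"] != _entry_hash(prev_hash, entry["event"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_event(log, {"action": "generated", "tool": "text-to-image model"})
append_event(log, {"action": "cropped", "tool": "photo editor"})
print(verify(log))                  # True: the history checks out
log[0]["event"]["tool"] = "camera"  # tamper with the record
print(verify(log))                  # False: the chain no longer verifies
```

A real provenance standard would also need signed entries and trusted device keys; the point of the sketch is only that every edit leaves a verifiable trail.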

Sam Gregory, the executive director of the human-rights organization Witness, told me that government standards for labeling AI-generated content would need to be enforced throughout the AI supply chain by everybody from the makers of text-to-image models to app and web-browser developers. We need a tamper-proof seal, not a sticker.

To encourage the adoption of a standard way to denote AI content, Goodman told me, the government could mandate that web browsers, computers, and other devices recognize the label. Such a mandate would be similar to the federal requirement that new televisions include a part, known as a “V-chip,” that recognizes the maturity ratings set by the TV industry, which parents can use to block programs.

5. Build ways for people to protect their work from AI.

Multiple high-profile lawsuits are currently accusing AI models, such as ChatGPT and the image-generator Midjourney, of stealing writers’ and artists’ work. Intellectual property has become central to debates over generative AI, and two general types of copyright infringement are at play: the images, text, and other data the models are trained on, and the images and text they spit back out.

On the input side, allegations that generative-AI models are violating copyright law may stumble in court, Daniel Gervais, a law professor at Vanderbilt, told me. Making copies of images, articles, videos, and other media online to develop a training dataset likely falls under “fair use,” because training an AI model on the material meaningfully transforms it. The standard for proving copyright violations on the output side may also pose difficulties, because proving that an AI output is similar to a specific copyrighted work—not just in the style of Kehinde Wiley, but the spitting image of one of his paintings—is a high legal threshold.

Gervais said he imagines that a market-negotiated agreement between rights-holders and AI developers will arrive before any sort of legal standard. In the EU, for instance, artists and writers can opt out of having their work used to train AI, which could incentivize a deal that’s in the interest of both artists and Silicon Valley. “Publishers see this as a source of income, and the tech companies have invested so much in their technology,” Gervais said. Another possible option would be an even more stringent opt-in standard, which would require anybody owning copyrighted material to provide explicit permission for their data to be used. In the U.S., Gervais said, an option to opt out may be unnecessary, because existing law may already give creators leverage: A law passed to protect copyright on the internet makes it illegal to strip a file of its “copyright management information,” such as labels with the work’s creator and date of publication, and many observers allege that creating datasets to train generative AI violates that law. The fine for removing such information could run up to tens of thousands of dollars per work, and even higher for other copyright infringements—a financial risk that, multiplied by perhaps millions of violations in a dataset, could be too big for companies to take.
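A rough, purely illustrative calculation shows the scale; both figures below are hypothetical placeholders chosen only to match the “tens of thousands per work” and “millions of violations” framing above.

```python
# Back-of-envelope only: both numbers are hypothetical placeholders,
# not figures from any actual case or dataset.
fine_per_work = 25_000          # "tens of thousands of dollars" per work
works_in_dataset = 1_000_000    # "perhaps millions of violations"
print(f"${fine_per_work * works_in_dataset:,}")  # prints $25,000,000,000
```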


Few, if any, of these policies are guaranteed. They face numerous practical, political, and legal hurdles, not least of which is Silicon Valley’s formidable lobbying arm. Nor will such regulations alone be enough to stop all the ways the tech can negatively affect Americans. AI is rife with the privacy violations, monopolistic business practices, and poor treatment of workers that have plagued the tech industry for years.


But some sort of regulation is coming: The Biden administration has said it is working on bipartisan legislation, and it promised guidance on the responsible use of AI by federal agencies before the end of summer; numerous bills are pending before Congress. Until then, tech companies may just continue to roll out new and untested products, no matter who or what is steamrolled in the process.

Matteo Wong is an associate editor at The Atlantic.