Silicon Valley’s Favorite Slogan Has Lost All Meaning

How one specific observation became a hyped-up law of everything

Close-up of an integrated circuit from 1982 (Erich Hartmann / Magnum)

In early 2021, long before ChatGPT became a household name, OpenAI CEO Sam Altman self-published a manifesto of sorts, titled “Moore’s Law for Everything.” The original Moore’s Law, formulated in 1965, describes the development of microchips, the tiny slivers of silicon that power your computer. More specifically, it predicted that the number of transistors that engineers could cram onto a chip would roughly double every year. As Altman sees it, something like that astonishing rate of progress will soon apply to housing, food, medicine, education—everything. The vision is nothing short of utopian. We ride the exponential curve all the way to paradise.

In late February, Altman invoked Moore again, this time proposing “a new version of moore’s law that could start soon: the amount of intelligence in the universe doubles every 18 months.” This claim did not go unchallenged: “Oh dear god what nonsense,” replied Grady Booch, the chief scientist for software engineering at IBM Research. But whether astute or just absurd, Altman’s comment is not unique: Technologists have been invoking and adjusting Moore’s Law to suit their own ends for decades. Indeed, when Gordon Moore himself died last month at the age of 94, the legendary engineer and executive, who in his lifetime built one of the world’s largest semiconductor companies and made computers accessible to hundreds of millions of people, was remembered most of all for his prediction—and also, perhaps, for the optimism it inspired.

Which makes sense: Moore’s Law defined at least half a century of technological progress and, in so doing, helped shape the world as we know it. It’s no wonder that all manner of technologists have latched on to it. They want desperately to believe—and for others to believe—that their technology will take off in the same way microchips did. In this impulse, there is something telling. To understand the appeal of Moore’s Law is to understand how a certain type of Silicon Valley technologist sees the world.


The first thing to know about Moore’s Law is that it isn’t a law at all—not in a legalistic sense, not in a scientific sense, not in any sense, really. It’s more of an observation. In an article for Electronics magazine published 58 years ago this week, Moore noted that the number of transistors on each chip had been doubling every year. This remarkable progress (and associated drop in costs), he predicted, would continue for at least the next decade. And it did—for much longer, in fact. Depending on whom you ask and how they choose to interpret the claim, it may have held until 2005, or the present day, or some point in between.
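
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (my illustration, not anything from Moore’s article), assuming the roughly 64 components per chip that his 1965 article reported as a starting point:

    # Annual doubling from the ~64 components per chip Moore reported in 1965.
    # Illustrative sketch only; the starting figure is from his Electronics article.
    for year in range(1965, 1976):
        components = 64 * 2 ** (year - 1965)
        print(year, components)
    # By 1975 this reaches 65,536, in line with the roughly 65,000 components
    # Moore's article predicted for that year.

Ten doublings multiply the count a thousandfold, which is why even modest tweaks to the doubling period change the prediction so dramatically.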

Carver Mead, an engineering professor at the California Institute of Technology, was the first to call Moore’s observation a “law.” By the early 1980s, that phrase—Moore’s Law—had become the slogan for a nascent industry, says Cyrus Mody, a science historian at Maastricht University, in the Netherlands, and the author of The Long Arm of Moore’s Law. With the U.S. economy having spent the better part of the past decade in the dumps, he told me, a message of relentless progress had PR appeal. Companies could say, “Look, our industry is so consistently innovative that we have a law.”

This wasn’t just spin. Microchip technology really had developed according to Moore’s predicted schedule. As the tech got more and more intricate, Moore’s Law became a sort of metronome by which the industry kept time. That rhythm was a major asset: Silicon Valley executives made business-strategy decisions based on it, David C. Brock, a science historian who co-wrote a biography of Gordon Moore, told me.

For a while, the annual doubling of transistors on a chip seemed like magic: It happened year after year, even though no one was shooting for that specific target. At a certain point, though, when the industry realized the value of consistency, Moore’s Law morphed into a benchmark to be reached through investment and planning, not simply a phenomenon to be taken for granted, like gravity or the tides. “It became a self-fulfilling prophecy,” Paul Ceruzzi, a science historian and a curator emeritus at the National Air and Space Museum, told me.

Still, for almost as long as Moore’s Law has existed, people have foretold its imminent demise. If they were wrong, that’s in part because Moore’s original prediction has been repeatedly tweaked (or outright misconstrued), whether by extending his predicted doubling time, by stretching what he meant by a single chip, or by focusing on computing power or performance instead of the raw number of transistors. Once Moore’s Law had been fudged in all these ways, the floodgates opened to more extravagant and brazen reinterpretations. Why not apply the law to pixels, to drugs, to razor blades?

An endless run of spin-offs ensued. Moore’s Law of cryptocurrency. Moore’s Law of solar panels. Moore’s Law of intelligence. Moore’s Law for everything. Moore himself used to quip that his law had come to stand for just about any supposedly exponential technological growth. That’s another law, I guess: At every turn of the technological-hype cycle, Moore’s Law will be invoked.


The reformulation of Moore’s observation as a law, and then its application to a new technology, creates an air of Newtonian precision—as if the new technology’s exponential growth were a certainty of nature. It transforms something you want to happen into something that will happen—technology as destiny.

For decades, that shift has held a seemingly irresistible appeal. More than 20 years ago, the computer scientist Ray Kurzweil fit Moore’s Law into a broad argument for the uninterrupted exponential progress of technology over the past century—a trajectory that he still believes is drawing us toward “the Singularity.” In 2011, Elon Musk professed to be searching for a “Moore’s Law of Space.” A year later, Mark Zuckerberg posited a “social-networking version of Moore’s Law,” whereby the rate at which users share content on Facebook would double every year. (Look how that turned out.) More recently, in 2021, Changpeng Zhao, the CEO of the cryptocurrency exchange Binance, cited Moore’s Law as evidence that “blockchain performance should at least double every year.” But no tech titan has been quite as explicit in their assertions as Sam Altman. “This technological revolution,” he says in his essay, “is unstoppable.” No one can resist it. And no one can be held responsible.

Moore himself did not think that technological progress was inevitable. “His whole life was a counterexample to that idea,” Brock told me. “Quietly measuring what was actually happening, what was actually going on with the technology, what was actually going on with the economics, and acting accordingly”—that was what Moore was about. He constantly checked and rechecked his analysis, making sure everything still held up. You don’t do that if you believe you have hit upon an ironclad law of nature. You don’t do that if you believe in the unstoppable march of technological progress.

Moore recognized that his law would eventually run up against a brick wall, some brute fact of physics that would halt it in its tracks—the size of an atom, the speed of light. Or, worse, that the exponential growth would bring catastrophe before physics stopped it. “The nature of exponentials is that you push them out,” he said in a 2005 interview with Techworld magazine, “and eventually disaster happens.”

Exactly what sort of disaster Moore envisioned is unclear. Brock, his biographer, suspects that it might have been ecological ruin; Moore was, after all, a passionate conservationist. Perhaps he viewed microchips as a sort of invasive species, multiplying and multiplying at the expense of the broader human ecosystem. Whatever the particulars, he was an optimist, not a utopian. And yet, the law bearing his name is now cited in support of a worldview that was not his own. That is the tragedy of Moore’s Law.

Jacob Stern is a contributing writer at The Atlantic.