How Big Tech Got So Damn Big

If Silicon Valley CEOs were all exceptional, you’d expect the industry itself to be unique in its success and durability. It’s not.

“Tech exceptionalism” is the sin of thinking that the normal rules don’t apply to technology.

The idea that you can lose money on every transaction but make it up with scale (looking at you, Uber)? Pure tech exceptionalism. The idea that you can take an unjust system like racist policing practices and fix it with technology? Exceptionalism. The idea that tech itself can’t be racist because computers are just doing math, and math can’t be racist? Utter exceptionalism.

Tech critics are usually good at pointing out tech exceptionalism when they see it, but they have one tech exceptionalist blind spot of their own: one place where tech boosters and critics come together to sing the same song.

Both tech’s biggest boosters and its most savage critics agree that tech leaders—the Zuckerbergs, Jobses, Bezoses, Musks, Gateses, Brins, and Pages—are brilliant. Now, the boosters will tell you that these men are good geniuses whose singular vision and leadership have transformed the world, while the critics will tell you that these are evil geniuses whose singular vision and leadership have transformed the world … for the worse.

But one thing they all agree on: These guys are geniuses.

I get it. The empires our tech bro overlords built are some of the most valuable, influential companies in human history. They have bigger budgets than many nations. Their users outnumber the population of any nation on Earth.

What’s more, it wasn’t always thus. Prior to the mid-2000s, tech was a dynamic, chaotic roil of new startups that rose to prominence and became household names in a few short years, only to be vanquished just as they were peaking, when a new company entered the market and toppled them.

Somehow, these new giants—the companies that have, in the words of New Zealand software developer Tom Eastman, transformed the internet into “a group of five websites, each consisting of screenshots of text from the other four”—interrupted that cycle of “disruption.” They didn’t just get big, they stayed big, and then they got bigger.

How did these tech companies succeed in maintaining the dominance that so many of their predecessors attained but failed to keep? Was it their vision? Was it their leadership?

Nope.

If tech were led by exceptional geniuses whose singular vision made it impossible to unseat them, then you’d expect that the structure of the tech industry itself would be exceptional. That is, you’d expect that tech’s mass-extinction event, which turned the wild and woolly web into a few giant websites, was unique to tech, driven by those storied geniuses.

But that’s not the case at all. Nearly every industry in the world looks like the tech industry: dominated by a handful of giant companies that emerged out of a cataclysmic, 40-year die-off of smaller firms which either failed or were folded into the surviving giants.

Here’s a partial list of concentrated industries from the Open Markets Institute—industries where between one and five companies account for the vast majority of business: pharmaceuticals, health insurers, appliances, athletic shoes, defense contractors, book publishing, booze, drug stores, office supplies, eyeglasses, LCD glass, glass bottles, vitamin C, car parts, bottle caps, airlines, railroads, mattresses, Lasik lasers, cowboy boots, and candy.

If tech’s consolidation is down to the exceptional genius of its leaders, then they are part of a bumper crop of exceptional geniuses who all managed to rise to prominence in their respective firms and then steer them into positions where they crushed, bought, or sidelined all their competitors over the past 40 years or so.

Occam’s razor posits that the simplest explanation is most likely to be true. For that reason, I think we can safely reject the idea that sunspots, water contaminants, or gamma rays caused an exceptional generation of business leaders to be conceived all at the same time, all over the world.

Likewise, I am going to discount the possibility that, in the 1970s and 1980s, aliens came to Earth and knocked up the future mothers of a new subrace of elite CEOs whose extraterrestrial DNA conferred upon them the power to steer companies to total industrial dominance.

Not only do those explanations stretch the imagination, but they also ignore a simpler, far more tangible explanation for the incredible die-off of businesses in every industry. Forty years ago, countries all over the world altered the basis on which they enforced their competition laws—often called “antitrust” laws—to be more tolerant of monopolies. Forty years later, we have a lot of monopolies.

These facts are related.

Let’s have a quick refresher course on antitrust law, shall we? Antitrust was born in the late 19th century, when American industries had been consolidated through “trusts.” A trust is an organization that holds something of value “in trust” for someone else. For example, you might live near a conservation area that a group of donors bought and handed over to a trust to preserve and maintain. The trust is run by “trustees”—directors who oversee its assets.

In the 19th century, American robber barons got together and formed trusts: For example, a group of railroad owners could sell their shares to a “railroad trust” and become beneficiaries of the trust. The trustees—the same robber barons, or their representatives—would run the trust, deciding how to operate all these different, nominally competing railroads to maximize the return to the trustees (the railroads’ former owners).

A trust was a way of merging all the dominant companies in a single industry (or even multiple related industries, like oil refineries, railroads, pipelines, and oil wells) into a single company, while maintaining the fiction that all of these companies were their own businesses.

Any company that didn’t sell to the trust was quickly driven to its knees. For example, if you owned a freight company and wouldn’t sell out to the trust, all the railroads you depended on to carry your freight would charge you more than they’d charge your competitors for carrying the same freight—or they’d refuse to carry your freight at all.

What’s more, any business that supplied a trust would quickly find itself stripped of its profit margins and either bankrupted and absorbed by the trust, or allowed to eke out bare survival. If you supplied coal to the railroad trust, all the railroads would refuse to buy your coal unless you knocked your prices down until you were making next to nothing—or losing money. Hell, if you got too frisky, they might refuse to carry your coal from the coal mine to the market, and then where’d you be?

Enter the trustbusters, led by Senator John Sherman, author of the 1890 Sherman Act, America’s first antitrust law. In arguing for his bill, Sherman said to the Senate: “If we will not endure a King as a political power we should not endure a King over the production, transportation, and sale of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.”

In other words, when a company gained too much power, it became the same kind of kingly authority that the colonists overthrew in 1776. Government “of the people, by the people, and for the people” was incompatible with concentrated corporate power from companies so large that they were able to determine how people lived their lives, made their incomes, and structured their cities and towns.

This theory of antitrust is called the “harmful dominance” theory, and it worked. In the early part of the 20th century, the largest commercial empires—such as John D. Rockefeller’s Standard Oil Company—were shattered by the application of the Sherman Act. As time went by, other antitrust laws like the Clayton Act and the FTC Act reaffirmed the harmful dominance approach to antitrust: the idea that the law should protect the public, workers, customers, and business owners from any harms resulting from excessive corporate power.

Not everybody liked this approach. Monopoly is a powerful and seductive idea. Starting a business often involves believing that you know something other people don’t, that you can see something others can’t see. Building that business up into a success only bolsters that view, proving that you possess the intellect, creativity, and drive to create something others can’t even conceive of.

In this light, competition seems wasteful: Why must you expend resources fighting off copycats and pretenders when you could be using that same time delighting your customers and enriching your shareholders? As Peter Thiel puts it, “Competition is for losers.”

The competition-is-for-losers set never let go of their dream of being autocrats of trade. They dreamed of a world where the invisible hand would tenderly lift them and set them down atop a throne of industry, from which they could direct the lives of millions of lesser beings who don’t know what they want until a man of vision shows it to them.

These autocrats-in-waiting were already wealthy, and they bankrolled fringe kooks who had very different ideas about the correct administration of antitrust.

Chief among these was Robert Bork, best known for having served as Nixon’s solicitor general, a role in which he was complicit in a string of impeachable crimes against the American people. That record caught up with him years later, when he flunked his Senate confirmation hearing after Ronald Reagan tried to elevate him to the Supreme Court.

Bork had some wild ideas. He agreed with the autocrat set that the antitrust law should permit them to seize control over the nation, but then, so did a lot of weirdo Renfields who sucked up to capitalism’s most powerful bloodsuckers.

What set Bork apart was his conviction that America’s antitrust laws already celebrated monopolies as innately efficient and beneficial. According to Bork’s theories, the existing antitrust statutes recognized that most monopolies were a great deal for “consumers,” and that if we only read the statutes carefully enough, and reviewed the transcripts of the legislative debates in fine-enough detail, we’d see that Congress never set out to block companies from gaining enough power to become autocrats of trade—rather, they only wanted the law to step in when the autocrats abused their power to harm “consumers.”

This is the “consumer welfare” standard, a theory as economically bankrupt as it is historically unsupportable. Let’s be clear here: The plain language of America’s antitrust laws shows that Congress wanted to block monopolies because it worried about the concentration of corporate power, not just the abuse of that power. This is inarguable: Think of John Sherman stalking the floor of the Senate, railing against autocrats of trade, declaiming that “we should not endure a King over the production, transportation, and sale of the necessaries of life.” These are not the statements of a man who liked most monopolies and merely sought to restrain the occasional monopolist who lost sight of his duty to make life better for the public.

Setting aside this fantastical alternate history, Bork’s theories can sound plausible—at first. After all, if a company that buys up its suppliers or merges with its rivals can attain “economies of scale” and new efficiencies from putting all those businesses under one roof, then we, the consumers, might find ourselves enjoying lower prices and better products. Who wouldn’t want that?

But “consumer” is only one of our aspects in society. We are also “workers,” “parents,” “residents,” and, not least, “citizens.” If our cheaper products come at the expense of our living wage, or the viability of our neighborhoods, or the democratically accountable authority of our elected representatives, have we really come out ahead?

“Consumer” is a truly undignified self-conception. To be a consumer is to be a mere ambulatory wallet, “voting with your dollars” to acquire life’s comforts and necessities, without regard to the impact their production has on your neighborhood, your environment, your politics, or your kids’ futures.

Maybe you disagree. Maybe you find enormous pleasure in “retail therapy” and revel in the plethora of goods on offer, supply chains permitting. Maybe the idea of monopolists finding ways to deliver lower prices and higher quality matters more to you than the working conditions in the factories they emerged from or the character of your town’s Main Street. Maybe you think that you might secretly be a Borkist.

Sorry, you’re not a Borkist. Bork’s conception of maximizing consumer welfare by lowering prices and increasing quality may sound like a straightforward policy. When a company merges with a rival or buys up a little upstart before it can become a threat, all a regulator has to do is ask, “Did this company raise prices or lower quality, or is it likely to do so?”

That does sound like a commonsense proposition, I grant you. But for Bork—and his coconspirators at the far-right University of Chicago School of Economics—“consumer welfare” was no mere matter of watching to see whether prices rose after companies formed monopolies.

The Chicago School pointed out that sometimes prices go up for their own reasons: rises in the price of oil or other key inputs, say, or logistical snarls, foreign exchange fluctuations, or rent hikes on key facilities. Good monopolies don’t want to raise prices, they reasoned, so most of the time, when a monopolist raised their prices it would be because they were caught in a squeeze by rising costs. The last thing the government should do to these poor, beleaguered monopolists was kick them while they were down by accusing them of price gouging even as they were scrambling to get by during an oil crisis.

Of course, the Chicago Boys admitted, there were some bad monopolists out there, and they might raise prices just because they know they don’t have to worry about competitors. The Chicago School used complicated mathematical models to sort the good monopolists from the bad ones.

These models were highly abstract and really only comprehensible to acolytes of the consumer welfare cult, who would produce them on demand—for a fee. If you ran a big business and wanted to merge with your main competitor, you could pay a Chicago School economist to build a model that would prove to the regulators at the DOJ and FTC that this would result in a good monopoly.

But what if, after this good merger was approved, prices went up anyway? No problem: If the DOJ came calling, you could hire a Chicago School economist who’d whip up a new model, this one proving that all the price hikes were due to “exogenous factors” and not due to your price gouging.

No matter how large a post-merger company turned out to be, no matter how bald its post-merger shenanigans were, these economic models could absolve the company of any suspicion of wrongdoing. And since only Chicago-trained economists understood these models, no one could argue with their conclusions—especially not the regulators they were designed to impress.

Despite the superficial appeal of protecting “consumer welfare,” this antitrust theory was obviously deficient: based on ahistorical fiction and contrary to the letter of the law and all the jurisprudence to date. Plus, the whole idea that the purpose of anti-monopoly law was to promote good monopolies was just … daft.

But consumer welfare won. Monopolies have always had their fans, people who think that business leaders are Great Men of History, singular visionaries who have the power to singlehandedly revolutionize society and make us all better off if we just get out of their way and let them do their thing.

That’s why they railed against “wasteful competition.” Competition is wasteful because it wastes the time of these Ayn Rand heroes who could otherwise be focusing on delivering a better future for us all.

Unsurprisingly, some of the most ardent believers in this story are already rich. They are captured by the tautology of a providential society: “If I am rich, it must be because I am brilliant. How can you tell I’m brilliant? Well, for starters, I’m rich. Having proven my wealth and brilliance, it’s clear that I should be in charge of things.”

The Chicago School had deep-pocketed backers who kept it awash in money, even though its ideas were on the fringes. Chicago School archduke Milton Friedman described the movement’s strategy:

Only a crisis—actual or perceived—produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.

Friedman believed that if the movement could simply keep plugging, eventually a crisis would occur and its ideas would move from the fringes to the mainstream. Friedman’s financial backers agreed, and bankrolled the movement through its decades in the wilderness.

The oil crisis of the 1970s was the movement’s opportunity. Energy shortages and inflation opened a space for a new radical politics, and around the world, a new kind of far-right leader took office: Ronald Reagan in the USA, Margaret Thatcher in the UK, Brian Mulroney in Canada.

Not all of the political revolutions of the 1970s were peaceful: Augusto Pinochet staged a coup in Chile, deposing elected president Salvador Allende, slaughtering thousands of his supporters, imprisoning 80,000 ordinary people in gulags, and torturing tens of thousands more. Pinochet was supported—financially, morally, and militarily—by democratically elected right-wing leaders and the Chicago School of Economics, who sent a delegation to Chile to help oversee the transformation of the country based on their “ideas lying around.”

This global revolution marked the beginning of the neoliberal era, and with it, generations of policy that celebrated the ultrarich as the ordained leaders of our civilization.

Throughout the neoliberal era, Bork’s antitrust theories have dominated all over the world. The Chicago School’s financial backers had invested wisely: the rise and rise of Chicago economics has shifted trillions in wealth to the already wealthy, while workers’ wages have stagnated, the middle class has dwindled, and millions of residents of the world’s wealthiest countries have slipped into poverty.

Bork’s investors consolidated their gains. They sponsored economics chairs and whole economics departments and created the Manne seminars, an annual junket in Florida where federal judges were treated to luxury accommodations and “continuing education” workshops on Bork’s unhinged theories.

Forty percent of the US federal judiciary graduated from the Manne seminars, and empirical analysis of their rulings shows that they took Bork’s consumer welfare theories to heart, consistently finding that monopolies were “efficient” and that mergers should be waved through and anticompetitive conduct forgiven. Even the judges who didn’t attend a Manne seminar were captured by its influence: After more than 40 years of Borkism, any judge hearing any antitrust case is briefed with decades’ worth of precedent based on the consumer welfare theory.

Bork and his defenders assured us that decades of sustained monopolism would produce an incredible bounty for all: efficient, gigantic, vertically integrated firms offering goods and services at fantastically low prices, to the benefit of all.

They were half-right. Think back to that Open Markets Institute list of highly concentrated industries: pharmaceuticals, health insurers, appliances, athletic shoes, defense contractors, book publishing, booze, drug stores, office supplies, eyeglasses, LCD glass, glass bottles, vitamin C, car parts, bottle caps, airlines, railroads, mattresses, Lasik lasers, cowboy boots, and candy. Some of these industries did deliver lower prices, at least some of the time. Some of the largest firms in these industries are “efficient,” in the sense of overcoming logistical challenges that would have seemed fanciful in Bork’s day. Amazon fits both of those bills: cheap goods, delivered quickly.

But all of this comes at a price: the rise of “autocrats of trade,” unelected princelings whose unaccountable whims dictate how we live, work, learn, and play. Apple’s moderators decide which apps you can use, and if they decline to list an educational game about sweatshop labor or an app that notifies you when a US drone kills a civilian overseas, well, that’s that.

Google decides which search results we see—and which ones we don’t. If Google doesn’t prioritize a local business, it might fail—while Google’s decision to feature a rival will make it a huge success. The same goes for newspapers, blogs, and other sites—if Google downranks your newspaper, it effectively ceases to exist. During the Covid lockdowns, Google colluded with review websites and app-based delivery services to trick people who wanted to order in their dinners. Search for your local restaurant on Google and you’d get a phone number for a scammy boiler room where low-wage workers would pretend to be staff at your local eatery and take your order. Then they’d call up the restaurant (they had the real number) and place your order with them. Restaurants that tried to maintain their own delivery staff and set their own prices and terms for delivery found themselves nonconsensually opted in to predatory delivery apps.

Facebook decides which news you see—and which news you don’t. If Facebook—or Instagram, or WhatsApp—kicks you off their platform, it can cost you your artistic career, or access to customers for your small business, or contact with your distant family, or the schedule and carpool for your kid’s Little League games.

Microsoft, Airbnb, Uber, LinkedIn … the largest tech firms structure our lives in myriad ways, without regard to our well-being, without fear of competition, and largely without regulation (for now).

Of course, brick-and-mortar retail is subject to a high degree of concentration and exercises enormous power over our lives, too. Back when all of American bookselling was collapsing into two chains—Borders and Barnes & Noble—the buyers for those chains held every writer’s career in their hands. Just announcing that a writer’s book sales were too low to warrant inclusion on either chain’s shelves would trigger that writer’s immediate defenestration by most publishers, or, if the writer had a very supportive editor, a career reboot under a pen name.

Borders is long gone today. Barnes & Noble is struggling. There’s a scrappy independent bookselling trade, small stores run by committed booksellers who effectively take a vow of poverty to serve their communities—but they have to buy all their books from a single distributor, Ingram, which bought out their largest competitor, Baker & Taylor, in 2019. The power of two chain buyers to make or break a writer’s career has been replaced by the power of one distributor to do so.

This is a microcosm for so many industries: Walmart crushed Main Street retail, then used its monopsony power to decide which products it would carry, and on what terms, driving some companies under by refusing to carry them, and others by insisting on discounts that the desperate companies ultimately couldn’t afford.

But of course, all of this is just a sideshow compared to Amazon’s effect on bookselling and all other forms of retail. The company’s market power—captured by selling products at a loss for years, burning investor capital to force others out of business—is reinforced by its Prime program, which holds its best customers hostage to the $150/year sunk cost.

One company now has the power to wreck its workers’ bodies (in many towns, Amazon is the only major employer), choke our streets with delivery vans, and ruin small businesses by inviting counterfeits or cloning their products. That same company decides which books are sold—either by refusing to carry them, or by ranking them low on search results.

Amazon is an enormously powerful autocrat of trade. In some ways, it is unexceptional: Autocrats of trade wield similar power in the global shipping industry (controlled by just four firms), meatpacking (four firms), and baby formula (four companies).

But Amazon is different, as are Google, Apple, Microsoft, Salesforce, Uber, DoorDash, Facebook, and the rest of the Big Tech stable.

They’re different for two reasons: First, because they control the means of computation. These companies rule our digital world, the place where we find one another, form communities, and mobilize in solidarity to take collective action. Winning tech back isn’t more important than preventing runaway climate change or ending gender-based violence and discrimination, but it’s hard to imagine how we’ll do either—or anything else of significance—without digital infrastructure to hold us together. Fixing tech isn’t more important than fixing everything else, but unless we fix tech, we can forget about winning any of those other fights.

Second, tech is built on digital computers and networks, those strange hyper-objects I described [in the book’s introduction]. For all that our computers take on many forms—blenders, watches, phones, cars, airplanes, thermostats—they are all, at root, functionally equivalent.

The modern computer went from a specialized instrument of giant companies, militaries, and governments to a ubiquitous, invisible part of our social fabric in just a few decades. The mechanism behind this explosive growth is intimately connected to the deep nature of computers.

Before World War II, we didn’t have computers—we had electromechanical tabulating engines, giant machines designed to solve a single kind of problem, like calculating a ballistics table or an actuarial table for an insurance company.

Turing’s breakthrough—well, one of Turing’s breakthroughs (well, one of the breakthroughs by Turing and the exiled Polish mathematicians and British boffins at Bletchley Park)—was to conceptualize and refine the universal computer: a gadget that could run any program, provided it was expressed in valid symbolic logic (a collection of “valid symbolic logic” is also known as “a computer program”).

Across the Atlantic, John von Neumann (and a collection of exiled Hungarian mathematicians, as well as assorted brilliant people at the Princeton Institute) created and built the first “von Neumann machine”—a physical instantiation of a Universal Turing Machine.

The rest, as they say, was history. The universality of the general-purpose computer was both profound and powerful. Any computer can run any program we can write, but slower computers with less memory might take a very long time to execute it. The hand-built computers assembled by von Neumann and his team (and their kids: the children of the Princeton Institute’s fellows were pressed into service over their summer breaks, made to hand-wind copper wire around insulators to form the first memory cores) might take millions of years to boot up a copy of Photoshop. But, given enough time and enough electricity—and enough maintenance—boot it up they will. Eventually.
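To make that universality concrete, here is a toy sketch in Python (my illustration, not anything from Turing’s paper or the Princeton machine): a single, dumb loop that can run any program expressed as a table of symbolic rules. Swap in a different rule table and the same loop computes something else entirely.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """A minimal Turing-machine interpreter. The 'program' is just a rule table
    mapping (state, symbol) -> (symbol_to_write, move, next_state); this one
    loop can execute any such program, which is the whole point of universality."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy program: flip every bit on the tape, then halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "101100"))  # prints 010011
```

A physical computer is enormously faster, but in principle it is doing nothing this loop couldn’t do, given enough time and electricity.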

Lots of people set out to make computers a little faster or a little cheaper, hoping to solve a problem that mattered to them, whether an engineering challenge or a record-keeping one. Every time they succeeded, everyone else who was using computers to solve their own problems got the benefit of that breakthrough, as their own problems got faster and cheaper to solve.

What’s more, as computers got faster and cheaper, the range of problems they could cost-effectively solve expanded. Each improvement in computers added to the pool of people who were using computers and trying to improve them. A computing improvement driven by a desire to process astronomy data made it easier to model floodplains, and also to send email, and also to animate sprites in a video game.

Universality created an army of partisans, all fighting for better computers. They wanted faster, cheaper computers for different reasons, but they all wanted faster, cheaper computers. Because computers were universal, they moved into every industrial sector, every field of artistic endeavor, every form of leisure, every scientific discipline.

This collective demand for better computers justified unimaginably vast R&D expenditures, whose triumphs were faster, cheaper computers that found their way into still more corners of our lives. This is what “software is eating the world” really means: The positive externalities of computer improvements set up a virtuous cycle where improvements begat partisans for still more improvements, which created still more partisans.

Universality isn’t just a happy feature we’ve engineered into our computers: It’s an inescapable part of their nature. It would be great if we could design a computer that could only run some programs: For example, if we could invent a printer (which is just a computer hooked up to a system for spraying ink on paper) that could only run the “print documents” program and not the “be infected with a virus that metastasizes across the network and infects all the PCs on it” program.

But we can’t. Our printers are universal computers, and so are our thermostats and smart watches and cars.

All of this is to explain how computers are, indeed, exceptional. Computers are exceptional because they are universal, and that inescapable universality means that they have intrinsically low switching costs.

That may sound very highfalutin and technical. Admittedly, it requires a grasp of technical concepts from both economics and computer science. But it’s not that complicated, as you’ll see from this example of how these two principles saved Apple from destruction in the early 2000s.

Back then, Apple had a problem. Microsoft, a convicted monopolist, had a 95 percent share of the desktop operating system market with its Windows product. Microsoft exploited that operating system monopoly to win a similar monopoly in productivity tools: the Microsoft Office suite consisting of Word, Excel, and PowerPoint.

Microsoft had long since clobbered all the other Windows productivity programs through dirty tricks: as the old internal company motto had it, “DOS isn’t done until Lotus won’t run.” Lotus 1-2-3 was an early spreadsheet program and a major competitor to Microsoft Excel; it was well understood that Microsoft tweaked its new operating system releases so that people who upgraded would no longer be able to launch Lotus 1-2-3 and would have to wait until a new version was released that coped with the new incompatibilities Microsoft introduced.

That meant that nearly everyone who used a computer used Windows, and nearly everyone who used Windows used Microsoft Office.

That was a big problem for Apple.

Back then, I was a kind of itinerant chief information officer, building and managing networks for small- and medium-size companies. Most of the computers I managed were PCs running Windows, but a few of my users—the designers and sometimes the CEOs—used Macs.

These Mac users’ colleagues needed to collaborate with them on word processor files, spreadsheets, and slideshows created in Microsoft Office. They’d send them over—on removable disks, or using internal email or file transfers—and the Mac users would open them using Microsoft Office for the Mac.

Or at least, they’d try to open them. Office for the Mac was a terrible piece of software. Much of the time, it wouldn’t be able to read the files sent by Windows users—and vice versa, as files saved out of the Mac version of Office would be rejected by Office for Windows as “corrupted.”

This was a pain in the ass for me and my users, but it was a nightmare for Apple, because the way I dealt with this—the way thousands of IT managers like me dealt with it—was to buy new PCs for my designers and stick them on their desks next to their Macs. These PCs were dedicated Office workstations.

But that was a lot of work, and eventually those designers—and even those CEOs—agreed that two computers was one too many. I beefed up the graphics cards in the Windows users’ machines, installed Adobe Photoshop and QuarkXPress for Windows, and got rid of their Macs altogether.

Apple knew this was happening, and understood that it was a deliberate strategy. Apple cofounder and CEO Steve Jobs had a solution: He tasked some of his technical staff with reverse engineering Microsoft Office and creating a rival product called iWork, which consisted of three programs—Pages, Numbers, and Keynote—that could read and write the files created by Microsoft’s Word, Excel, and PowerPoint.

At that instant, everything changed. Mac users no longer had to forfeit the ability to collaborate with the Windows users, who accounted for 95 percent of the computing world, as a condition of using an Apple product. Instead, they attained seamless interoperability—their files Just Worked for Windows users, and Windows users’ files Just Worked for them.

And once iWork was in the world, Microsoft users suddenly entered a new reality: They could give up Windows and buy Macs and take all their files with them, and those files would Just Work, too.

You see, Microsoft had network effects on its side, and it used them to get big. Every Windows user was an Office user, and every Office user produced documents that other people wanted to read, edit, and collaborate on. Every Office document created was another reason to become a Windows user, another file that you could potentially learn from, edit, and improve. Every Windows user created more Office documents.

The corollary was that leaving the Windows world for a Mac was very costly indeed. Your own files would struggle to make the transition with you—many would be forever unreadable. The 95 percent of the computing world who were Windows users would struggle to collaborate with you, thanks to Microsoft’s terrible Mac software.

Even if you didn’t like Windows, even if you preferred the Mac, there were billions of reasons to stick with Microsoft’s products—billions of users, and trillions of documents. 

Microsoft used network effects to build a winner-take-all system that sucked in new users and shackled them to its platform with high switching costs. If Microsoft’s monopoly had been on some physical product—say, a proprietary lightbulb screw, or a proprietary way of connecting attachments to kitchen mixers—then that might have been the end of the story.

After all, even if you’re a skilled machinist who can make adapters to plug your proprietary mixer attachments into a rival’s mixer, that only solves your problem. Your neighbor is still stuck buying a Microsoft mixer in order to preserve their investment in attachments. But software is different. It is universal. Tech is exceptional.

The fact that all computers are universal, all capable of running every program, meant that there would always be a way to write a Mac program that could read and write Microsoft Office files better than Microsoft Office for Mac could. And once that program existed in the world, it could be given away or sold to anyone who had a Mac and an internet connection.

If you’re a Computer User of a Certain Age, you know what happened next. Apple launched a cheeky, ballsy ad campaign called “Switch,” which featured Windows users who’d ditched Microsoft and bought Macs, extolling the simplicity of reading and writing their files with iWork, praising the ease of collaborating with Windows users who hadn’t made the switch (yet). From my perspective as an IT professional who was, at the time, writing purchase orders for millions of dollars’ worth of workplace computers every year, iWork saved Apple.

Microsoft used network effects to get big, and used high switching costs to stay big. Once Apple lowered those switching costs, the network effects no longer mattered so much—indeed, they became a double-edged sword. Every person who got stuck inside Microsoft’s walled garden was a reason for others to join—but every person who escaped that walled garden became a reason for others to leave, too.
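Here is a toy simulation of that double-edged sword, with entirely made-up numbers (my sketch, not anything from the book or from real market data): when leaving a platform means losing compatibility with everyone still on it, nobody moves, no matter how much they’d prefer the alternative; remove that penalty and personal preference takes over.

```python
import random

def simulate(interop, n_users=10_000, rounds=20, friction=0.2, seed=1):
    """Toy model of platform choice (all parameters invented for illustration).
    Each user weighs their personal preference for the challenger platform
    against the friction of switching, plus -- when there is no interop --
    the penalty of losing compatibility with everyone else's files."""
    random.seed(seed)
    # Start roughly where the desktop market was: 5% on the challenger.
    on_challenger = [i < n_users * 0.05 for i in range(n_users)]
    preference = [random.uniform(-1.0, 1.0) for _ in range(n_users)]
    for _ in range(rounds):
        share = sum(on_challenger) / n_users
        # Without interop, switching cuts you off from the (1 - share) of the
        # world still on the incumbent's formats.
        network_penalty = 0.0 if interop else (1.0 - share) - share
        for i in range(n_users):
            if not on_challenger[i] and preference[i] - network_penalty > friction:
                on_challenger[i] = True
    return sum(on_challenger) / n_users

print("locked in (no interop):", simulate(interop=False))  # stays near 5%
print("after interop:         ", simulate(interop=True))   # preference decides
```

The interesting knob in this sketch is not the users’ preferences, which never change, but the compatibility penalty. That penalty is what interoperability attacks.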

Remember, there was nothing technical Microsoft could have done to prevent Apple from reverse engineering its files and making iWork. The deep universality of computers meant that Apple would always be able to blow a hole in Microsoft’s walled garden.

Which is not to say that Microsoft didn’t try. The old Office file formats were a notoriously gnarly hairball of obfuscation and cruft. Even Microsoft struggled to maintain compatibility with all the different versions of Office it had pushed out over the decades.

But here’s the kicker: After Apple successfully launched iWork, Microsoft gave up. It stopped obfuscating Office, and instead, took those Office file formats to a multistakeholder standardization body, and helped create an open, public standard for reading and writing Office files. Today, that standard is everywhere: Google Docs, LibreOffice, iWork, Office, and a million websites that can ingest your Office files and turn them into something that lives on the internet.
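That standard (ECMA-376/ISO 29500, the “Office Open XML” formats) is concrete enough that you can poke at it yourself: a modern .docx file is just a ZIP archive of XML parts laid out according to the published spec. Here is a short sketch using only Python’s standard library; the file path is whatever document you happen to have on hand.

```python
import sys
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace defined by the ECMA-376 / ISO 29500 standard.
W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_text(docx_path):
    """Pull the plain text out of a .docx file with no special software."""
    with zipfile.ZipFile(docx_path) as zf:        # a .docx is just a ZIP archive
        xml_bytes = zf.read("word/document.xml")  # the document body is plain XML
    root = ET.fromstring(xml_bytes)
    # Paragraphs are <w:p> elements; runs of text live in <w:t> elements.
    paragraphs = []
    for para in root.iter(f"{W_NS}p"):
        paragraphs.append("".join(node.text or "" for node in para.iter(f"{W_NS}t")))
    return "\n".join(paragraphs)

if __name__ == "__main__":
    print(extract_text(sys.argv[1]))
```

Real documents keep styles, images, and metadata in other parts of the archive, but the point stands: once the format was documented, anyone could write a reader or a writer, and just about everyone did.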

The consumer welfare standard led to industrial concentration across the board, so in that regard, tech is not exceptional. But tech is exceptional in that it is intrinsically interoperable, which means that we can use interop to make Big Tech a lot smaller, very quickly—we can attack network effects by reducing switching costs.

Making it easier for technology users—everyone—to leave Big Tech platforms for smaller tech created by co-ops, nonprofits, tinkerers, and startups will hasten the day that we can bring Big Tech to heel in other ways. Siphoning off Big Tech’s users means reducing its revenues, which are otherwise fashioned into lobbying tools. It also robs Big Tech of its partisans—for example, the small businesses who stand up for Amazon even though it is slowly destroying them, because they can’t reach their customers without it. If they could leave Amazon but still reach its customers, they’d stop telling lawmakers to leave poor Amazon alone.

Finally, it will trigger an exodus of Big Tech’s most valuable resource: its technical workforce. Techies are in demand and can exploit that bargaining power to command high wages and perks, but Big Tech’s takeover has substantially dampened techies’ dreams of starting their own rival businesses where they don’t have to answer to a manager. Interop means that you can quit your awful Facebook job and create a rival service that plugs into Facebook, providing a nonexploitative alternative for the users you signed on to help. 

Starved of cash, allies, and engineers, Big Tech will be a soft target for other, stiffer forms of regulation, like breakups. Starved of cash, Big Tech will struggle to buy up the small competitors that could someday grow to pose a threat to them. Tech is exceptional because digital computers and networks are universal, and universal things are interoperable, and interoperability lowers switching costs. Monopoly is an elephant, and you eat an elephant one bite at a time. For all these reasons, I think tech should be antitrust’s first bite.

But that’s not the only way that tech is exceptional. Our digital tools aren’t just how corporations and governments surveil and control us—they’re also how we form communities and coordinate our tactics to fight back.

If we someday triumph over labor exploitation, gender discrimination and violence, colonialism, and racism, and snatch a habitable planet from the jaws of extractive capitalism, it will be thanks to technologically enabled organizing. From street protests to mutual aid funds, from letter-writing to organizing sit-ins, from blockades to strikes, we need digital networks to prosecute our struggle.

That is the other way that tech is exceptional. The fight for a free, fair, and open digital future isn’t more important than any of those other fights, but it is foundational. Tech is the terrain on which our future fights will be fought. If we can’t seize the means of computation, we will lose the fight before it is even joined.

Excerpt adapted from The Internet Con: How to Seize the Means of Computation, by Cory Doctorow. Published by arrangement with Verso Books. Copyright © 2023 by Cory Doctorow.