The Despots of Silicon Valley

The intellectual origins of the movement that self-described “techno-optimists” are advancing are dark—and deeply familiar.


On the day that Elon Musk bought Twitter for $44 billion, he tweeted, “the bird is freed,” a very short phrase, even by the standards of Twitter (now X). And yet it contains so many innuendos and unanswered questions. Was the bird shackled before? Is the man who freed it … a liberator? Freed to do what exactly?

Musk has always talked a big game on free speech and has even described himself as a “free speech absolutist.” But his ownership and management of X have revealed the deep inconsistencies between his professed values and his actions. And it isn’t just Musk. Throughout the world of tech, evidence of illiberalism is on the rise.

In this week’s episode of Radio Atlantic, Adrienne LaFrance, the executive editor of The Atlantic, names and explains the political ideology of the unelected leaders of Silicon Valley. They are leading a movement she calls “techno-authoritarianism.”

The following is a transcript of the episode:

[Music]

News Archival: This is the big surprise in Silicon Valley today. Sam Altman, the face of the generative-AI boom and CEO of OpenAI, he’s out of the company.

Adrienne LaFrance: You probably remember seeing headlines right before Thanksgiving about a bunch of drama at OpenAI.

News Archival: That roller-coaster ride at OpenAI is over. At least we think it’s over. Ousted CEO Sam Altman has been rehired, and the board that pushed him out is gone.

LaFrance: I mean, it was probably the most dramatic story in tech, possibly of this century. I mean, really dramatic.

[Music]

Hanna Rosin: That’s Adrienne LaFrance, the executive editor of The Atlantic, and she’s been following tech for decades. So you would expect that she would find this Silicon Valley office gossip dramatic.

But the surprising thing is, a lot of people did—which is probably because underneath that “will they or won’t they rehire Sam Altman?” question, there was a more fundamental debate going on.

[Music]

LaFrance: On one side, you have people arguing for a more cautious approach to development of artificial intelligence. And on the other, you have an argument or a sort of worldview that says, This technology is here. It’s happening. It’s changing the world already. Not only should we not slow down, but it would be irresponsible to slow down.

So it’s this just dramatically different worldview of, you know, almost polar opposites—of if you slow down, you’re hurting humanity, versus if you don’t slow down, you’re hurting humanity.

Rosin: So the crudest oversimplification is, like, scale and profit versus caution.

LaFrance: Exactly. But the people who are on the scale-and-profit side would like you to believe that they are also operating in humanity’s best interest.

[Music]

Rosin: This is Radio Atlantic. I’m Hanna Rosin. The drama at OpenAI was a rare moment where an ideological divide in Silicon Valley was so central, and explicit.

We’re not going to talk about the Sam Altman saga today. But we are going to talk about these underlying beliefs, because in an industry defined by inventions, and IPOs, and tech bro jokes, it’s easy to miss what a fundamental driver ideology can be.

In a recent story for The Atlantic, Adrienne argued that we should examine these views more carefully and take them much more seriously than we do. And she put a name to the ideology: techno-authoritarianism.

[Music]

Rosin: So, we are used to thinking of some tech titans as villains, but you’re kind of defining them as villains with political significance. What do you mean when you call them the despots of Silicon Valley?

LaFrance: So I’ve been thinking about this for years, honestly, and something that had been frustrating me is I feel that we, as a society, haven’t properly placed Silicon Valley where it needs to be, in terms of its actual importance and influence.

So we all know it influences our lives. And, you know, I would love to talk about screens and social media and all the rest, but Silicon Valley has also had this profound influence politically and culturally that is much bigger than just the devices we’re holding in our pockets.

Rosin: Mm-hmm.

LaFrance: It has bothered me because I feel like we haven’t properly called that what it is, which is an actual ideology that comes out of Silicon Valley that is political in nature, even if it’s not a political party.

It’s this worldview that is illiberal. It goes against democratic values, meaning not the Democratic Party but values that promote democracy and the health of democracy. And it presupposes that the technocratic elite knows best and not the people.

Rosin: I mean, authoritarian is a very strong word. We’re used to using “authoritarian” in a different context, which is our political context.

LaFrance: Definitely. I mean, I guess the nuance I would want to add is that this is not political in the traditional sense. It’s not as though you have authoritarian technocrats trying to come to power in Silicon Valley by way of elections or coups, even. They’re not even bothering with our systems of government, because they already have positioned themselves as more important and influential, culturally. And so it’s almost like they don’t need to bother with government for their power.

Rosin: I see. So it’s a form of power we don’t even recognize, because we don’t exactly have structures to put it in or understand it.

LaFrance: Well, we may not recognize it as readily because of that, but I think if you look not even that closely, it’s pretty plain to see.

If you just pay attention to how people talk about what they think matters, who they think should make decisions, who they characterize as their enemies—institutions, experts, journalists, for example. You know, if it looks like an authoritarian and quacks like an authoritarian, then, you know: ta-da.

Rosin: Right. Right.

LaFrance: The reason I wanted to try to define what this ideology is, is I do feel as though over the past five to 10 years, something has shifted, gradually at first and then more quickly. The sort of subversion of Enlightenment-era language and values to justify an authoritarian technocratic worldview was alarming to me. And so, for example, you’ll see a lot of people in this category describing themselves as free-speech absolutists—I think a really easy example of this would be Elon Musk—and saying all the things that someone who believes in liberal democracy might agree with on its face, but then acting another way.

So, to say you’re a free-speech absolutist but then tailor your privately run social platform to serve your own needs and beliefs and pick on your perceived enemies—I mean, that’s not free-speech absolutism at all.

And so this sense of aggrievement has accelerated and become, you know, more vitriolic and more ostentatious. It just seems like it’s getting more pronounced.

Rosin: When did you start paying attention to tech titans? When did you start following the industry?

LaFrance: I first started really writing about tech for The Atlantic in 2013.

Rosin: Mm-hmm. What was the promise of tech back then? How were tech titans framing their own work or behaving differently than now?

LaFrance: Right. I mean, so 10 to 15 years ago, we were talking about the dawn of the social mobile age. So smartphones are still sort of new. (Social media is not totally new. You know, Facebook started in 2004. You could go back to, like, Friendster or Myspace before that.) Uber was new. It was very much an era of people still being wowed.

And frankly, I’m still wowed by this. Like, you pick up this smartphone—this new, shiny, beautiful device—and you press a button on the phone, and something can happen in the real world: You summon a taxi or, you know, food delivery. All of this stuff seems totally normal to us now, but it was this miraculous time where people were creating a way of interacting with the world that was totally new.

And so there was still, I think, certainly healthy skepticism, but you had a lot of the bright-eyed optimism that I think started, certainly, in the ’90s that still carried over.

Rosin: And was there a worldview attached to that awe? Like, I remember the phrase, “Don’t be evil,” but I can’t place it in time. Like, was there some idea of—

LaFrance: I can’t remember when Google retired that, but there certainly came a point where it became ridiculous to wear that optimism on your sleeve. There was this time where Silicon Valley was a place for underdogs, for people with big dreams and the ability to code, and they’d come and do amazing things.

I think we also have to remember—I don’t want to be too starry-eyed about it—even then, this was an era where you had, like, the bro-ish culture, and women working in a lot of these companies at the time report just terrible experiences.

And so there are flaws from the start, as with any industry or any culture. But I think 10 or 15 years ago is around the time things started to curdle a little bit.

I believe it was 2012 when Facebook bought Instagram for $1 billion, and it was this moment where people were like, No. Come on. Like, That’s an insane amount of money. Is it really worth that?

And you had a string of these sort of obscene amounts of money. And what you were witnessing—and I think people started to realize this then, too—was like the monopolization and these giants gobbling up their competitors. The forces set in motion that led us to the environment that we’re in today.

And we didn’t know it until a couple of years later, but 2012 was also the year that Facebook was doing its now-infamous mood-manipulation experiments, when it was showing users different things to see if they could try to make people happy or sad or angry, without their consent.

And by then, I think, the general public was starting to realize, you know, there may be some downsides to all these shiny things.

Rosin: Yeah. And then came 2016, when it felt like Facebook’s role in the election was something everybody noticed.

LaFrance: Right, and all kinds of questions about targeted ads for certain populations and election interference, foreign or otherwise. So, definitely, there was another wave of intense criticism for Facebook then. You know, serious questions about these companies wreaking havoc have been around for years.

Rosin: It’s been eight or 10 years. So, like, what feels different now?

LaFrance: The reason I wrote this now is we are, in America, certainly, and elsewhere in the world, facing a real fight for the future of democracy. And the stakes are high. And it seemed important to me, you know, at a time when everyone’s going to be focused on the 2024 presidential election, as they should be, and on the stakes there, there are other forces for illiberalism and autocracy that are permeating our society, and we should reckon with those too.

[Music]

Rosin: After the break, we talk about a voice that seems to capture techno-authoritarianism perfectly. And of course, we reckon.

[Music]

LaFrance: So if you look at the social conditions that help provoke political violence or stoke people’s appetite for a strong man in charge—or pick your destabilizing social, political, cultural force—a lot of those things go together and overlap.

And I think a lot of those same conditions are exacerbated by our relationship, individual and societal, with technology and then further exacerbated by the tech titans who want to defend against any criticism of the current environment.

Rosin: I see. So the tolerance for political autocracy and our tolerance for technological autocracy, they kind of meld together and produce the same results.

LaFrance: I think so. I mean, just think about, like, Orwell or Ray Bradbury. We know that—I mean, those were futuristic books in their times, thinking of 1984 and Fahrenheit 451—

Rosin: Politics and technology, the interaction between—that is the engine of sci-fi.

LaFrance: Yeah, or just look at how Hitler used the radio. Like, technology is not inherently evil. I love technology. I desperately love the internet. Like, I actually do.

But I think when you have extraordinarily powerful people putting their worldview in terms of, Progress is inevitable, and, Anyone who doesn’t want to just move forward for the sake of moving forward is on the side of evil, just the starkness of how they frame this is so uncomplicated.

There’s no nuance, and it’s in really authoritarian terms. Just, it should scare people.

Rosin: Okay. I think I want to get to the specifics of what this ideology actually is.

LaFrance: Okay. So a useful example is Marc Andreessen, the venture capitalist. You know, Andreessen Horowitz is his firm. He’s a very well-known, influential, but pugnacious guy.

And he has written what he calls “The Techno-Optimist Manifesto,” and it’s a very long blog post, but I think a revealing one and worth reading in the sense that it lays out some of what I’m describing here.

He lists, sort of, what a techno-optimist would believe, and I’m paraphrasing here, but: progress for progress’s sake, always moving forward, rejecting tradition, rejecting history, rejecting expertise, rejecting institutions. He has a list of enemies.

You look at the well that people are drawing from, and it gives you a sense of the sort of the intellectual rigidity, I would say, of just: What we’re doing is good because it’s what we’re doing, and we’re going to do it because we’re doing it.

There’s sort of this, like, circular logic to it. So anyway, that’s one example, “The Techno-Optimist Manifesto.”

Rosin: Let’s read some of the lines, just to typify his style of writing and thinking. I mean, the one that I always think about, is the one about the lightning.

LaFrance: It’s really dramatic. [Laughs.]

Rosin: I remember thinking, when I read that line, I’ve never possibly read anything as arrogant as this.

LaFrance: I know, well, but we shouldn’t laugh at it, because he’s serious. Do you know what I mean?

Rosin: Well, let’s get to that, but just so people understand the style—

LaFrance: Okay. [Demonstratively clears throat.] I’m really not going to laugh. Okay. He says, “We believe in nature, but we also believe in overcoming nature. We are not primitives, cowering in fear of the lightning bolt. We are the apex predator; the lightning works for us.”

Rosin: “The lightning works for us.” Wow. That’s something.

LaFrance: Look, I get, like, there’s a version of this that, harnessed properly, could inspire people to do spectacular things.

And, like, there’s something beautiful about great imagination in tech. That’s great. But yeah, saying, “The lightning works for us,” is a bit much.

Rosin: I actually have trouble understanding optimist versus pessimist.

LaFrance: Right. He’s so mad for an optimist.

Rosin: Yes. It’s a combination of, sort of, Ayn Rand speak and a kind of angry Twitter manifesto.

But it is dark and apocalyptic, and I did wonder about that. Like, why is it called “The Techno-Optimist Manifesto” and yet it feels extremely reactionary? Like, it echoes a kind of reactionary language that you hear in some corners of the Republican Party and Trumpism. It’s a little bit like “Make America great again,” which people have called the most pessimistic political slogan that anyone’s ever won with.

LaFrance: Totally. I mean, I think you’re hitting the exact point, which is they take—I’ll speak just for Andreessen here. He is describing himself and this manifesto as optimistic, but in the same way that some technocrats take Enlightenment values and claim to support them while saying the exact opposite of what those values actually mean. And so I think it’s a subversion of meaning. It’s: We’re optimists. We’re the good guys.

And then you read it and you’re like, This is horrifying.

But this isn’t some Reddit forum in the corner that only six guys are reading and agreeing with each other. These are the most powerful people on the planet, and they’re hugely influential and people buy into it.

Rosin: Could you help me understand, what’s the ultimate goal of a techno-optimist? Is it social change? Is it an attitude shift? Is it money?

It’s very hard to understand. Is it just scaling a company? Or is it cultural, societal change? Like, what do you think they’re after?

LaFrance: Well, I wouldn’t call it a techno-optimist. I wouldn’t use that term.

But the worldview that’s being expressed here, I think the goal, certainly, is to retain power and to maximize profit. And one of the stated goals from the manifesto is quote, “to ensure the techno-capital upward spiral continues forever.” So that’s clearly talking about continued enrichment for these powerful people, who are already very wealthy. But you know, they want to build new things and make a ton of money.

Rosin: Mm-hmm. That’s the weird thing. Like, it doesn’t sound like a business strategy.

LaFrance: [Laughs.]

Rosin: It sounds like a manifesto for social overhaul. And so it’s hard to understand what it is.

LaFrance: I will say, to be fair, I think this encapsulates also the people who are creating world-changing tech for good, which is happening.

I mean, if you look at even the realm of AI, we hope—we haven’t seen it yet, but I fully expect we will see AI that helps cure diseases. That’s remarkable. We should all wish for that outcome. And I hope that the people working on this are singularly focused on that kind of work.

And so I think if you were to ask someone like Marc Andreessen or Elon Musk or pick your favorite technocrat, they would say, We’re changing the world to make it better for humanity. We’re going to go to Mars. We’re going to cure disease.

And people who have this worldview may, in fact, help do that, which is fantastic. But in order to get to Mars, what’s the trade-off if you’re talking about this worldview?

Rosin: Mm-hmm. Mm-hmm. Among the leaders of major tech companies, how prevalent do you think this attitude is?

LaFrance: It’s a really good question. Honestly, there are so many tech companies, I don’t feel comfortable saying that it’s widespread across every one. Like, there’s so many tech companies, right?

Rosin: Mm-hmm.

LaFrance: But it is highly visible and vocal among many very influential leaders in tech. So if you were to look at every single tech company, it may not even be a majority. But among the most powerful people, it’s highly visible and prevalent.

Rosin: And how would you compare it with the attitudes of, say, the robber barons of earlier eras?

LaFrance: There is actually a great book called Railroaded by the Stanford historian Richard White that is mostly about robber barons, but the entire time I was reading it, I was thinking about Silicon Valley, because it’s a very natural comparison.

You have this sort of, you know, world-changing technology that is rapidly enriching this handful of powerful men—mostly men—and this question of, you know, Did railroads change America for good? Certainly. Of course, they did. But, there are questions of monopoly and how much power any one person should hold and all the questions that come up with Silicon Valley, too.

So, I think there are similarities there, but I think there are limitations to the similarities, in part because the ways that current tech advancements are changing our lives are unfolding on a global scale much faster. And I don’t think a lot of people properly understand that, like, we’re still just at the beginning of figuring out what smartphones and the internet have done to us, and AI is here now. And it’s like the degree to which the world is changing is, I think, so much bigger in magnitude than it is possible to understand on a normal human timeline.

It’s wild. It’s way bigger than railroads.

Rosin: I guess the consequences of things happening faster, in a more kind of chaotic way, is that society doesn’t have time to catch up, and so you end up being more reactionary. I mean, that’s generally what happens, historically. And governments don’t have time to catch up, because they’re slow-moving, so the normal, slow regulations that you would put in place, there just isn’t time to, sort of, agree on them.

So, you’ve mentioned historical roots of fascism. What do you mean by that?

LaFrance: Well, I want to be careful, because it’s invoked casually and lazily. And I’m not calling technocrats of today fascists, just to be totally clear. And I don’t think, for instance, that “The Techno-Optimist Manifesto” is a fascist document, and I don’t think that it is expounding a fascist worldview. But, if you look at the intellectual origins of some of the ideas that this manifesto contains, you clearly will be reminded of the beginnings of fascism.

So, if you look to the 1930s, which is when the sort of first American technocracy movement flourished, it was happening at a time when there was this push for modernism, in poetry primarily but also art, and that had found its footing among futurists in Italy as well.

And so one figure, in particular, comes to mind and is someone who Andreessen happens to mention as one of his patron saints of techno-optimism.

It’s a man named F. T. Marinetti, who is often described as the father of futurism. He writes the [Manifesto of Futurism], which is worth reading, totally. I agree with Andreessen on that front. And it professes this worldview of thinking about humans’ place in the machine age and everything speeding up and the beauty of the future and the possibility. It is very techno-optimistic in nature.

And, 10 years later, Marinetti then writes The Fascist Manifesto, and many of the figures who led the futurism movement in Italy helped start fascism in Italy.

You know, I’m not suggesting that this turns into fascism, but I am pointing to evidence that shows that a movement that’s cultural can have huge political power when stoked by the right social conditions and charismatic leaders.

Rosin: Right, so a movement that has its origins in creativity and imagination can eventually kind of curdle, or be manipulated, into a movement where the people who have the creativity and imagination are (a) virtuous and (b) the natural leaders of a society, and then that can turn and become fascistic.

LaFrance: Right. And just because someone says they’re devoting their life or their worldview to progress—it sounds good. Or they’re optimists—it sounds good. It doesn’t mean it is good.

Rosin: Right.

Okay. You’ve mentioned there are good things about technology and, to be sure, there are wonderful things happening. So I want to try and piece some of this apart.

Andreessen talks a lot about AI risk. Like, it’s developed into a cult. All we ever talk about is AI risk. I think there is a legitimate critique that both political parties often only see technology’s dangers and, in fact, don’t rely enough on or encourage enough technological fixes to obvious problems that exist. What do you think about that?

LaFrance: I love technology and agree with you, just to be clear.

Rosin: I mean, like, I was thinking, climate change is an example. Like, a lot of people would say we were too risk averse. We didn’t—by we, I mean the American government—create regulations to encourage technological innovation.

We’re sort of slow on that. We have Build Back Better, and it’s like we have these sort of old, infrastructure-y ideas about how to improve society, when we could have encouraged a lot more technological innovation.

LaFrance: Right. Well, one thing I think about a lot is Sam Altman, the CEO of OpenAI, something he said to our colleague Ross Andersen last spring, which was, you know, In a functional society, government would be doing this work that I’m doing at OpenAI, but we don’t have a functional government. Basically, I’m paraphrasing what he said.

You know, I think this should be the work as it was in, you know, the mid-20th century, that the great scientific innovation should be happening with public funding, with whatever degree of regulation is deemed necessary by the public.

Rosin: Like an NIH equivalent, basically.

LaFrance: Yeah, or even at a university. Like, universities aren’t leading the way here, which is tragic, I think.

Rosin: I imagine if I had asked you this 10 years ago, like, What government regulation should be in place? What kind of controls would you have wanted? you might’ve said none, because you just got your new smartphone, or something. How do you think about this now?

LaFrance: I’m reluctant to talk about regulation for two reasons: (1) It’s so boring, Hanna—come on—and (2) I really am not an expert on regulation, and I will confess that I don’t think it’s the swiftest path to the solution that’s best for the people, in many cases.

Rosin: Mm-hmm.

LaFrance: I’m not, like, blanket against regulation. Like, I see all the many ways where it’s—

Rosin: I mean, I think with regulation, it’s like good regulation is good; bad regulation is obstructive. Like, it’s a case-by-case basis.

LaFrance: One hundred percent. And the other thing is, though, that we do have some test cases. Like, if you look at some of the European (or elsewhere) regulation of social media, for example, it’s not, I don’t think, necessarily good for the public’s right to know things or free speech, for example. Like, a lot of what’s happened in other countries would never withstand scrutiny in terms of free-speech challenges.

Rosin: So what’s the ultimate value you’re trying to preserve?

LaFrance: You know, maybe this is me being an optimist, but I think we have more power and agency to change the culture than it sometimes feels like. And certainly there are areas where it’s really hard and norms are already established but, like, we should live in a world where we control how we use tech and how tech is used against us.

And there’s a lot of lip service paid to that idea, of course, by people in Silicon Valley, who are making a ton of money off holding people captive to their devices. But if enough smart people take ownership of this as a thing to solve and, like, are leaders in their communities, in their families, in their workplaces—wherever it may be—I have way more faith in individual people than the government to regulate, although regulation is probably also important.

Rosin: It’s interesting. We always wind up in this place. It just came to mind, this image: I met a very lovely 19-year-old who has a flip phone, and then another one who has a flip phone. But I’m thinking that, like, adorable kid with his flip phone has to somehow—

LaFrance: It was like hipsters buying record players in 2010.

Rosin: Exactly like that, and they have to somehow hold the barrier against Meta. It’s this young kid versus Meta.

LaFrance: Well, I mean, the power of teenagers declaring what’s cool is actually a hugely influential cultural force. So, like, teens, if you’re listening, save us.

Rosin: Yeah. Go buy some flip phones. All right. Okay. Well, we’ll have to dedicate another episode to actual solutions.

LaFrance: Perfect. I’ll be there.

Rosin: All right. Thank you for joining us.

LaFrance: Thanks so much for having me.

[Music]

Rosin: This episode of Radio Atlantic was produced by Kevin Townsend. It was edited by Claudine Ebeid, fact-checked by Yvonne Kim, and engineered by Rob Smierciak. Claudine Ebeid is the executive producer for Atlantic Audio, and Andrea Valdez is our managing editor. I’m Hanna Rosin. Thank you for listening.

[Music]

Rosin: Techno-authoritarianism. Boy, that’s a mouthful. [Inflects pronunciation.] Techno-authori-tarianism. Techno-authoritarianism. Techno-authoritarianism.

Hanna Rosin is a senior editor at The Atlantic and the host of Radio Atlantic.