Sam Altman Is Losing His Grip on Humanity
NEWS | 24 February 2026
Last Friday, onstage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from The Indian Express about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history? “It also takes a lot of energy to train a human,” Altman told a packed pavilion. “It takes, like, 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever to produce you, and then you took whatever, you know, you took.”

He continued: “The fair comparison is, if you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question, versus a human? And probably, AI has already caught up on an energy-efficiency basis, measured that way.”

Altman’s comments are easy to pick apart. The brain uses significantly less energy to answer a simple query than even efficient frontier models do, not to mention the laptops and smartphones people use to prompt AI models. It is true that people have to consume actual sustenance before they “get smart,” though this is also a helpful bit of redirection on Altman’s part—the real concern with AI is not really the resources it demands, but the amount it contributes to climate change. Atmospheric carbon dioxide is at levels not seen in millions of years—driven not by the evolution of the 117 billion people and all of the other critters to have ever existed in the course of evolution, but by contemporary human society and combustion turbines akin to those OpenAI is setting up at its Stargate data centers.
Other data centers, too, are building private, gas-fired power plants—which collectively will likely be capable of generating enough electricity for, and emitting as much greenhouse-gas emissions as, dozens of major American cities—or extending the life of coal plants. (OpenAI, which has a corporate partnership with the business side of this magazine, did not respond to a request for comment when I reached out to ask about Altman’s remarks.)

But what’s really significant about Altman’s words is that he thought to compare chatbots to humans at all. Doing so suggests that he views people and machines on equal terms. He didn’t fumble his words; this is a common, calculated position within the AI industry. Altman made an almost identical statement to Forbes India at the same AI summit. And a week ago, Dario Amodei—the CEO of Anthropic, and Altman’s chief rival—made a similar analogy, likening the training of AI models to human evolution and day-to-day learning.

The mindset trickles down to product development. Anthropic is studying whether its chatbot, Claude, is conscious or can feel “distress,” and allows Claude to cut off “persistently harmful or abusive” conversations in which there are “risks to model welfare”—explicitly anthropomorphizing a program that does not eat, drink, or have any will of its own. AI firms are convinced either that their products really are comparable to humans or that this is good marketing. Both options are alarming. A genuine belief that they are building a higher power, perhaps even a god—Altman, in the same appearance, said that he thinks superintelligence is just a few years away—might easily justify treating humans and the planet as collateral damage.
Altman also said, in his response to concerns about energy consumption, that the problem is real because “the world is now using so much AI”—and so societies must “move towards nuclear, or wind and solar, very quickly.” Another option would be for the AI industry to wait.

If Altman’s comparison of chatbots and people is purely a PR tactic, it is a deeply misanthropic one. He is speaking to investors. The notion that AI labs are building digital life has always been convenient to their myth, of course, and OpenAI is reportedly in the middle of a fundraising round that would value the company at more than $800 billion—nearly as much as Walmart. Tech companies may genuinely want to develop AI tools for the benefit of all humanity, to echo OpenAI’s founding mission, and genuinely believe that they need to raise enormous amounts of cash to do so. But to liken raising a child—or, for that matter, the evolution of Homo sapiens—to developing algorithmic products makes very clear that the industry has lost touch, if it ever had any, with what it means to be human. To “train a human”—that is, to live a life—is to struggle, to accept the possibility of failure, and to sometimes meander simply in search of wonder and beauty. Generative AI is all about cutting out that process and making any pursuit as instant, efficient, and effortless as possible. These tools may serve us. But to put them on the same plane as organic life is sad.
Author: Matteo Wong.