How AI copilots became everyday infrastructure
NEWS | 20 February 2026
From the exam room to the classroom, artificial intelligence is no longer just a tool—it's infrastructure. An introduction to our special report on life in the age of AI.

In San Diego, a high school English teacher can clear her grading queue in a matter of days by outsourcing her initial assessments to ChatGPT. In New Hampshire, middle schoolers use generative tools to strip the clothes off their classmates in digital photographs, leaving the community grasping for a policy response. In Sweden, a payments company touts its AI customer-service system for doing the work of 700 people—only for its CEO to later admit the company had overdone it on automation and would start bringing people back.

Artificial intelligence—computer systems trained on vast datasets to predict the next likely pixel or word—is everywhere. In the three years since ChatGPT was released, AI has shifted from a browser-based novelty to a kind of background infrastructure. It is the ears in the exam room, the silent partner in the C-suite, the uncredited co-author of the classroom rubric. The College Board reports that 84 percent of high school students now use AI for schoolwork. For bosses and boardrooms, its promise of cheap labor is irresistible; spending on AI hit $1.8 trillion last year, according to research firm Gartner. There are environmental costs, too: a single AI-focused data center can consume as much electricity as 100,000 homes, and even bigger centers are under construction. The cloud, it turns out, is heavy.

The advent of AI is often framed as a battle of human versus machine, but that view misses the point. The reality today is human plus machine, operating under budget constraints in flawed institutions, fed by imperfect data. While companies race to build ever more sophisticated models and aspire to AI that can rival human intelligence, it is the mundane uses of the technology that are making the biggest impact. A clinician might offload the drudgery of documentation to an ambient scribe, allowing her to look her patient in the eye rather than at a bedside monitor. A call center can answer in 35 languages at 3 A.M. without an army of night-shift polyglots.

The risk, though, is that the harms will scale faster than the benefits. Deepfakes turn the technology into a personal weapon: a manipulated video can ruin a reputation long before anyone can prove where it came from. A hallucinated fact can be a minor nuisance in a school assignment or a dangerous claim in a clinical note. And even when no one means harm, the partnership changes how people make judgments. Outputs arrive with the unearned confidence of a carefully considered thought. An AI "copilot" redistributes labor—and liability. What gets sold as assistance often turns into supervision. It offers the gift of speed while multiplying the number of moments when a human must decide whether to trust the system's suggestion.
The articles in this special report track this transformation across key fronts: in hospitals struggling to modernize care without eroding it, in communities discovering how fast a synthetic video clip can outrun correction, and in the working lives of people using these tools—sometimes to speed up the day, sometimes to outsource responsibility. When a technology’s upsides are easy to claim and its downsides easy to deny, who pays for its mistakes?
Authors: Seth Fletcher and Eric Sullivan