Stevie Wonder’s Rule for AI at CES 2026—‘Make Life Better for the Living’
NEWS | 12 January 2026
[Image: Stevie Wonder performs onstage on the third day of the Democratic National Convention at the United Center in Chicago on August 21, 2024.]
At CES 2026, Stevie Wonder offered a simple test for tech. And in the smart-glasses boom, the most persuasive tools aren’t about restoring perfect sight but about day-to-day independence.
Of all the nonstop talk about artificial intelligence at CES this year, the most useful thing I heard came from Stevie Wonder.
I spotted him moving through the expo floor—handlers tight by his side, fans threading in and out—and sidled up long enough to ask a few questions. Wonder isn’t new to this world. He’s always treated technology as part of his craft—as something to be shaped, tested and tuned. Long before AI became an unavoidable buzzword, he worked with synth pioneers on the sounds that defined songs like “Superstition” and “Living for the City.” He’s been attending CES for more than a decade.
Wonder is working on his first album in more than 20 years, so I asked what he made of AI in the creative process. He did not equivocate. “I will not let my music be programmed,” he told me. “I’m not going to use it to do me and do the music I’ve done.” He wasn’t rejecting technology. He was protecting what he considers human territory. “We can go on and on talking about technology,” he said. But he was concerned with a different question. “Let’s see how you make things better for people in their lives—not to emulate life but to make life better for the living.”
Among the health-tech exhibitors, a common theme emerged: the always-on AI companion, one that can help make care decisions, locate services and navigate daily life. Dominic King, vice president of health at Microsoft AI, told me people already use Copilot and Bing to ask roughly 50 million health-related questions every day.
Yet the promise felt most real in smaller tools with clearer stakes—especially the ones built for people who are blind or have limited vision. With accessibility tech, both the problem and the upside felt obvious.
After a few hours on the floor, a pattern emerged. Some of the most compelling accessibility tech didn’t try to fix vision so much as translate the visual world into something usable. EchoVision, a pair of smart glasses from California-based AGIGA—developed with input from Wonder—lets a wearer point their head toward a sign, a doorway or another object and hear a description of it. In a hall full of gadgets that felt like solutions in search of problems, narration that eases a person’s day made good sense.
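To make that loop concrete, here is a minimal sketch of the point-and-hear flow in Python. Every function is a hypothetical stand-in; AGIGA hasn’t published EchoVision’s internals, so this shows the general shape of such a pipeline, not the company’s code.

```python
# Hedged sketch of a point-and-hear description loop. All functions are
# hypothetical placeholders, not AGIGA's actual EchoVision API.

def capture_frame() -> bytes:
    """Stand-in for grabbing an image from the glasses' camera."""
    return b"placeholder-image-bytes"

def describe(image: bytes) -> str:
    """Stand-in for a vision-language model that captions the frame."""
    return "A restroom sign, about ten feet ahead on the left."

def speak(text: str) -> None:
    """Stand-in for reading the description aloud through the speaker."""
    print(f"[audio] {text}")

def on_request() -> None:
    # The wearer points their head at something and asks what it is.
    speak(describe(capture_frame()))

on_request()
```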
But description doesn’t always solve the full problem.
“I’m not so sure it does you much good to know that in this direction is where the restrooms are,” a representative from Seattle-based Glidance told me, “if you don’t already have the navigation skills to dodge all the people in the way.” The world isn’t just a picture frozen in time. It’s movement. It’s crowds. It’s columns, curbs, chaos.
Glidance’s answer was Glide, a two-wheeled device that would roll along in front of you with a grip attached, sort of like a handlebar on wheels. Stereo cameras spotted obstacles and hazards. The device then steered and braked to help keep you moving in the direction you wanted to go.
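That steer-and-brake behavior can be pictured as a simple control loop. The Python sketch below is purely illustrative; the sensor function and the distance thresholds are invented for the example, not Glidance’s published design.

```python
# Illustrative control tick for a guide-on-wheels like Glide. The sensor
# stand-in and the thresholds are assumptions, not Glidance's system.
import random

def nearest_obstacle_m() -> float:
    """Stand-in for the stereo cameras' nearest-obstacle estimate."""
    return random.uniform(0.2, 5.0)

def control_tick() -> tuple[str, float]:
    """Return (action, speed in m/s) for one pass through the loop."""
    d = nearest_obstacle_m()
    if d < 0.5:
        return ("brake", 0.0)          # hazard right ahead: stop the wearer
    if d < 1.5:
        return ("steer around", 0.4)   # slow down and curve past it
    return ("hold course", 1.0)        # clear path: normal walking speed

for _ in range(3):
    print(control_tick())
```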
Glidance kept the guide in your hand; .lumen put it on your forehead. The Romanian start-up’s founder, Cornel Amariei, described his glasses as “a self-driving car that sits on your head.” At CES, the company won an accessibility award in a pitch competition for assistive-tech start-ups that came with an oversize $10,000 check. (“Now we have money for the return tickets,” Amariei said.)
Many CES demos relied on bulky sensor rigs. But .lumen kept the hardware of its glasses simple and tried to do the rest with software. Six cameras create stereoscopic vision—depth perception built from slightly different angles, the way two eyes triangulate a curb. And the team made a key design choice: the glasses don’t require an Internet connection. All the compute is in the device itself.
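Stereoscopic depth comes down to triangulation: a nearby object shifts more between the two camera views than a distant one. Here is a worked sketch of that relationship; the focal length and camera spacing are illustrative numbers, since .lumen hasn’t disclosed its hardware specs.

```python
# Stereo triangulation: depth falls out of the pixel shift (disparity)
# between two views. The focal length and baseline below are assumed
# values, not .lumen's actual hardware parameters.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,    # assumed focal length, pixels
                         baseline_m: float = 0.12) -> float:  # assumed camera spacing
    """Distance in meters to a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("a point must shift between views to triangulate")
    return focal_px * baseline_m / disparity_px

# A curb edge that shifts 40 pixels between the two views:
print(f"{depth_from_disparity(40.0):.2f} m away")  # 2.10 m away
```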
Amariei explained that geometry alone isn’t enough. A lake is perfectly flat. A system that only understands “flat” will steer you right into it. The harder part is distinguishing safe surfaces from dangerous ones—then translating that into something your body can use. When .lumen’s glasses find a clear route, they don’t announce directions one step at a time. They guide you there with haptics, nudging your head toward the open path.
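In code terms, the point is that a route has to pass two tests, not one, and the output is a nudge rather than a sentence. The sketch below is a hypothetical illustration of that logic; the surface labels, slope threshold and cue names are invented, not .lumen’s implementation.

```python
# Hypothetical illustration of geometry-plus-semantics routing and a
# haptic cue. Labels, threshold and function names are invented here.

SAFE_SURFACES = {"pavement", "carpet", "grass"}  # assumed label set

def walkable(slope_deg: float, surface: str) -> bool:
    """A route cell is walkable only if it is flat AND a safe surface."""
    flat_enough = slope_deg < 8.0   # geometry alone would pass a lake
    return flat_enough and surface in SAFE_SURFACES

def haptic_cue(path_bearing_deg: float) -> str:
    """Turn the open path's bearing into a left/right head nudge."""
    if abs(path_bearing_deg) < 5.0:
        return "steady"  # already facing the clear route
    return "nudge left" if path_bearing_deg < 0 else "nudge right"

print(walkable(0.5, "water"))     # False: perfectly flat, still unsafe
print(walkable(2.0, "pavement"))  # True
print(haptic_cue(-20.0))          # nudge left
```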
All the sensor talk and the demos were fascinating, but the human payoff is what has stayed with me. These tools aim to let someone move through a lobby, down a sidewalk, through a crowded hall, without having to stop and reassess every few feet.
The best accessibility tech I saw at CES pushed back against the show’s most annoying habit: making sweeping promises when what people need are reliable, specific tools. Some of these devices will cost a lot. Some will take longer to mature than their demos suggested. Some will stumble in the real world. But they point in a direction that Stevie Wonder would recognize: tools that make life better for the living.

Seth Fletcher and Eric Sullivan