Meet Your Future Robot Servants, Caregivers and Explorers

NEWS | 18 December 2025

In the future, a caregiving machine might gently lift an elderly person out of bed in the morning and help them get dressed. A cleaning bot could trundle through a child’s room, picking up scattered objects, depositing toys on shelves and tucking away dirty laundry. And in a factory, mechanical hands may assemble a next-generation smartphone from its first fragile component to the finishing touch.
These are glimpses of a possible time when humans and robots will live and work side by side. Some of these machines already exist as prototypes, and some are still theoretical. In situations where people experience friction, inconvenience or wasted effort, engineers see opportunity—for robots to perform chores, do tasks we are unable to do or go places where we cannot.
Realizing such a future poses immense difficulties, however, not the least of which is us. Human beings are wild and unpredictable. Robots, beholden as they are to the rules of their programming, do not handle chaos well. Any robot collaborating or even coexisting with humans must be flexible. It must navigate messes and handle sudden changes in the environment. It must operate safely around excitable small children or delicate older people. Its limbs or manipulators must be sturdy, dexterous and attached to a stable body chassis that provides a source of power. And to truly become a part of our daily lives, these mechanical helpers will need to be affordable. All told, it’s a steep challenge.
But not necessarily an insurmountable one. To see how close we’re getting to this vision, I visit the Stanford Robotics Center, a 3,000-square-foot experimental facility that opened at Stanford University in November 2024. There I am greeted by Steve Cousins, the center’s executive director and founder of the company now known as Relay Robotics, which supplies delivery robots to hospitals and hotels. He believes robots will become indispensable to modern life, especially in areas such as caregiving, which will need more workers as the world’s population ages. “Robotics is about helping people,” he says.
In some roles, robots’ abilities can surpass those of flesh and blood. Yet it’s also true that there are certain jobs only humans ever could or should do. The Stanford Robotics Center is one attempt to probe that boundary and find out just how many tasks of daily life—at home, at work, in medicine and even underwater—are best offloaded to metal and plastic assistants.
One skill in particular is a significant stumbling block for robots. “The biggest challenge in robotics is contact,” says Oussama Khatib, director of the center. Lots of robots have humanlike hands—but hands are more complex than they seem. Our articulated fingers belong to an appendage built of 27 bones and more than 30 muscles that work in concert. Our sense of touch is actually a synthesis of many senses, relying on cellular receptors that detect pressure and temperature and on proprioception, or our knowledge of our body’s location and motion. Touch and dexterity enable humans to outperform current robots at many tasks: although children often master tying their shoes between the ages of five and seven years, for instance, only machines designed specifically to tie shoelaces can do so at all. Many robots rely not on hands but on “jaw grippers” that bring two opposing fingers toward each other to hold an object in place.
Impressive demonstrations of robotic hands, such as when Tesla’s humanoid Optimus robot was recorded snatching a tennis ball out of the air in 2024, often rely on teleoperation, or remote control. Without a technician guiding Optimus off-screen, playing catch would be out of the question for the robot.
Stanford Robotics Center executive director Steve Cousins (left) and director Oussama Khatib (right) pose in November 2025 with some of the robots under development. Christie Hemm Klok
In the early 1960s the first industrial robot arm—a bulky, 3,000-pound machine—was installed in a General Motors plant in Trenton, N.J. Named Unimate, it was designed for “programmed article transfer,” as its patent describes. In practice, this meant the robot used its gripper to grab and lift hot metal castings from an assembly line. Unimate’s proprioception was crude. A handler had to physically move the arm to put it through any desired motion. It could carry out basic tasks, including hitting a golf ball and pouring a beverage from an open can—which a Unimate robot demonstrated for Johnny Carson on the Tonight Show in 1963.
Yet Carson gave the machine’s business end a wide berth. Maintaining a respectful distance from robot arms is, after all, a long-standing norm, part of the structured environments that have helped manufacturing robots succeed for the past 60 years. Moving them out of such orderly domains, as the roboticists at Stanford are trying to do, is hard. Khatib says he and his colleagues are “taking robots to a world that is uncertain—where you don’t know where you’re going exactly and where, when you touch things, you [might] break them.” He seeks inspiration from what he calls human “compliance,” or the way we adapt to our environment by touch and feel. Guided by these principles, he developed a pair of cooperative robot arms equipped with grippers, named Romeo and Juliet.
I spy Romeo in Khatib’s lab, powered down and alone; Juliet has recently been shipped back from a museum in Munich and is still boxed up. Khatib recalls becoming nervous when a wealthy computing pioneer visited the university in the 1990s and approached the arms because he “wanted to dance” with the robots. The visitor wasn’t hurt, luckily, but that wasn’t a guaranteed outcome. “This is our work: trying to discover human strategies,” he says, and then applying them to robots that must operate in a world that includes variables such as spontaneous dancers.
To help robots better feel their way through the world, Monroe Kennedy III, an assistant professor of mechanical engineering at Stanford, is developing a sensor called DenseTact. The device improves standard grippers by equipping them with a translucent silicone gel tip. When the tip presses against something, the object leaves an imprint in the gel. A camera inside the sensor then detects light from an embedded LED as it reflects off the interior surface of the silicone. From the changing light intensity, the robot builds a mathematical representation of the object’s surface. In other words, DenseTact enables a robot to “see” what it’s touching. One of Kennedy’s robots can rub a sheet between two fingers and tell, with greater than 98 percent accuracy, whether the fabric contains one, two or three layers of silk.
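For the technically curious, the underlying idea can be sketched in a few lines of code. The toy example below is not DenseTact’s actual algorithm (the real system recovers surface geometry from the camera image with far more sophistication); it simply illustrates the principle of turning changes in reflected light into a rough map of an imprint. The function name, threshold and depth scale are invented for illustration.

```python
# Toy sketch of optical tactile sensing: compare a "no contact" reference
# image with a "contact" image and treat brighter pixels as places where the
# gel has deformed toward the light source. Not DenseTact's real pipeline.
import numpy as np

def estimate_imprint(reference: np.ndarray, contact: np.ndarray,
                     max_depth_mm: float = 2.0) -> np.ndarray:
    """Return a crude per-pixel imprint depth estimate, in millimeters."""
    diff = contact.astype(float) - reference.astype(float)
    diff = np.clip(diff, 0.0, None)      # keep only brightening
    if diff.max() > 0:
        diff /= diff.max()               # normalize to [0, 1]
    return diff * max_depth_mm           # assumed scale, for illustration

# Synthetic example: a uniform reference image and a bright circular "press."
ref = np.full((64, 64), 100.0)
img = ref.copy()
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] += 80.0
print(f"max estimated imprint: {estimate_imprint(ref, img).max():.2f} mm")
```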
Scientists at the Massachusetts Institute of Technology created a similar system named GelSight. As a doctoral student in mechanical engineering there, Sandra Liu showed that GelSight can identify by touch the tiny letters spelling out LEGO on the stud of a toy brick. In a departure from other designs for robot hands, which tend to emphasize fingers, Liu inserted a GelSight sensor into a rubber palm. Palms are underappreciated in robotics, Liu says. “When I grab something large, for example, I’m actually grasping more with my palm than I am with my fingers,” she explains. Liu and her thesis adviser tested robots with various finger-and-palm configurations by having them grasp plastic Fisher-Price toys slathered in paint. A robotic palm that was bendable and covered in compliant gel afforded the best grip on the toys, they found.
Although palms seem promising, Liu acknowledges that the optimal robotic hand might not need to mimic our own anatomy at all. “There’s a lot of philosophical debate about whether we’re so hung up on the idea of making humanlike robotic hands that we’ve lost sight of what’s actually important,” she says, “which is just a robotic hand that can do a bunch of different tasks.”
To descend into the Stanford Robotics Center is to enter what must be among the nicest basements of any university engineering school. Bright artificial light beams from fake skylights in its white ceiling, which ripples as if to suggest waves. The rooms, partitioned by glass walls, are themed around domestic, recreational and workplace environments. There’s a kitchen where a robot has stir-fried shrimp and put away dishes. In a medical suite, a see-through replica head is threaded with tubes filled with red liquid à la human veins. The idea, Cousins says, is that tiny robots could be guided by magnets through the vasculature to, for instance, remove a blood clot. Outside in the hallway, a quadruped robot rests like a sleeping dog at the end of a sofa. “I think they’re teaching it to jump on the couch,” Cousins says.
There is also a dance studio, complete with a wood floor and large mirrors. Here scientists record the movements of human dancers to train virtual robots. “Robots move in the world,” Cousins says. “Who understands how to move in the world more innately than dancers and choreographers?”
The DenseTact Optical Tactile Sensor delicately grips a ripe strawberry. Christie Hemm Klok
Next door, in a bedroom styled with IKEA furniture, two roboticists are testing TidyBot. The one-armed machine uses a parallel jaw gripper to clean up the space. Cameras that ring the ceiling help it determine which object among those scattered around is nearest. Using its onboard camera, TidyBot categorizes each item as, say, a toy, a piece of clothing or a hat. Then it decides where that thing belongs; roboticists determined that TidyBot can put an object in its proper place about 85 percent of the time (better than my human children).
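That decide-and-place loop is simple enough to caricature in code. The sketch below is a hypothetical illustration based only on the behavior described here: pick the nearest detected object, classify it, and look up where that category belongs. The function names, the preference table and the stubbed-in classifier are all invented; TidyBot’s real system is considerably more capable.

```python
# Hypothetical sketch of a TidyBot-style tidy-up cycle, not the actual code:
# ceiling cameras report detected objects, the nearest one is chosen, an
# onboard classifier names its category, and a preference table routes it.
import math
from typing import Callable

# Where each category belongs (in practice this would reflect user preferences).
PLACEMENT_RULES = {
    "toy": "shelf",
    "clothing": "laundry basket",
    "hat": "bureau drawer",
}

def nearest_object(detections: list[dict], robot_xy: tuple[float, float]) -> dict:
    """Pick the detection closest to the robot, as the ceiling cameras would."""
    return min(detections, key=lambda d: math.dist(d["xy"], robot_xy))

def tidy_step(detections, robot_xy, classify: Callable[[dict], str]) -> str:
    """One decide-and-place cycle: choose a target, classify it, route it."""
    target = nearest_object(detections, robot_xy)
    category = classify(target)                       # onboard camera's job
    destination = PLACEMENT_RULES.get(category, "ask a human")
    return f"move {target['name']} ({category}) to the {destination}"

# Toy example with a stubbed-out classifier.
scene = [{"name": "plastic banana", "xy": (2.0, 1.5)},
         {"name": "shirt", "xy": (0.5, 0.8)}]
print(tidy_step(scene, robot_xy=(0.0, 0.0),
                classify=lambda d: "clothing" if d["name"] == "shirt" else "toy"))
```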
As I watch, the robot deposits a shirt in a laundry basket. Then it finds a hat, grabs it, wheels across the room to a bureau, places the hat on the ground, opens a drawer by gripping its handle, picks up the hat, sets it inside and closes the drawer. Next it turns around, spies a plastic banana, picks it up and sets it on a shelf. In other tests, TidyBot has, with varying levels of success, wiped down a countertop, loaded a dishwasher, closed a refrigerator and watered a plant.
If robots are to truly partner with humans, they will need to master skills that are more ambitious than tending to ficuses. I follow the Stanford Robotics Center’s ceiling ripples down a passage that leads to a large pool, still under construction, that will host the merperson-shaped robot OceanOne.
The 500-pound underwater machine has two arms and an anthropomorphic face, which Khatib says is designed to appear reassuring to human divers in murky water, and it tapers into a fishlike rear that sprouts omnidirectional thrusters. Its hands have rubbery fingers that give slightly when squeezed. It’s designed to venture deeper into the ocean than scuba divers typically dare. “It is the only [robot] in the world capable of reaching the seabed” and sensing it with haptic feedback, Khatib says.
OceanOne has already navigated the world’s deepest swimming pool, in Dubai, where Khatib used it to play chess against a diver. Near the coast of Corsica it explored a sunken Roman ship dating to the second century C.E. There Khatib, onboard a research ship at the surface, remotely piloted the robot’s soft fingers to pluck a delicate oil lamp from the ancient wreck. He and his colleagues are working on an upgraded version named OceanOneK, which will be able to dive to a depth of 1,000 meters (almost 3,300 feet).
When OceanOne is diving, its hands are controlled via a tether that links it to a control system and a pilot who wears 3D glasses to see the robot’s view. Outside his office, Khatib leads me to a set of similar controls. The apparatus is akin to a pair of parallel video-game joysticks but sleeker and with more degrees of freedom. I grab one in each hand. A scene appears on a computer screen in front of me, showing a ball atop a slab of what looks like gelatin. Khatib asks me to roll the ball across the gel. I move the controller forward, and the ball responds. What feels smooth and instantaneous to me requires hefty computing power. “This is really difficult because we are simulating in real time the deformable membrane, but at the same time you are touching it and feeling it physically,” he says. “Go to the middle and push hard—hard!” I follow his instructions, and the simulated membrane breaks as I drive the ball downward. The response through the haptic feedback is uncanny: it’s exactly how I imagine it would feel to press a billiard ball into a tray of Jell-O.
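A toy version of what that haptic simulation does can be written in a few lines. The sketch below is purely illustrative and is not the Stanford simulator, which models a full deformable membrane in real time; it only shows the basic haptic-rendering loop of converting how far a probe has sunk into a virtual surface into a resisting force, and dropping that force once a breaking threshold is crossed. The stiffness and break-depth numbers are invented.

```python
# Illustrative haptic-rendering loop against a deformable surface (not the
# Stanford system): penetration depth maps to a spring-like reaction force
# sent to the handheld controller, until the virtual membrane "tears."
import numpy as np

STIFFNESS_N_PER_M = 400.0     # how hard the membrane pushes back (assumed)
BREAK_DEPTH_M = 0.03          # penetration at which the membrane tears (assumed)

def membrane_force(penetration_m: float, torn: bool) -> tuple[float, bool]:
    """Return (force to render on the haptic handle, updated torn flag)."""
    if torn or penetration_m <= 0.0:
        return 0.0, torn
    if penetration_m > BREAK_DEPTH_M:
        return 0.0, True                  # membrane gives way: force vanishes
    return STIFFNESS_N_PER_M * penetration_m, False

# Simulated "push hard" trajectory: the probe sinks steadily into the gel.
torn = False
for depth in np.linspace(0.0, 0.05, 6):
    force, torn = membrane_force(depth, torn)
    print(f"depth {depth * 100:4.1f} cm -> force {force:5.1f} N, torn={torn}")
```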
The OceanOne robot’s anthropomorphic face is designed to reassure human divers underwater. Christie Hemm Klok
Khatib’s dream is to put more controllers like this one, attached to more robots like OceanOne, in the hands of many other scientists. He would submerge those robots at various points on the ocean floor to create a submarine fleet scattered around the world. The program would operate similarly to space observatories, where experts from many institutions can visit to take measurements with specialized sensors and return home with their data. “Imagine what you can do,” he says, “for the coral reefs, for plastic, for the environment, for the sea.”
Charmed though I am by such visions, I have an admission: I have a hard time picturing myself using robots in daily life (as much as I’d love a cheerfully beeping R2-D2-style helper). Perhaps it’s because my only relationship with a household robot, an automated dinner-plate-size vacuum, ended in disaster. A well-meaning houseguest turned on the robot before she left, unaware of the prep required to robot-proof the apartment. The bot ran over my cat’s food bowl and partially ingested some salmon pâté; by the time I arrived at home it had smeared the rest in a brown slug trail across my rug and floors. I was already unsure whether the device was saving me any cleaning time, and after the cat food fiasco, I retired the robot to a closet and never turned it on again.
And there are still issues to work out in the machines at Stanford, as impressive as they are. In one test I observed, TidyBot was supposed to put away a yellow LEGO brick but failed to find it when it was obscured behind a bed. During another demonstration in the center’s kitchen, the dishwashing robot, which had previously been working normally, glitched. After some investigation, the culprit turned out to be the unusually large number of people watching that day: the surroundings were so crowded that the machine could no longer detect where to place the dishes. The robot is trained through machine learning, so Cousins says one solution might be to give it more training with an audience present.
Near the end of my tour, a few doors down from the kitchen, in the Field Robotics Bay, a staff roboticist launches a small, cylindrical drone named the Firefly. It lifts off vertically with a sound like a hair dryer set to max. Unusually for a drone, it has only one spinning blade and relies on self-stabilizing systems to remain oriented upright. Cousins pokes the monocopter in the side, and the flying robot wavers and automatically rights itself. His next nudge, though, is a touch too hard. The drone tilts sideways, then shoots off and crunches into a wall.
Cousins pauses. “It should probably turn itself off if it goes horizontal,” he says. The staff roboticist who’s been operating the drone appears unfazed as he picks up the scattered pieces of plastic; such are the benefits of housing an experimental robot in a replaceable 3D-printed shell. The crash, though minor, is a reminder of two central truths: robotics is complicated, and, to robots, people are complications. We’ll have to wait to see whether humans are a problem that can ever be solved.

Clara Moskowitz and Ben Guarino