How LabOS AI-powered smart goggles could reduce human error in science

NEWS | 27 February 2026

Imagine standing at the laboratory bench, working on an experiment, when, as you finish one step, a display on the inside of your lab goggles tells you what to do next. A small camera in the frame watches your hands closely. If you reach for the wrong tube, the display flashes a warning. Before you can make the mistake, the system tells you how to get back on track.
Laboratory safety goggles have finally joined the ranks of smart devices. That’s the promise behind LabOS, an AI “operating system” for scientific laboratories built by the Stanford-Princeton AI Coscientist Team, a group led by Stanford University bioengineer Le Cong and Princeton University computer scientist Mengdi Wang, with founding partners that include NVIDIA. Powered by NVIDIA’s vision-language models to process visual data, the system is designed to provide AI with real-time knowledge of lab work so it can determine what causes experiments to fail or succeed and rapidly train new scientists to expert levels by guiding them through experimental protocols.
Walk into a wet lab, Cong says, and “it hasn’t changed much in the last 50 years.” This matters, he explains, because a large portion of the time, science is done “in the physical lab, in the physical world, not on computers.” As described in a recent preprint paper, LabOS aims to bridge this physical-digital divide.
The scientific community has long grappled with a problem that has been known for more than a decade as a “replication crisis.” In a 2016 Nature survey, Monya Baker, then an editor for the journal, reported that “more than 70% of researchers have tried and failed to reproduce another scientist’s experiments,” and more than half couldn’t reproduce their own work. Some of that failure rate is attributable to statistical malpractice or publication pressure. But one common cause receives less attention: humans doing repetitive lab work make mistakes. A reagent added at the wrong temperature, a step skipped under time pressure, a contaminated pipette tip—these are errors that can be too small to notice but are large enough to wreck an experiment.
The solution proposed by Wang and Cong’s team is an open-source platform and hardware kit that lets AI see what scientists see. Researchers in early pilot tests in Cong’s lab at Stanford and Wang’s at Princeton wear augmented reality/extended reality (AR/XR) glasses that stream video directly to the system. LabOS compares what it sees against the written protocol, offering guidance to the wearer while also gathering training data. The AI can talk the scientist through each step, reminding them to keep a surface sterile or flagging lapses in technique.
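The core loop described above, comparing each observed action against the written protocol and flagging deviations, can be sketched in a few lines. This is a purely illustrative toy, not LabOS's actual code: all names, the protocol steps, and the `detect_action` stand-in are hypothetical, and the real system runs vision-language models over live video from the AR glasses rather than reading labeled frames.

```python
# Illustrative protocol-compliance loop (hypothetical names throughout).
# In LabOS the "detect_action" step would be a vision-language model
# running on streamed AR-glasses video; here it just reads a label.

PROTOCOL = [
    "add 50 uL buffer to tube A",
    "incubate 10 min at 37 C",
    "transfer supernatant to tube B",
]

def detect_action(frame):
    """Stand-in for model inference that labels the current action."""
    return frame["action"]

def check_frame(frame, step_index):
    """Compare the observed action against the expected protocol step."""
    expected = PROTOCOL[step_index]
    observed = detect_action(frame)
    if observed == expected:
        return step_index + 1, f"OK: {expected}"  # advance to next step
    return step_index, f"WARNING: expected '{expected}', saw '{observed}'"

# Simulated stream: the scientist skips the incubation step.
frames = [
    {"action": "add 50 uL buffer to tube A"},
    {"action": "transfer supernatant to tube B"},  # out of order!
]

step = 0
messages = []
for frame in frames:
    step, msg = check_frame(frame, step)
    messages.append(msg)

print(messages[0])  # OK: add 50 uL buffer to tube A
print(messages[1])  # WARNING: expected 'incubate 10 min at 37 C', ...
```

Note that on a mismatch the sketch holds the protocol at the same step, so the wearer is warned before the experiment advances, mirroring the "flag the lapse, then get back on track" behavior the article describes.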
AI needs real-time knowledge of experiments to learn what works and what doesn’t, much in the same way that robots and self-driving cars have to gather real-world data to update their systems. “We can have 1,000 chatbots, 1,000 AI scientists trying to tell real scientists what to do,” Wang says, but if AI isn’t wired into the physical experiment, “we never have anything verifiable.”
When humans do lab work, learning is often slow: if an experiment fails, they try to work out what went wrong and begin again. But when AI watches an experiment and sees the outcome, it may be able to pinpoint the problematic steps more rapidly and design a new experiment. And because entire experiments are recorded, an AI can scrutinize the smallest details to determine what caused a failure.
This assistance extends beyond guiding humans; LabOS also employs a robotic arm to handle tedious tasks such as mixing. “It’s not like replacing people,” Cong says. “We need to help people.”
So far, the assistance is yielding results. In an experimental procedure that involved increasing the amount of a certain protein in cells, junior scientists with just one week of LabOS training obtained results that were virtually indistinguishable from those of expert scientists. “I couldn’t tell the difference as a professor,” Cong says. “The results from the experiment—they’re identical.”
“From a robotics and human-computer interaction perspective, this work highlights a promising direction,” says Kourosh Darvish, a scientist at the AI and Automation Lab at the University of Toronto’s Acceleration Consortium, who was not involved in LabOS development. Yet he notes the importance of developing standards to better evaluate such work. “As AI systems increasingly move from analytical tools toward active partners in experimentation, community-level standardization and validation will be critical.”
The AI Coscientist Team is already pushing this technology beyond the research bench. Recently the researchers introduced MedOS, adapting their AI-and-AR architecture to assist surgeons with anatomical mapping and tool alignment. Ultimately, Wang says, the broader ambition is to turn “every scientific research lab”—and soon, every clinic—“into an AI-perceivable and AI-operable environment,” creating a system that can train professionals faster, catch mistakes and improve human outcomes.

By Eric Sullivan and Deni Ellis Béchard