I Tested a Next-Gen AI Assistant. It Will Blow You Away

WIRED experimented with a new form of voice assistant that can browse the web and perform tasks online. Siri, Alexa, and other virtual helpers could soon be much more powerful.
Illustration: Jacqui VanLiew; Getty Images

The most famous virtual valets around today—Siri, Alexa, and Google Assistant—are a lot less impressive than the latest AI-powered chatbots like ChatGPT or Google Bard. When the fruits of the recent generative AI boom get properly integrated into those legacy assistant bots, they will surely get much more interesting.

To get a preview of what’s next, I took an experimental AI voice helper called vimGPT for a test run. When I asked it to “subscribe to WIRED,” it got to work with impressive skill, finding the correct web page and accessing the online form. If it had had access to my credit card details, I’m pretty sure it would have nailed it.

Although hardly an intelligence test for a human, buying something online on the open web is a lot more complicated and challenging than the tasks that Siri, Alexa, or the Google Assistant typically handle. (Setting reminders and getting sports results are so 2010.) It requires making sense of the request, accessing the web to find the correct site, then correctly interacting with the relevant page or forms. My helper correctly navigated to WIRED’s subscription page and even found the form there (presumably impressed by the prospect of receiving all WIRED’s entertaining and insightful journalism for only $1 a month) but fell at the final hurdle because it lacked a credit card. VimGPT uses Google’s open-source browser Chromium, which doesn’t store user information. My other experiments showed, however, that the agent is very adept at searching for funny cat videos and finding cheap flights.

VimGPT is not a product in development but an experimental open-source program built by Ishan Shah, a lone developer. Still, you can bet that Apple, Google, and others are running similar experiments with a view to upgrading Siri and other assistants. VimGPT is built on GPT-4V, the multimodal version of OpenAI’s famous language model. Because GPT-4V can analyze an image of a web page, the agent can determine what to click on or type more reliably than text-only software, which has to make sense of the web by untangling messy HTML. “A year from now, I would expect the experience of using a computer to look very different,” says Shah, who says he built vimGPT in only a few days. “Most apps will require less clicking and more chatting, with agents becoming an integral part of browsing the web.”
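To make that concrete, here is a rough sketch of the kind of screenshot-to-action loop such an agent might run. It is a minimal illustration, not Shah’s actual code: the use of Playwright to drive Chromium, the CLICK/TYPE prompt format, and the ten-step cap are all assumptions made for the example.

```python
# A minimal sketch of a vision-driven browsing agent, assuming Playwright to
# drive Chromium and OpenAI's vision-capable chat API to pick each action.
# Illustrative only; vimGPT's real prompts and parsing differ.
import base64

from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def choose_action(screenshot_png: bytes, goal: str) -> str:
    """Show the model a screenshot and ask for the next action as text."""
    image_b64 = base64.b64encode(screenshot_png).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model works for the sketch
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    f"Goal: {goal}\n"
                    "Reply with exactly one action: 'CLICK x y' or 'TYPE text'."
                )},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()


with sync_playwright() as p:
    page = p.chromium.launch(headless=False).new_page()
    page.goto("https://www.wired.com")
    for _ in range(10):  # cap the steps so a confused agent can't loop forever
        action = choose_action(page.screenshot(), "subscribe to WIRED")
        verb, *args = action.split()
        if verb == "CLICK" and len(args) >= 2:
            page.mouse.click(int(args[0]), int(args[1]))  # pixel coordinates
        elif verb == "TYPE":
            page.keyboard.type(" ".join(args))
```

In practice, reliably mapping the model’s answer back to on-screen coordinates is the hard part, which is why some agents overlay labeled shortcuts on clickable elements rather than trusting raw pixel positions.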

Shah is not the only person who believes that the next logical step after chatbots like ChatGPT is agents that use computers and roam the web. Ruslan Salakhutdinov, a professor at Carnegie Mellon University who was Apple’s director of AI research from 2016 to 2020, believes that Siri and other assistants are in line for an almighty AI upgrade. “The next evolution is going to be agents that can get useful tasks done,” Salakhutdinov says. Hooking Siri up to AI like that powering ChatGPT would be useful, he says, “but it will be so much more impactful if I ask Siri to do stuff, and it just goes and solves my problems for me.”

Salakhutdinov and his students have developed several simulated environments designed for testing and honing the skills of AI helpers that can get things done. They include a dummy ecommerce website, a mocked-up version of a Reddit-like message board, and a website of classified ads. This virtual testing ground for putting agents through their paces is called VisualWebArena.

Tales from this testing ground suggest that AI agents will soon be able to do impressive things that make digital life much easier. A model can, for instance, look at a photo of someone wearing a sweater, then hunt through ecommerce listings for similar garments below a certain price and add the cheapest to a person’s shopping cart. In another example, an agent informed that a person no longer wants to see posts from a particular user on a Reddit-like site can work out how to navigate the site’s settings to hide posts from the offending individual.

The catch is that there are also plenty of bloopers. In their experiments, the CMU team found that their AI agents could achieve a complex objective only about 16 percent of the time, while humans managed 88 percent. Failures are often mundane, like getting caught in an infinite browsing loop while trying to navigate a website. But they might sometimes look like misbehavior, for example if an agent were to accidentally add dozens of items to a user’s cart or friend an annoying user on a social site. Perhaps it’s a good thing that I can’t yet give vimGPT my payment information.

One reason the CMU environments are valuable is that AI agents can run amok inside them without causing any real damage. Gathering data on such incidents helps researchers understand how well agents are able to perform a given task, and how exactly they go wrong. Salakhutdinov says setting agents loose inside environments like VisualWebArena also makes it possible to have them actively learn from their successes and failures, in the same way that simulations provide training for game-playing machine-learning algorithms, honing them into champion-defeating superstars like Alphabet’s AlphaGo.
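As a schematic illustration of that idea, the toy below runs an agent through many episodes of a sandboxed task and logs each trajectory along with whether it succeeded. Everything in it, from the MockShop environment to the deliberately flaky agent, is a hypothetical stand-in, not VisualWebArena’s actual interface.

```python
# Toy sketch: run many episodes in a sandboxed environment, log every
# trajectory, and keep the success/failure outcome as a learning signal.
# MockShop and flaky_agent are hypothetical stand-ins, not the real
# VisualWebArena API.
import random
from dataclasses import dataclass, field


@dataclass
class Trajectory:
    actions: list = field(default_factory=list)
    succeeded: bool = False


class MockShop:
    """Stand-in for a dummy ecommerce site; the task: pick the cheapest item."""

    def __init__(self):
        self.prices = [random.randint(5, 50) for _ in range(5)]

    def add_to_cart(self, index: int) -> bool:
        return self.prices[index] == min(self.prices)


def flaky_agent(prices):
    # Guesses at random, so the logs contain both successes and failures.
    return random.randrange(len(prices))


def run_episode() -> Trajectory:
    env, traj = MockShop(), Trajectory()
    choice = flaky_agent(env.prices)
    traj.actions.append(("add_to_cart", choice))
    traj.succeeded = env.add_to_cart(choice)
    return traj


logs = [run_episode() for _ in range(1000)]
print(f"success rate: {sum(t.succeeded for t in logs) / len(logs):.1%}")
# Failed trajectories can be replayed to see where the agent went wrong, or
# fed back as training data so it improves with experience.
```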

Salakhutdinov says he has no inside knowledge of what Apple is up to right now but expects the company to be busy building agents. “All the big tech companies—Apple, Microsoft, Google—have divisions basically working in that space,” he says.