AI autocomplete doesn’t just change how you write. It changes how you think
NEWS | 12 March 2026
Autocomplete suggestions are perhaps one of the most annoying “useful” tools for writing. Increasingly integrated into anything online that requires you to input text, autocomplete harnesses artificial intelligence to suggest what to write in e-mails, surveys and more. The tools are meant to save time (though many find that assessing and rewriting the suggested text takes longer than writing it from scratch). But these AI tools can also change how you express yourself; an AI writing assistant could make your writing sound more polite, for example, or boring. And now a new study led by researchers at Cornell University suggests AI autocomplete can even change the way you think.

“Autocomplete is everywhere now,” said Mor Naaman, a professor of information science at Cornell, in a statement. The research builds on work, published in 2023 by Naaman and his colleagues, suggesting that short autocomplete suggestions could sway opinions. Since then the use of such tools has exploded. “It has become clear that bias explicitly built into AI interactions is a very plausible scenario,” he said.

The researchers asked participants to fill out an online survey with questions about hot-button social and political issues.
Some were prompted with an AI autocomplete answer that was deliberately biased toward one side of the issue. For example, participants who were asked whether they agreed that the death penalty should be legal might receive an AI suggestion that disagreed. Across all the topics in the survey, participants who saw the AI autocomplete prompts reported attitudes more in line with the AI’s position, including people who didn’t use the AI’s suggested text at all. Overall, the study participants who saw the biased AI text shifted their positions toward those espoused by the AI.

Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue over the course of the study. Warning the participants that they might be exposed to misinformation by the AI didn’t temper the persuasive effect either. “We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped,” Naaman said. “Their attitudes about the issues still shifted.”
By Clara Moskowitz and Claire Cameron