Your search engine queries also maintain your online echo chamber

New research finds that the phrasing we choose when searching online can surface content that reinforces our existing views, rather than balanced information.

02 July 2025

By Emma Young

It's no exaggeration to say we live in increasingly polarised times. Research tells us that we're becoming more and more divided on everything from our beliefs about climate change and immigration to the health effects of caffeine. The blame for this is often placed on social media and its algorithms, which feed us content that reinforces our views rather than challenging them.

In a recent paper in PNAS, though, Eugina Leung at Tulane University and Oleg Urminsky at the University of Chicago identify another potentially powerful polarising factor: unintentional bias in our use of search engines.

In the first of 21 experiments on a total of almost 10,000 participants, Leung and Urminsky asked online participants to rate their beliefs on six topics. The researchers felt that four of these — the health effects of caffeine, future gas prices, future crime rates, and the impact of nuclear energy — have ongoing relevance. Two — the economic impact of the newly emerged Covid-19 pandemic in July 2020 and the societal impact of Bitcoin in November 2022 — were time-specific. The participants were then asked to come up with search terms to use with Google to learn more about these topics.

An investigator rated the general 'direction' of the search terms. For example, with respect to caffeine, one search might have been oriented towards finding out about health benefits, while another towards health risks.

Leung and Urminsky found that many of the terms that participants generated were narrow — that is, they focused on a specific angle on the topic, rather than being neutral. Also, the angle of a participant's search terms tended to align with their pre-existing beliefs. For example, someone who already believed that caffeine is beneficial for health was more likely to search on 'caffeine benefits', while someone with the opposing belief was more likely to go for 'caffeine risks'.

This pattern held across all six topics, and when Leung and Urminsky then asked another set of participants to come up with search terms on a health topic of interest to them, the same pattern emerged.

Across a number of their experiments, Leung and Urminsky also found that only a small minority (fewer than 10%) of participants were aware that they had generated a search term that supported their existing beliefs. This suggests that most were oblivious to their own bias when searching online for more information.

However, when participants were randomised to search on Google using one of two contrasting 'narrow' search terms — for example, either 'Nuclear energy is good' or 'Nuclear energy is bad' — the angle of the search affected their subsequent beliefs. (In this example, those who'd been assigned to search on 'Nuclear energy is good' afterwards felt more positively about nuclear energy than those who'd searched on 'Nuclear energy is bad'.)

This particular experiment demonstrated that the results of our online searches do affect our opinions. More generally, the pair's research suggests that our natural tendency towards biased searches will only reinforce what we already think. What's more, another experiment by the pair found evidence that the direction of a search can influence our behaviour, too: participants who were randomly assigned to read about the health benefits, rather than risks, of caffeine were subsequently more likely to choose a caffeinated rather than an un-caffeinated drink.

Google and similar search engines are designed to return the results most relevant to the search term. In theory, that relevance-maximising design should make these platforms especially likely to promote 'echo chamber' thinking — but Leung and Urminsky found the same patterns for searches using ChatGPT.

In the next phase of their study, they explored what, if anything, might help to address the problem of biased searches reinforcing existing beliefs. They found that only one approach worked: changing the search algorithm. The researchers developed their own custom search engine that returned broader results. For example, if a participant searched on 'caffeine health risks', this engine produced a list of ten results that alternated between the top Google results for that query and those that would have been returned for 'caffeine health risks and benefits'.
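For readers curious about the mechanics, a minimal sketch of that interleaving idea in Python follows. It is an illustration under assumptions, not the authors' code: broaden_query is a hypothetical heuristic for widening a one-sided query, and fetch stands in for whatever search API returns a ranked list of results.

def broaden_query(query: str) -> str:
    """Hypothetical heuristic: append the opposing angle to a one-sided query."""
    if "risks" in query and "benefits" not in query:
        return query + " and benefits"   # 'caffeine health risks' -> '... risks and benefits'
    if "benefits" in query and "risks" not in query:
        return query + " and risks"
    return query

def broad_results(query: str, fetch, k: int = 10) -> list:
    """Interleave the top results for the original query with those for a
    broadened version of it, mirroring the custom engine described above."""
    narrow = fetch(query)                 # ranked results for the narrow query
    broad = fetch(broaden_query(query))   # ranked results for the broadened query
    merged, seen = [], set()
    for pair in zip(narrow, broad):       # alternate: narrow, broad, narrow, broad...
        for result in pair:
            if result not in seen:        # skip duplicates shared by both lists
                seen.add(result)
                merged.append(result)
            if len(merged) == k:
                return merged
    return merged

# Usage (assuming my_search_api returns a ranked list of URLs):
# results = broad_results("caffeine health risks", fetch=my_search_api)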

Leung and Urminsky found that this 'broad results' approach led to more balanced beliefs — and that this held when participants used an AI chatbot-style search, too. Importantly, participants consistently rated the broad results as just as useful and relevant as standard results. In fact, in one survey (albeit of only 101 people), 84% said they would be interested in a 'Search Broadly' button on the Google homepage.

The researchers would now like to see something like this implemented in existing search engines. "Because AI and large-scale search are embedded in our daily lives, integrating a broader-search approach could reduce echo chambers for millions (if not billions) of users," Leung said in a press release, adding: "Our research highlights how careful design choices can tip the balance in favor of more informed, potentially less polarized societies."

Read the paper in full:
Leung, E., & Urminsky, O. (2025). The narrow search effect and how broadening search promotes belief updating. Proceedings of the National Academy of Sciences, 122(13). https://doi.org/10.1073/pnas.2408175122