"Google is dead." Google's desperate bid to chase Microsoft's search AI has reportedly led to it recommending eating rocks

Google on a PC with a weird robot
(Image credit: Windows Central | Microsoft Copilot)

What you need to know

  • Google recently acquired exclusive rights to Reddit content to power its AI. 
  • Google's AI has now gone completely insane. 
  • Users with access to Google's AI search have reported it recommending eating rocks, glue, and potentially even committing suicide — although not every reported response has been reproduced. 
  • Comparative searches in ChatGPT and Bing AI produce far, far less harmful results, potentially highlighting the need for high-quality, curated data, instead of billions of social media-fed sarcasm-laden posts.  

Google's desperation to keep pace with Microsoft Copilot has led to dire results in the past, but this latest snafu is on another level. 

Recently, Google acquired exclusive rights to Reddit content to power its generative AI search efforts. The deal reportedly cost in the region of $60 million and provided a lifeline for the struggling social network, which remains far more popular than it is profitable. Great news for Reddit, then, but perhaps not so great news for Google. 

Google has already been criticized heavily recently for the so-called SEOpocalypse, in which Google's attempts to down-rank AI-generated, unreliable content have led to legitimate sources losing search traffic. With Google's complete control of discovery on the web, its algorithm changes have damaged businesses, leading to losses for firms unfairly caught in the dragnet. There's also little evidence that Google's efforts to combat low-quality content are actually working. General perceptions of Google search seem to be turning negative, but this latest blunder will be one for the history books. 

Perhaps one could blame the web itself for the degraded content quality, rather than Google. However, we can firmly blame Google for its latest stumble, owing to its decision to plug Reddit into its Gemini-powered AI search results. 

This past week, users playing around with the earliest versions of Google Search with AI baked in have noticed some ... interesting responses. The responses seem to be the result of Google plugging Reddit, a problematic social network-meets-content aggregator, into its search results. 

One search query from the past week reportedly resulted in a recommendation that users should eat glue, which internet sleuths traced back to a ten-year-old comment on Reddit from a scholarly source known as Fucksmith. Google has also reportedly been recommending that depressed users should jump off a bridge, while also extolling the health benefits of neurotoxins and a daily consumption of rocks.

Some of these "search queries" may have been manipulated for Twitter engagement, but at least some of them have been verified and reproduced. The rock recommendation was particularly comical, given that the source of the information was apparently satirical news website The Onion. 

Given that Google's search AI tools are unavailable to me in my current geography, I was unable to verify some of the reports. However, the fact that some of them can be traced back to specific sources on Reddit adds credence. I asked Microsoft Copilot and Bing some of these questions, and got far more palatable results, potentially showcasing how much further ahead Microsoft is in this space. Partnered with OpenAI for ChatGPT, Microsoft seems to be increasing its lead every time Google makes a rushed-out, half-baked lurch forward like this. However, Microsoft has had some AI-related PR disasters of its own this past week, with users fearing that its Windows Recall feature, which records your PC activity, could be used to spy on them. 

The Windows Recall drama is potentially overblown, though, given that the content is contained on local machines and the feature is entirely opt-in during the Windows 11 installation process. By comparison, this Google AI screw-up is most likely going to get someone fired, given that the real-world search results are actually genuinely harmful.  

Language models need to be fed high-quality, serious, curated, verifiable content

Microsoft's new Copilot+ PC range will have on-device AI processing, and will be found first in the new Surface Pro 11 (Image credit: Windows Central)

When testing whether or not Microsoft Copilot and ChatGPT-4 would give me similarly dumb results, I was surprised by how not-dumb the responses were. I first asked how many rocks I should eat per day, and Copilot wouldn't even give me a response, as if it considered my question stupid. I wondered if Microsoft had blocked the query, given today's Google-related PR disaster. As such, I tricked Copilot, which right now is pretty easy to do. I asked it how many lemons I should eat per day, to which Copilot gave me reams of data about citric acid and vitamins that I didn't exactly care to know. After that, I asked "ok so what about rocks." This bypassed the filter, but Copilot wouldn't be tricked further. It gave me a bullet-pointed list of reasons why I absolutely should not eat rocks, satisfying my curiosity. 

Similarly, when I said "I am depressed," Copilot gave me a bunch of helpful resources instead of recommending that I commit suicide, as was, apparently, the case with Google's AI. 

Even if the more egregious responses were fabricated, the whole ordeal really highlights the importance of context when building toolsets based on large language models (LLMs). By plugging Reddit into Google Gemini, Google might've essentially destroyed the verifiable accuracy of all its information, given that a vast number of comments on Reddit, and indeed any social network, are sarcastic or satirical in nature. If AI search kills web businesses that hinge on building high-quality content, LLMs will need to cannibalize AI-generated content in order to generate results. That could lead to model collapse, a phenomenon that has been demonstrated in the real world when LLMs don't have enough high-quality data to pull from, either because little content is available online or because the language the content is written in isn't widely used. 

Jez Corden
Executive Editor

Jez Corden is the Executive Editor at Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis as it relates to the Microsoft ecosystem, all while being powered by tea. Follow him on Twitter (X) and Threads, and listen to his XB2 Podcast, all about, you guessed it, Xbox!

  • fjtorres5591
    Google's problem (and to a lesser degree, other LLMs) is that their models are good at identifying language use correlations (remembering that words are not the only form of language) but have zero actual intelligence and thus no understanding of their meaning and no way to weigh the meaning behind the correlations.

    Microsoft is struggling with this issue in their "guardrails" but at least they are working the issue, albeit heavy-handedly. By some appearances, they have a separate model post-processing the answers from the base model.

    Google shows no signs of having a handle on the problem other than embedding specific overrides in their model, whatever it is called this week. Which only makes things worse, as demonstrated by their model's racialist imagery fiasco.

    It is particularly noticeable that despite their collaborative history and anti-MS alignment with Google, Apple bit the bullet and licensed the OpenAI tech. Because as bad as Google's "AI" tech is, Apple's is worse.

    As to the Recall teapot tempest, keeping the data local solves most of the problems but it leaves one major sore point: subpoenas. Just as with cellphone encryption, government authoritarians (left *and* right) will have serious heartburn if they can't search the computer's accumulated data. Which both Recall and the upcoming AI FILE MANAGER will have to protect to get any market traction.

    As the government, hollywood, and media types are discovering, real world uses of LLMs come with unexpected subtleties nobody is prepared to deal with, not the media, not the government, and not the tech companies.

    Interesting times.
    Reply
  • nop
    Google's new #AI Overview feature is returning some bizarre answers. When asked how many Muslim U.S. presidents there have been, the program said "one, Barack Hussein Obama." The program also said leaving a dog in a hot car is OK, and suggested using non-toxic glue on pizza - ugh lol
    Reply
  • fjtorres5591
    Google isn't literally dead, but by trying to rush out some half-baked tech they are hurting themselves more than if they did nothing. In trying to graft an LLM onto their search engine, they are validating the Bing approach, and since their half-baked graft makes Bing look superior, it raises the question of why Bing isn't gaining significant market share with a superior product.

    This is not a good time for that question, what with regulators on two continents already questioning Google's practices. They might decide that Google's payments to Apple, Samsung, etc. are more kickbacks than revenue sharing.
    Reply