"Elon was not involved at any point": xAI's Chief Engineer blames a former OpenAI employee after Grok temporarily censored results, implying Musk and Trump "spread misinformation."

Elon Musk and President Donald Trump during an executive order signing in the Oval Office at the White House on February 11, 2025. (Image credit: Getty Images | Andrew Harnik, Staff)

Last week, Elon Musk unveiled xAI's long-anticipated Grok 3, touting it as "the smartest AI ever." However, it seemingly failed to meet expectations, with AI critic and University of Pennsylvania professor Ethan Mollick calling it a "carbon copy" of previous demos.

Mollick noted that OpenAI CEO Sam Altman "can breathe easy for now," as Grok 3's performance has yet to surpass the ChatGPT maker's models: "No major leap forward here."

More recently, new details about Grok 3's behavior have emerged. xAI reportedly instructed Grok not to use sources indicating that Elon Musk and President Trump are responsible for spreading misinformation.

According to xAI’s head of engineering, Igor Babuschkin:

"You are over-indexing on an employee pushing a change to the prompt that they thought would help without asking anyone at the company for confirmation.

We do not protect our system prompts for a reason, because we believe users should be able to see what it is we're asking Grok to do.

Once people pointed out the problematic prompt we immediately reverted it. Elon was not involved at any point. If you ask me, the system is working as it should and I'm glad we're keeping the prompts open."

Is Grok struggling to seek the truth?

As you may know, Grok's system prompt is visible to the public. Elon Musk often touts Grok as a “maximally truth-seeking” AI, helping users understand the universe better.

Igor Babuschkin made the admission after users on X highlighted the issue, showing that Grok had been told to ignore all sources mentioning that Elon Musk and President Trump spread misinformation.

"Constantly calling Sam a swindler but then making sure your own AI does under no circumstances calls you a swindler and explicitly telling it to absolutely disregard sources that do so is so fucking funny I cant," a user on X indicated. They further indicated that the instruction had been fed into Grok's system prompts.

This isn't the first time Musk's "truth-seeking" AI has been found sharing erroneous or false responses to queries. Last week, Grok was spotted saying that President Trump and Elon Musk deserve the death penalty. Babuschkin called it a "really terrible and bad failure" and said a fix was rolling out.

xAI's Grok isn't the only AI-powered chatbot facing critical challenges when generating responses. In our testing, Microsoft Copilot flatly refuses to provide basic election data, saying it's probably not the best source for something so important.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
