Elon Musk's Grok AI spreads election misinformation despite claims it will be "the world's most powerful AI" and its secret training on your X data by default

Elon Musk (Image credit: iamnot_elon on X)

What you need to know

  • Grok is reportedly spreading false information about the forthcoming elections.
  • X reportedly gave the issue little more than a "shoulder shrug" after five secretaries of state flagged it, by which point the false information had already reached millions across social media platforms.
  • Minnesota Secretary of State Steve Simon urges voters to reach out to their state or local election officials to learn more about the election and voting process.

As we head toward the 2024 presidential election, voters need access to credible information. That isn't always the case. With generative AI now widely available, bad actors are leveraging the technology to spread misinformation about the election, ultimately affecting voters' decision-making.

In a letter addressed to X owner Elon Musk, five secretaries of state urged the billionaire to fix the social media platform's dedicated AI chatbot, Grok, after it was spotted spreading misinformation about the forthcoming elections (via Axios). Musk claims Grok will be "the world's most powerful AI by every metric by December this year," and the tool is currently being trained on what he describes as the world's most powerful AI training cluster.

Grok generated and spread "false information on ballot deadlines" following President Biden's withdrawal from the race for the White House. It indicated that Vice President Kamala Harris had missed the ballot deadline in nine states: Alabama, Michigan, Minnesota, Indiana, New Mexico, Ohio, Texas, Pennsylvania, and Washington.

For context, access to the AI chatbot is limited to X's Premium and Premium+ subscribers. Even so, the false information was widely shared across multiple social media platforms, reaching millions. The letter indicates that the chatbot continued to spread the false claims for 10 days before the issue was corrected, and characterizes X's response as a mere "shoulder shrug."

In a statement, Minnesota Secretary of State Steve Simon indicated:

"In this presidential election year, it is critically important that voters get accurate information on how to exercise their right to vote. Voters should reach out to their state or local election officials to find out how, when, and where they can vote."

As AI tools become more sophisticated, it becomes harder to determine what's real.

What are Microsoft and OpenAI doing to prevent the spread of AI-generated misinformation?

Microsoft and OpenAI logos (Image credit: Microsoft, OpenAI | Microsoft Image Creator)

In case you missed it, Elon Musk filed a new lawsuit against Sam Altman and OpenAI, alleging a stark betrayal of the firm's founding mission and involvement in racketeering. Musk claims he was lured into investing in OpenAI with a "fake humanitarian mission."

Elsewhere, OpenAI has outlined measures to help users identify deepfakes and AI-generated content. Right off the bat, the ChatGPT maker plans to use tamper-resistant watermarking to make AI-generated content easily identifiable.

RELATED: Elon Musk secretly trains Grok with your X data by default

Earlier this year, Microsoft CEO Satya Nadella indicated there's "enough technology" to protect the US presidential election from AI deepfakes and misinformation, including watermarking, deepfake detection, and content IDs.

Microsoft Copilot has previously been spotted generating false information about elections, with researchers claiming the issue is systemic. However, this doesn't diminish Microsoft's efforts to empower voters with authoritative and factual election news on Bing ahead of the poll.

Microsoft President Brad Smith recently launched a new website featuring a quiz game dubbed Real or Not, which helps users sharpen their AI-detection skills.


Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with plenty of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
