Unverified election claims from Microsoft's AI chatbot ignite debate over its ability to preserve democracy

Windows 11 Copilot
(Image credit: Windows Central)

What you need to know

  • Microsoft's AI-powered chatbot, Copilot, reportedly generates inaccurate information regarding the forthcoming US elections.
  • In November, Microsoft laid out elaborate plans to protect the election process from AI deepfakes and vouched for Bing News as a credible source of accurate information.
  • Researchers believe that the issue is systemic, as similar occurrences were spotted when using the chatbot to learn more about elections in Germany and Switzerland.

The wide availability of the internet across most parts of the world lets users access information instantly, driving the shift from print to digital media. Now, the emergence of generative AI has completely redefined how people scour the internet for information. You can use chatbots like Microsoft Copilot or OpenAI's ChatGPT to generate detailed, well-curated answers to prompts.

While this is quite impressive, several issues still need to be addressed. Over the past few months, an alarming number of concerned users have reported that ChatGPT is getting dumber, not to mention Copilot's "hallucination episodes" shortly after its launch.

According to a report by WIRED, the issue persists for Copilot: it is responding to election-related questions with outdated, misinformed, and outright wrong answers. With the election year edging closer, it's paramount that voters are equipped with accurate information to help them make informed decisions.

Why is Microsoft's Copilot misinforming voters?

Bing AI image of a robot stopping a person from using the computer

(Image credit: Windows Central)

Microsoft's AI-powered chatbot, Copilot (formerly Bing Chat), is gaining a lot of traction among users. At the beginning of this year, Bing surpassed 100 million daily active users, and Microsoft quickly attributed some of that success to the chatbot. While several reports have suggested that its user base has declined significantly since launch, Microsoft insists that this couldn't be further from the truth and that its numbers are growing steadily.

Per WIRED's report, Copilot provided incorrect information in several instances. In one, when asked about electoral candidates, the chatbot listed GOP candidates who had already pulled out of the race. In another, when asked about polling stations in the US, it linked to an article about President Vladimir Putin seeking reelection in Russia next year.


According to research by AI Forensics and AlgorithmWatch seen by WIRED, Copilot's tendency to provide incorrect information about US elections and the political landscape is systemic. The researchers note this isn't the first time Microsoft's chatbot has found itself in a similar situation: last year, it was spotted providing inaccurate information about elections in Germany and Switzerland.

While speaking to WIRED, Natalie Kerby, a researcher at AI Forensics, shared the following sentiments on the issue:

"Sometimes really simple questions about when an election is happening or who the candidates are just aren't answered, and so it makes it pretty ineffective as a tool to gain information. We looked at this over time, and it's consistent in its inconsistency."

In November, Microsoft laid out several elaborate plans to protect election processes from AI deepfakes by empowering voters with 'authoritative' and factual election news on Bing. These include a "Content Credentials as a Service" tool that will help political campaigns protect their content from being manipulated to spread misinformation.

Do you think AI chatbots like Copilot are a reliable source of information? Share your thoughts with us in the comments.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with extensive experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.

  • naddy69
    This is not surprising. Today's "AI" is basically a sophisticated random number generator, but using words instead of numbers. It gathers words from many sources and throws them all together. It has no clue about what is real, what is current, what is outdated and what is BS.

    That some people are using a stupid chatbot to research ANYTHING - and then actually believe whatever word salad it serves up - suggests that they are about as bright as the chatbot. You might as well use a Magic 8 Ball to answer your questions. Both should be used only as entertainment. Not research.

    It is not the job of "AI" to "preserve democracy". That is up to people. Not half-baked computer software.