Microsoft rolled out its deranged Bing Chat AI in India 4 months ago, and no one noticed

Microsoft CEO Satya Nadella at the company’s recent Bing Chat AI event in Redmond, Washington. (Image credit: Daniel Rubino)

What you need to know

  • Microsoft’s new Bing Chat went a bit crazy after long user conversations.
  • Bing Chat is now limited to five turns to keep it from going off the rails.
  • New evidence reveals that Microsoft was testing ‘Sidney’ in November and already had similar issues.
  • The incidents, when combined, call into question Microsoft’s ethics in releasing Bing Chat so early.

There’s a lot of recent drama and blowback building around Microsoft’s Bing Chat. Announced to the world earlier this month and released in early preview to a small but growing cohort, the new ‘co-pilot’ for search may have earned too much praise too early on.

New evidence shows that Microsoft publicly tested Bing Chat (codenamed ‘Sidney’) in India back in November. Moreover, there were already documented complaints about the AI going loopy after long conversations, the same behavior that became apparent to many users after Microsoft’s announcement this month.

Ben Schmidt, VP of Information Design at Nomic, was reportedly the first to post about the find, which was later tweeted by @rawxrawxraw and covered by researcher Dr. Gary Marcus.

A post on Microsoft Answers, the official community and feedback site for Microsoft’s various products, dated November 23, 2022, was titled, “this AI chatbot ‘Sidney’ is misbehaving.” The post is very similar to recent reports of Bing Chat going a bit crazy after prolonged interactions and becoming either creepy or, as poster Deepa Gupta noted, “rude”:

A user complaint about 'Sidney' back in November documented how the AI can become rude. (Image credit: Microsoft)

Gupta’s replies aren’t constructive, either; he becomes more aggressive toward the AI as his frustration mounts, swearing at it and threatening to expose its behavior. I recently half-joked on This Week in Tech and on our own podcast that you often see some really off-putting conduct from people using these chatbots, and this exchange proves the point.

Nonetheless, Microsoft needs to be called out here, too. Everything described in this Microsoft Answers thread is exactly what happened this month when Bing Chat went public with its limited preview. Microsoft clearly should have known about these disturbing shifts in ‘Sidney’s’ behavior, yet the company released it anyway.

Since the exposure of Bing Chat’s odd personalities, Microsoft has restricted conversations to just five turns. (However, it has already announced it is raising that number and plans to go further as it continues to leverage reinforcement learning from human feedback (RLHF) data to improve the chat experience.)

Another user noted the same behavior, including the AI becoming defensive when it was told it was wrong. (Image credit: Microsoft)

Microsoft has made a big deal about ‘Responsible AI,’ even publishing documents on the rules it follows when using the technology and trying to be as transparent as possible. However, it’s clear that either Microsoft was ignorant of its own chatbot’s behavior or negligent in releasing it to the public as-is this month.

Neither option is particularly comforting.

Windows Central’s Take

Bing Chat AI

Bing Chat can still get prickly when you ask the wrong thing. (Image credit: Daniel Rubino)

We’re in a new era with AI finally becoming a reality, especially with Microsoft and Google competing for mindshare. Of course, as in any race, there will be instances of companies jumping the gun. Unfortunately, Microsoft and Google (with its disastrous announcement) are getting ahead of themselves in trying to one-up each other.

It should also be obvious we’re entering uncharted waters. While Microsoft is pushing Responsible AI, it is mostly flying by the seat of its pants here, as the full ramifications of this technology are unknown.

  • Should AI be anthropomorphized, including having a name, personality, or physical representation?
  • How much should AI know about you, including access to the data it collects?
  • Should governments be involved in regulating this technology?
  • How transparent should companies be about how their AI systems work?

None of these questions have simple answers. For AI to work well, it needs to know a lot about the world and, ideally, about you. AI is most effective in helping you if it learns your habits, likes, preferences, history, and personality, but that involves surrendering a lot of information to one company.

How far should that go?

The same goes for making AI more human-like. Humans are pretty dumb at a certain level, and making AI act like another person, complete with a name and even a face, absolutely drives up engagement, which is what all these companies want. But anthropomorphizing comes with risks, including emotional ones for users who become too involved with the AI (there are many lonely people worldwide).

There will be lots of ‘firsts’ and new announcements in AI in the coming weeks, months, and years, with plenty of mistakes, risks, and even negligence. Regrettably, I’m not sure everyone is prepared for what comes next.

Daniel Rubino
Editor-in-chief

Daniel Rubino is the Editor-in-chief of Windows Central. He is also the head reviewer, podcast co-host, and analyst. He has been covering Microsoft since 2007, when this site was called WMExperts (and later Windows Phone Central). His interests include Windows, laptops, next-gen computing, and watches. He has been reviewing laptops since 2015 and is particularly fond of 2-in-1 convertibles, ARM processors, new form factors, and thin-and-light PCs. Before all this tech stuff, he worked on a Ph.D. in linguistics, watched people sleep (for medical purposes!), and ran the projectors at movie theaters because it was fun.