Bing Dall-E 3 image creation was great for a few days, but now Microsoft has predictably lobotomized it 🥴
Fearing controversy, Microsoft ramps up filtering on Bing's image creator tool.
What you need to know
- Bing is Microsoft's scarcely used search engine, baked into Windows and available at Bing.com.
- Microsoft has integrated some of OpenAI's generative systems into Bing in an attempt to court users away from Google.
- While it hasn't helped Bing's search volume, it has generated some additional interest in the platform.
- Recently, Bing's AI image creator tool got a big upgrade, moving to the Dall-E 3 model.
- The powerful tool can create realistic images from simple prompts, but after some controversies, Microsoft has baked in heaps of censorship. Microsoft's own "random" image generator button even censors itself.
Bing has gotten a lot more useful these days, and not necessarily for search.
Microsoft has made no secret of the fact that Bing trails Google in search volume, to the tune of roughly 3% global market share. Despite being baked into Windows, users seek the greener pastures of Google, which indisputably produces more accurate, up-to-date results in most scenarios. Bing is fine for basic search queries, however, and remains a solid option if for no other reason than its generous Microsoft Rewards points program, which offers vouchers in exchange for using Bing. Generative AI has also given Bing a bit of a boost recently.
Microsoft signed a huge partnership with OpenAI to bake ChatGPT conversational language tools and Dall-E image creation systems right into the search engine. Dall-E is also coming to Microsoft Paint in the future, and ChatGPT-powered assistance has emerged directly in Windows 11 with Windows Copilot.
Read more: Why Microsoft won't be the company that mainstreams AI
Bing's Image Creator got a huge boost in power recently, thanks to the new Dall-E 3 model. The quality of the generated images is dramatically better than that of previous versions, although the upgrade has arrived with some controversy.
Disney was recently approached for comment after Yahoo! ran a story on how Bing was able to generate images of "Mickey Mouse causing 9/11." Indeed, the first few days of Dall-E 3 on Bing played out in a way that has become typical for this kind of tech. Microsoft is no stranger to this type of controversy either; the firm previously landed in hot water after users manipulated an earlier chatbot into spouting racist content.
Guardrails are important for this type of tech, which has the potential to generate not just offensive images, but also defamatory, misleading, or even illegal material. However, some users think that Microsoft may have gone just a little bit too far.
Bing censors itself
While writing this article (wholly by myself and without ChatGPT, tyvm), I sought to generate a banner with the prompt "man breaks server rack with a sledgehammer," but Bing decided that such an image was in violation of its policies. Last week, I was able to generate Halloween images of popular copyrighted characters in violent zombie apocalypse scenarios. You could argue both of these prompts have some violent context that Microsoft would prefer to do without, but users are finding that even innocuous prompts are being censored.
Bing Image Creator has a "Surprise me" randomizer button, which generates an image from a prompt of its own choosing. However, the tool is now censoring even its own creations. I was able to reproduce the situation myself quite easily, with roughly 30% of random generations getting blocked.
I clicked "Surprise me" and this is what i gotten. Too bad. from r/OpenAI
Another user was locked out after requesting "a cat with a cowboy hat and boots," which Bing now considers to be offensive, for some reason. Users have reported being banned for requesting ridiculous, albeit safe-for-work, image manipulations of celebrities, such as "dolly parton performing at a goth sewer rave."
As of writing, Bing is giving me a "Thank you for your patience. The team is working hard to fix the problem. Please try again later" message, suggesting that the service is either overloaded or being tweaked further.
Balancing fun, function, and filters
One of the biggest challenges Microsoft will face with its AI tools is filtration. It's something Microsoft will have to nail if it wants to be one of the companies that brings AI to the mainstream.
Right now, it's arguable that Bing and OpenAI have gone too far with censorship when truly innocuous prompts are blocked. Last week, I was able to generate a range of cartoony zombie apocalypse fan art, but this week, that's too "controversial" for Bing, resulting in blocked prompts. If you get too many warnings, you can even be banned from the service, which seems silly in and of itself when the guidelines are fairly opaque and vague.
If Bing, and Windows Copilot by extension, can only generate sanitized results, it defeats the point of the toolkit. Human society and life aren't always "brand safe," and Microsoft's squeamish attitude toward even the vaguest hints of controversy will undermine its efforts to mainstream this sort of technology. You can't revise history and maintain accuracy at the same time, sadly. It'll be interesting to see how Microsoft and its competitors seek to balance fun and functionality with filtration, and how potential bad actors will see opportunities in jailbroken versions of this sort of tech.
You can try Bing Image Creator yourself, right here.
Jez Corden is the Executive Editor at Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis as it relates to the Microsoft ecosystem, all while being powered by tea. Follow him on Twitter (X) and Threads, and listen to his XB2 Podcast, all about, you guessed it, Xbox!
fjtorres5591: This isn't new.
I tried the earlier version and it refused all sorts of variations of "very tall young blonde next to a normal height man". After three attempts it blocked me.
They might have fixed that but I don't care.
None of the image generators will be of any productive use until they add reproducibility, at a minimum, for controlled iteration.
What's the value of prompting a scene that gets one thing right but another horribly wrong if trying to fix what's wrong produces something totally different and likely worse?
It's just a toy, a cute tech demo.
At least the text tools are somewhat useful.