As AI-made photos make headlines for all the wrong reasons, Google releases image generation for Bard to compete with Microsoft Copilot

Google Bard on Android
(Image credit: Future)

What you need to know

  • Google just released image generation for its AI tool, Bard.
  • Bard competes with Microsoft Copilot, which already had an image creation tool.
  • AI image generation has been in the headlines recently because AI was used to make fake sexually explicit images of Taylor Swift that were then shared by many on X (formerly Twitter).

Microsoft Copilot has a new competitor in the image creation game. Google just announced that Bard can now generate images. Powered by Google's Imagen 2 model, the feature lets you type a description and have AI create a matching image. The functionality is similar to DALL-E 3, but Google's new option is free.

Google also announced other upgrades to Bard and the expansion of Gemini Pro to more regions. In total, Bard is available in 40 languages across 230 countries and territories. Our colleagues over at Android Central covered today's announcement from Google, so I'll point you in that direction for general news and a breakdown of all of today's updates. I'll focus on how Bard competes with Microsoft's tools and the timing of Google's announcement.

Google Bard vs Microsoft Copilot

Google Bard announcement blog

Google Bard competes even more with Microsoft's Copilot now that Google's tool supports image generation. (Image credit: Future)

When Microsoft announced Bing Chat (now called Copilot), it caused a stir over at Google. While Google dominates the search game, the company didn't want to lose out on the AI race. Thus, Google allegedly rushed Bard, announcing the product in February 2023. A $100 billion blunder later, Google was on its way to having its own AI tool for the masses.

At the time, it seemed like AI would take off, and in many ways it has, but not in the way many expected back then. Bing's market share was stagnant last year, despite Microsoft investing billions into the search engine. Microsoft has since shifted its AI tools away from the Bing brand, favoring "Copilot" instead.

AI is still a major part of Microsoft's plans, but it seems that AI integration into Office, Edge, and other Microsoft services will take priority. Google may run into similar roadblocks.

AI-enhanced search engines will still be around, but it seems that AI creation will see more attention from tech giants like Google and Microsoft. We'll have to see how Google's image generation stacks up against Microsoft's in real-world usage.

A dangerous time for AI

Google Bard on a smartphone

Google Bard can now generate images, and Google has put guardrails in place to restrict sexually explicit content. (Image credit: Nicholas Sutrich / Android Central)

Unfortunately, it's difficult to talk about AI-generated images these days without mentioning the fake images of Taylor Swift made with artificial intelligence tools. While those tools aren't related to Google directly, Google released image generation for Bard in the immediate aftermath of one of the world's most famous celebrities becoming the victim of AI image abuse.

To catch you up, several sexually explicit images of Taylor Swift flooded X (formerly Twitter) recently. All of the photos were fake, having been made with AI image generation technology. A 404 Media report explained that members of a "Telegram group dedicated to abusive images of women" created the fake images of Taylor Swift using Microsoft's AI image generator (likely Image Creator from Designer).

The saga drew criticism of AI tools being used to create pornographic images without the consent of the people they depict. Microsoft CEO Satya Nadella said the images "set alarm bells off" about what can be done to restrict AI and called the situation "alarming and terrible."

Now, Google will have to see if it can limit the creation of similar images with its tools. The company highlighted its guardrails in its post announcing the new capability for Bard:

"Our technical guardrails and investments in the safety of training data seek to limit violent, offensive or sexually explicit content. Additionally, we apply filters designed to avoid the generation of images of named people. We’ll continue investing in new techniques to improve the safety and privacy protections of our models."

Of course, Microsoft and other AI companies have guardrails in place as well. The reality is that people will always find ways around limits like these. Even in a hypothetical situation where Microsoft's and Google's AI models could not be used to make sexually explicit images, people could turn to other AI models to do so.

That fact makes it important for companies like OpenAI to continue working on tools that can identify AI-generated images with 99% reliability. While the fake images will still exist, they could be flagged on social media for removal, reducing the spread of false information.

Sean Endicott
News Writer and Apps Editor

Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He's covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean's journey began with the Lumia 740, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_. 

  • John McIlhinney
With regards to AI photos in the headlines for the wrong reasons, there was a great example of that here in Australia in the last few days. One of our main FTA TV networks posted a picture of a female politician online, and she tweeted to draw attention to the fact that the picture had been unjustifiably altered. According to the apology from the TV station, it was a stock photo of her that they ran through an AI enhancement tool in Photoshop, and it turned her dress into a top and skirt that showed some skin in between and enlarged her bosom a little. It wasn't so over the top that anyone would have noticed or raised an eyebrow, but I'm guessing that she knew she didn't own the outfit shown in the picture, so she went to compare it to the original. It's not like it made her look like a sex kitten or the like, but it was still an example of unwanted and unnecessary changes that suggest women aren't good enough as they are, changes that would be unlikely to happen to men. Obviously, AI is trained on what's really out there and on what photographers and publishers want to do, so the assumption seems to be that women always need to be made to look a particular way regardless of the context.