A week after launch, OpenAI's ChatGPT has shown the power, and horror, of AI
Having AI at your fingertips has its pros and cons, as the last week has demonstrated.
The concept of artificial intelligence is nothing new. In fact, there's a good chance that you've used something that relied on AI in the last 24 hours. But when OpenAI launched ChatGPT last week, it lowered the barrier to entry for using AI.
ChatGPT is a chatbot that's accessible through any web browser. It's designed for natural-language interaction that feels like a conversation.
Microsoft's Azure AI supercomputing infrastructure is used to train the GPT-3.5 models that power ChatGPT. OpenAI and Microsoft announced a partnership back in 2019 that included a $1 billion investment from Microsoft and OpenAI exclusively using Azure as its cloud provider.
With ChatGPT in preview, we have real-world examples of how everyday folks will use AI, and they're both inspiring and horrifying. This week, BleepingComputer shared some of the best and worst things that can be done with ChatGPT; the examples below illustrate the range of the new AI chat tool.
Using AI for good
Like any tool, ChatGPT can be used for good, evil, and anything in between. OpenAI designed ChatGPT to be able to debug code, and it appears to do so very well. In the first example below, it even suggested a fix and explained why that fix was needed. ChatGPT can also detect security vulnerabilities.
ChatGPT could be a good debugging companion; it not only explains the bug but fixes it and explain the fix 🤯 pic.twitter.com/5x9n66pVqj (November 30, 2022)
No way 🤯, OpenAI can actually detect XSS vulnerabilities in code samples. pic.twitter.com/Ti8x91nxSY (December 1, 2022)
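The tweet above is only a screenshot, but to give a sense of what "detecting an XSS vulnerability in a code sample" looks like in practice, here is a hypothetical snippet of the kind of flaw a tool like ChatGPT was shown flagging: a web handler that reflects user input into HTML without escaping it. This is an illustration, not the actual sample from the tweet.

```python
# Hypothetical example of a reflected XSS bug, the kind of flaw
# ChatGPT was shown flagging in code samples (not from the tweet above).
from flask import Flask, request

app = Flask(__name__)

@app.route("/greet")
def greet():
    # User-supplied input is inserted straight into the HTML response
    # without escaping, so a query like ?name=<script>...</script>
    # executes in the visitor's browser -- a classic reflected XSS.
    name = request.args.get("name", "friend")
    return f"<h1>Hello, {name}!</h1>"

# A safer version would escape the input first, e.g. with markupsafe:
#   from markupsafe import escape
#   return f"<h1>Hello, {escape(name)}!</h1>"
```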
AI isn't just about coding and getting work done. Security expert Ken Westin played around with ChatGPT's ability to write in the style of a particular person.
TIL I might be a Replicant. pic.twitter.com/VOLI2pQxlj (December 5, 2022)
The downside of AI
Of course, there are downsides and dangerous aspects of artificial intelligence. Microsoft President Brad Smith has often discussed the need to regulate and legislate AI. His argument appears to have merit in light of just a few examples of ChatGPT's darker side.
The power of AI being so accessible opens several cans of worms. For example, scammers can use AI to create convincing phishing emails. And the same capabilities that help write beneficial software can be turned to writing malware.
AI also has a problem with bias. Sexism, racism, and other types of bigotry can be worked into AI models.
Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious. @Abebab @samatw racism, sexism. pic.twitter.com/V4fw1fY9dY (December 4, 2022)
OpenAI is open about some of the issues surrounding bias. "While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior," explained OpenAI.
"We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system."
ChatGPT can also be led to create responses that are offensive, hurtful, or misleading. These range from Ultron-esque responses about humans being inferior and wishing for the extinction of the human race to offensive song lyrics.
These situations require human input, of course, but the powerful tools provided by ChatGPT make certain things easier to create. Twitter user Front Runner shared an example of an essay written by ChatGPT that makes an immoral argument (sensitive content warning).
A quickly moving industry
AI moves as quickly as, if not more swiftly than, any other industry. Technology in the space improves at an astonishing rate, arguably faster than legislation or moderation can keep up with. The week that followed the launch of ChatGPT illustrates the diverse range of tasks AI can be used to perform.
Securing the future of AI will require those behind the technology to build tools for keeping things under control, as well as restraint from the individuals who use artificial intelligence.
Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He's covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean's journey began with the Lumia 740, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_.