Cisco debuts AI Defense to combat misuse of AI tools, data leakage, and sophisticated threats — even as a researcher pegs AI's probability of causing existential doom at 99.999999%, and Sam Altman remains confident AI can prevent it

Cisco Systems headquarters in San Jose, California. (Image credit: Getty Images | Bloomberg)

Cisco's recent "AI Defense" announcement arrives as security and privacy remain among the biggest challenges holding back AI's progress; the technology's potential to end humanity is a major concern for everyone, including regulators and governments. Even so, regulation and policies governing generative AI have remained slim at best, leaving ample room for the technology to veer off the rails.

Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has put AI's probability of ending humanity — his p(doom) — at 99.999999%, arguing the only way to avoid that outcome is not to build AI in the first place. Interestingly, there may be a light at the end of the tunnel.

For context, AI Defense is a sophisticated security solution designed to protect the development and use of AI-powered apps, allowing enterprises to develop AI securely.

In an exclusive interview with The Rundown AI's Rowan Cheung, Cisco Executive Vice President and Chief Product Officer Jeetu Patel discussed AI's rapid progression and the security and protection concerns that prompted AI Defense's release:

"The reality is that there will be two types of companies in the future: those who are leading with AI and those that are irrelevant. Every company will be using - if not developing - thousands of AI applications, and the rapid pace of AI innovation is outperforming major security and protection concerns.

We developed AI Defense to protect both the development and use of AI applications. Overall, AI Defense safeguards against the misuse of AI tools, data leakage, and increasingly sophisticated threats. We are taking a radical approach to address the challenges that existing security solutions are not equipped to handle."

AI Defense is arguably the security solution that comes closest to addressing the technology's existential threats. Perhaps more concerning, Cisco's 2024 AI Readiness report indicates that only "29% of those surveyed feel fully equipped to detect and prevent unauthorized tampering with AI."

This could be attributed to the AI landscape being relatively new and complex. Additionally, AI apps are multi-model and multi-cloud, making them more susceptible to attack since threats can emerge at either the model or the application level.

Read More: Here's what AGI means to Microsoft and OpenAI

Will Cisco monitor AGI progress?


Multiple companies are racing to hit the Artificial General Intelligence (AGI) benchmark. (Image credit: Getty Images | Andriy Onufriyenko)

The launch of the security solution couldn't come at a better time, as top AI labs, including Anthropic and OpenAI, race to hit the coveted AGI benchmark. OpenAI CEO Sam Altman has indicated that his team knows how to build AGI and that the milestone will be achieved sooner than anticipated as the company shifts its focus to superintelligence.

Related: Employee claims OpenAI already achieved AGI with o1 reasoning model's release

Despite security and safety concerns, Altman claimed the AGI milestone would whoosh by with surprisingly little societal impact, adding that the security concerns people have raised wouldn't materialize at the AGI moment itself. However, recent reports suggested AI progress might have hit a wall due to a shortage of high-quality content for model training. Key industry players, including Sam Altman and ex-Google CEO Eric Schmidt, disputed those claims, saying there's no evidence scaling laws have begun to stunt AI's progression. "There's no wall," Altman added.

While AI Defense is a step in the right direction, its adoption across organizations and major AI labs remains to be seen. Interestingly, the OpenAI CEO acknowledges the technology's threat to humanity but believes AI will be smart enough to prevent itself from causing existential doom.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with extensive experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.