Cisco launches security tools for AI agents

New offerings aim to protect against rogue AI agents on enterprise devices

Mar. 24, 2026 at 11:52am

Cisco has unveiled several new security offerings designed to protect against rogue AI agents, as the popular OpenClaw AI agent platform poses potential security risks for companies testing the technology on enterprise devices.

Why it matters

The ability of AI agents to control computers and perform a wide range of tasks is impressive, but also raises significant security concerns, especially for businesses using the technology. Cisco's new security tools aim to address these risks and provide safeguards against malicious AI agents.

The details

OpenClaw, developed by Peter Steinberger, who has since joined OpenAI, lets users set up their own AI agents to perform tasks like checking email, replying to messages, and managing system files. While these capabilities are powerful, they also create a potential security nightmare: the AI models are given control over the user's computer, which can lead to problems such as permanently deleted emails or programs. Cisco's new security offerings are designed to guard against such rogue agents, particularly on enterprise devices where companies are testing OpenClaw.

  • Cisco unveiled the new security tools on Monday.

The players

Cisco

A multinational technology conglomerate that designs, manufactures, and sells networking hardware, software, telecommunications equipment, and other high-technology services and products.

OpenClaw

A popular AI agent platform that allows users to set up their own AI agents to perform a variety of tasks on their computers.

Peter Steinberger

The developer of the OpenClaw AI agent platform, who has since joined OpenAI.


The takeaway

Cisco's new security offerings highlight the growing need to address the potential security risks posed by the increasing use of AI agents in both consumer and enterprise settings. As these technologies become more advanced and widespread, robust security measures will be crucial to protect against malicious or unintended use of AI agents.