The Hacker News - ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive

from The Hacker News https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html

Comments

Popular posts from this blog

KnowBe4 - Scam Of The Week: "When Users Add Their Names to a Wall of Shame"

The Hacker News - ⚡ Weekly Recap: WhatsApp 0-Day, Docker Bug, Salesforce Breach, Fake CAPTCHAs, Spyware App & More

The Hacker News - Iranian Hackers Launch ‘SpearSpecter’ Spy Operation on Defense & Government Targets