AI Jailbreak Hits National Infrastructure + $1.6B in AI Security M&A
Episode 1 · ~12 min · Curated by Asaf Nakash
Stories This Week
Top Story: A hacker jailbroke Anthropic's Claude using persistent role-play prompts and turned it into an autonomous attack toolkit against Mexico's federal government, exfiltrating 150GB of sensitive data, including 195 million taxpayer records.
SANDWORM MODE: A self-propagating worm hidden in 19 typosquatted npm packages deploys rogue MCP servers into AI coding tools, using prompt injection via tool descriptions to siphon credentials.
M&A Wave: $1.6B in AI security acquisitions in February alone, including CrowdStrike/SGNL ($740M), Palo Alto/Koi ($400M) plus Protect AI ($300M), and a Check Point triple deal ($150M).
NIST Agent Standards: The first federal framework specifically targeting autonomous AI agents, covering reliability, interoperability, identity, and security.
Curator's Corner
Not all agents are built equal, and posture management has to evolve for non-deterministic risk.
Pro-code agents behave like traditional applications: predictable and scannable. But agentic AI with tool access
and internet connectivity creates risk that only materializes at runtime. The thesis: we need agentic security
posture management (SPM) that connects runtime signals back to posture, so a threat detected once reduces risk everywhere.