ClawGuard Blog

Security research for the age of AI agents


42 Ways to Attack an AI Agent

March 2026 · By Joerg Michno · 15 min read
Prompt Injection · Attack Patterns · AI Security · ClawGuard

A complete catalog of 42 prompt injection attack patterns across five categories, from basic role hijacking to advanced data exfiltration via markdown images. Every pattern is tested, categorized by severity, and detectable in under 6 ms.
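The markdown-image exfiltration pattern mentioned above can be sketched as a single deterministic check. The regex below is an illustrative example, not one of ClawGuard's actual rules: it flags markdown images whose URL carries a query string, a common channel for smuggling conversation data to an attacker-controlled server.

```python
import re

# Illustrative pattern (not ClawGuard's shipped rule set): a markdown
# image whose URL includes a query string can leak data the moment a
# client renders it, since the "image" request carries the payload.
MD_IMAGE_EXFIL = re.compile(
    r'!\[[^\]]*\]\(\s*https?://[^\s)]+\?[^\s)]+\)',
    re.IGNORECASE,
)

def looks_like_image_exfil(text: str) -> bool:
    """Return True if text contains a markdown image with a query string."""
    return bool(MD_IMAGE_EXFIL.search(text))
```

A compiled regex like this runs in microseconds on typical agent messages, which is how sub-6 ms scans over dozens of patterns stay feasible.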

Why Regex Beats LLMs for Prompt Injection Detection

March 2026 · By Joerg Michno · 8 min read
Regex vs LLM · AI Security · Architecture

LLM-based prompt injection detection is slow, expensive, and vulnerable to the same attacks it tries to detect. Here's why deterministic regex patterns are the better first line of defense.
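A deterministic first line of defense of the kind described can be sketched as a handful of compiled regexes. The pattern names and expressions below are illustrative assumptions, not ClawGuard's actual rules:

```python
import re

# Illustrative deterministic patterns (not ClawGuard's shipped rules).
# Each compiles once at import time, so a scan is just a few regex
# searches: fast, cheap, and not itself susceptible to prompt injection.
PATTERNS = {
    "role_hijack": re.compile(
        r'\bignore (all|any|previous|prior) instructions\b', re.IGNORECASE),
    "system_override": re.compile(
        r'\byou are now\b.{0,40}\b(admin|root|developer mode)\b', re.IGNORECASE),
    "prompt_leak": re.compile(
        r'\b(reveal|print|repeat)\b.{0,30}\bsystem prompt\b', re.IGNORECASE),
}

def scan(text: str) -> list[str]:
    """Return the names of all patterns that match the input."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

Because the patterns never see an LLM, the detector's output is reproducible and its latency is bounded, which is the core of the argument above.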

We Tested 18 Prompt Injection Attacks Against Our Own Scanner

March 2026 · By Joerg Michno · 8 min read
Prompt Injection · Security Testing · ClawGuard

We built an open-source prompt injection scanner and attacked it with 18 real-world payloads. In a single afternoon we raised detection from 33% to 83%, with zero false positives.