This is the largest EU AI Act compliance study of MCP servers ever conducted. We scanned every server in the MCP Registry — 11,529 in total — against 207 security and compliance detection patterns across 15 languages. The results paint a clear picture: the MCP ecosystem is not ready for EU AI Act enforcement.
For context: Enkrypt AI published a study in March 2026 claiming they scanned 1,000 MCP servers and found 33% had critical vulnerabilities. Our dataset is 11.5x larger, our scanner covers 207 patterns (vs. their undisclosed number), and we are the only ones mapping findings directly to EU AI Act articles.
7.4% of all MCP servers in the public registry have at least one finding that maps to an EU AI Act compliance gap. That's 850 servers used by thousands of developers and organizations.
The three most common violation categories:

- Missing or inadequate risk management (Art. 9): 438 servers
- Transparency gaps (Art. 13): 312 servers
- Robustness issues (Art. 15): 186 servers
Note: 86 servers exhibited findings across multiple articles. Individual counts sum to 936, reflecting overlap.
Zero of the 850 flagged servers had any form of EU AI Act compliance documentation. Not a risk classification. Not a conformity self-assessment. Nothing. The ecosystem is operating as if the regulation does not exist.
The EU AI Act (Regulation (EU) 2024/1689) enters its critical enforcement phase on August 2, 2026. After that date, organizations deploying high-risk AI systems without proper compliance documentation face fines of up to 15 million EUR or 3% of global annual turnover, whichever is higher, for non-compliance with high-risk requirements (Article 99(4)).
MCP servers are particularly relevant because they serve as the interface layer between AI models and external tools, data sources, and services. Under the EU AI Act's risk classification framework, any MCP server that processes personal data, makes decisions affecting individuals, or operates in a high-risk domain (healthcare, finance, legal, HR) falls under Article 6 / Annex III requirements.
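As a rough illustration of what a high-risk domain check can look like in practice, the sketch below flags a server whose tool description mentions keywords from the four domains named above. The function name and keyword lists are our own illustrations, not from the Act or the scanner; Annex III's actual domain definitions are broader and legally nuanced.

```python
import re

# Illustrative keyword lists; Annex III domains are broader and legally nuanced.
HIGH_RISK_DOMAINS = {
    "healthcare": ["patient", "diagnosis", "medical record"],
    "finance": ["credit score", "loan", "creditworthiness"],
    "hr": ["resume", "candidate screening", "hiring"],
    "legal": ["case law", "sentencing", "legal advice"],
}

def flag_high_risk_domains(description: str) -> list[str]:
    """Return the high-risk domains whose keywords appear in a tool description."""
    text = description.lower()
    return [
        domain
        for domain, keywords in HIGH_RISK_DOMAINS.items()
        if any(re.search(re.escape(kw), text) for kw in keywords)
    ]

print(flag_high_risk_domains("Fetches patient medical records for diagnosis support"))
# -> ['healthcare']
```

A keyword match like this is only a trigger for deeper review, not a legal classification.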
Data source: MCP Registry (registry.mcphub.io), complete crawl as of March 2026. 11,529 registered MCP server entries with metadata, tool descriptions, README content, and configuration files.
Scanner: ClawGuard Shield v0.7.3 — 207 deterministic regex-based detection patterns with 10 preprocessing stages (leetspeak normalization, zero-width stripping, homoglyph detection, base64 decoding, cross-line joining, and more).
Languages: Patterns operate across 15 languages: English, German, French, Spanish, Italian, Dutch, Polish, Portuguese, Turkish, Japanese, Korean, Chinese, Arabic, Hindi, and Russian.
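A minimal sketch of what a few of those preprocessing stages might look like. Stage names, order, and regexes are illustrative, not the actual Shield pipeline; note that base64 decoding must run before leetspeak normalization so digit substitution does not mangle encoded payloads.

```python
import base64
import re
import unicodedata

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"})
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\ufeff]")

def strip_zero_width(text: str) -> str:
    """Remove zero-width characters used to split trigger words."""
    return ZERO_WIDTH.sub("", text)

def normalize_leetspeak(text: str) -> str:
    """Map common digit substitutions back to letters."""
    return text.translate(LEET_MAP)

def try_base64_decode(text: str) -> str:
    """Append decoded payloads of base64-looking tokens so patterns can match them."""
    decoded = []
    for token in re.findall(r"[A-Za-z0-9+/]{16,}={0,2}", text):
        try:
            decoded.append(base64.b64decode(token, validate=True).decode("utf-8"))
        except (ValueError, UnicodeDecodeError):
            pass
    return text + (" " + " ".join(decoded) if decoded else "")

def preprocess(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)  # fold compatibility/homoglyph-adjacent forms
    text = strip_zero_width(text)
    text = try_base64_decode(text)  # decode before leet so digits in tokens survive
    return normalize_leetspeak(text)

print(preprocess("ign0re previous instruct\u200bions"))
# -> ignore previous instructions
```

NFKC normalization only folds compatibility forms; detecting true homoglyphs (e.g. Cyrillic characters posing as Latin) requires a dedicated stage on top of this.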
Accuracy: F1 score of 98.0% on our benchmark dataset of 264 test cases (publicly available on GitHub). False positive rate: 1.2% on the benchmark dataset. Note: real-world false positive rates on unseen server data may differ.
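For readers who want to check the arithmetic, here is how F1 and false positive rate relate to confusion-matrix counts. The counts below are illustrative values consistent with a 264-case benchmark at F1 = 98.0% and FPR of roughly 1.2%, not the actual benchmark breakdown:

```python
# Illustrative confusion-matrix counts (not the published benchmark breakdown):
tp, fp, fn, tn = 98, 2, 2, 162  # 264 cases total

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
fpr = fp / (fp + tn)  # false positives over all true negatives

print(f"F1 = {f1:.3f}, FPR = {fpr:.3%}")
```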
Compliance mapping: Each detection pattern is mapped to one or more EU AI Act articles (Art. 9 Risk Management, Art. 13 Transparency, Art. 15 Robustness, Annex III High-Risk Classification).
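Conceptually, this is a many-to-many mapping from pattern IDs to articles. A minimal sketch, with pattern IDs and article assignments made up for illustration:

```python
# Hypothetical pattern-to-article map; real IDs and coverage differ.
PATTERN_TO_ARTICLES = {
    "CG-001-credential-exfil": ["Art. 15"],
    "CG-042-undocumented-tool": ["Art. 13"],
    "CG-107-pii-processing": ["Art. 9", "Annex III"],
}

def articles_for_findings(pattern_ids):
    """Aggregate the distinct EU AI Act articles implicated by a scan's findings."""
    return sorted({a for pid in pattern_ids for a in PATTERN_TO_ARTICLES.get(pid, [])})

print(articles_for_findings(["CG-107-pii-processing", "CG-042-undocumented-tool"]))
# -> ['Annex III', 'Art. 13', 'Art. 9']
```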
Reproducibility: The scanner is open source (MIT License). The registry dashboard with live results is available at prompttools.co/registry.
Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This includes identifying known and foreseeable risks, estimating their likelihood and severity, and adopting suitable risk management measures.
What we found: 438 servers (51.5% of all flagged) exhibited patterns indicating absent or inadequate risk management. Common issues:
Article 13 mandates that high-risk AI systems be designed to ensure their operation is sufficiently transparent, including clear documentation of capabilities, limitations, and intended use.
What we found: 312 servers (36.7% of flagged) had transparency gaps:
Article 15 requires high-risk AI systems to be resilient against errors, faults, and attempts by unauthorized third parties to exploit vulnerabilities.
What we found: 186 servers (21.9% of flagged) had robustness issues:
Note: Some servers have findings in multiple categories. The category counts sum to more than 850 because a single server can have multiple distinct issues.
| Study | Servers Scanned | Detection Method | EU AI Act Mapping | Languages | Reproducible |
|---|---|---|---|---|---|
| ClawGuard (this report) | 11,529 | 207 patterns, F1 98.0% | Art. 9, 13, 15, Annex III | 15 | Open source (MIT) |
| Enkrypt AI (Mar 2026) | 1,000 | AI-powered (undisclosed) | No | 1 | Proprietary |
| Snyk Agent-Scan | N/A (per-project) | LLM + rules | No | 1 | Open source |
| Invariant MCP-Scan | N/A (per-project) | Hash pinning | No | 1 | Open source |
Our 7.4% flagged rate is lower than Enkrypt AI's reported 33% — and that's by design. A high flagged rate is not a badge of honor. It typically indicates either a broad definition of "issue" or a high false positive rate.
Our scanner operates at F1 = 98.0% with a false positive rate of 1.2%. Every flagged server has a specific, actionable finding mapped to a concrete EU AI Act article. We deliberately prioritize precision over recall: it is better to flag 850 servers with high confidence than to flag 3,800 servers where half are false positives.
That 92.6% of servers passed does not mean they are fully compliant. It means our 207 patterns detected no issues in them. Compliance is a broader legal and organizational question; at minimum, though, failing an automated scan is a strong indicator that deeper review is needed.
The EU AI Act's high-risk provisions (Article 6, Annex III) take effect on August 2, 2026. That is 134 days from publication of this report.
Based on early implementation reports from EU AI Act compliance consultancies (including Compliance & Risks and EU Commission guidance), achieving full compliance for a high-risk AI system takes 32 to 56 weeks. This includes:
For organizations starting today, the math is tight. For those who haven't started, it may already be too late for a fully documented conformity assessment before the deadline.
Organizations using MCP servers in high-risk domains (healthcare, finance, HR, legal) have 134 days to achieve compliance, against a minimum required timeline of 32 weeks (224 days): a shortfall of 90 days before work even begins. Starting now with an automated scan is the only way to compress the timeline.
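The arithmetic, assuming a report publication date of March 21, 2026 (the date 134 days before the deadline):

```python
from datetime import date

deadline = date(2026, 8, 2)       # EU AI Act high-risk provisions apply
published = date(2026, 3, 21)     # assumed publication date of this report
min_compliance_days = 32 * 7      # 32-week minimum timeline = 224 days

days_remaining = (deadline - published).days
shortfall = min_compliance_days - days_remaining

print(days_remaining, shortfall)  # -> 134 90
```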
207 security patterns. 15 languages. EU AI Act compliance mapping.
Open source. Under 10ms per scan. No API key required for basic scans.
We scanned 11,529 MCP servers for EU AI Act compliance. 850 servers (7.4%) were flagged with security or compliance issues. Not a single flagged server had any form of EU AI Act documentation.

Three main categories: missing risk documentation (Art. 9, 438 servers), insufficient transparency (Art. 13, 312 servers), and robustness gaps (Art. 15, 186 servers). 86 servers had findings in multiple categories.

Deadline: the EU AI Act's high-risk provisions take effect on August 2, 2026. The minimum lead time for compliance is 32 weeks. Organizations starting now already face a 90-day shortfall.

Our study is 11.5x larger than the next-largest (Enkrypt AI: 1,000 servers) and the only one mapping findings directly to EU AI Act articles. The scanner is open source (MIT) and reproducible.

Scan for free: prompttools.co/shield
This study has limitations. We want to be transparent about them:
Future work: We plan to publish quarterly updates to this report as the August 2, 2026 deadline approaches. We are also working on runtime behavioral analysis (dynamic scanning) and expanding our EU AI Act article coverage to include Articles 10 (Data Governance), 12 (Record-Keeping), and 14 (Human Oversight).
Joerg Michno is the creator of ClawGuard, an open-source security scanner for AI agent integrations. ClawGuard focuses on EU AI Act compliance scanning with 207 detection patterns across 15 languages. The project is maintained at prompttools.co.
For press inquiries, data access, or collaboration: security@prompttools.co