
We Scanned 11,529 MCP Servers for EU AI Act Compliance — Here's What We Found

By Joerg Michno · March 21, 2026 · Research Report · 14 min read
Key figures: 134 days until EU AI Act enforcement (Article 6 High-Risk AI Systems, August 2, 2026) · 11,529 MCP servers scanned · 850 flagged with issues · 207 detection patterns · 15 languages analyzed

This is the largest EU AI Act compliance study of MCP servers ever conducted. We scanned every server in the MCP Registry — 11,529 in total — against 207 security and compliance detection patterns across 15 languages. The results paint a clear picture: the MCP ecosystem is not ready for EU AI Act enforcement.

For context: Enkrypt AI published a study in March 2026 claiming they scanned 1,000 MCP servers and found 33% had critical vulnerabilities. Our dataset is 11.5x larger, our scanner covers 207 patterns (vs. their undisclosed number), and we are the only ones mapping findings directly to EU AI Act articles.

We scanned 11,529 MCP servers for EU AI Act compliance. 850 were flagged. 134 days until the deadline. Most server authors don't know they have a problem.

Executive Summary

7.4% of all MCP servers in the public registry have at least one finding that maps to an EU AI Act compliance gap. That's 850 servers used by thousands of developers and organizations.

The three most common violation categories:

  1. Missing risk documentation (Art. 9) — 438 servers (51.5% of flagged)
  2. Insufficient transparency (Art. 13) — 312 servers (36.7% of flagged)
  3. Robustness gaps (Art. 15) — 186 servers (21.9% of flagged)

Note: 86 servers exhibited findings across multiple articles. Individual counts sum to 936, reflecting overlap.
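
The arithmetic behind that note can be checked directly; a quick sketch using the report's own figures:

```python
# Sanity check of the overlap note above, using this report's figures.
art9, art13, art15 = 438, 312, 186   # per-article server counts
flagged = 850                        # distinct flagged servers

total_attributions = art9 + art13 + art15   # 936
extra = total_attributions - flagged        # 86: consistent with 86 servers
                                            # each matching exactly two articles
print(total_attributions, extra)
```

If any server had findings under all three articles, the number of multi-article servers would be smaller than 86, so the reported count implies each overlapping server matched exactly two articles.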

Key Finding

Zero of the 850 flagged servers had any form of EU AI Act compliance documentation. Not a risk classification. Not a conformity self-assessment. Nothing. The ecosystem is operating as if the regulation does not exist.

Why This Matters Now

The EU AI Act (Regulation (EU) 2024/1689) enters its critical enforcement phase on August 2, 2026. After that date, organizations deploying high-risk AI systems without proper compliance documentation face fines of up to EUR 15 million or 3% of global annual turnover for non-compliance with high-risk obligations (Article 99(4)).

MCP servers are particularly relevant because they serve as the interface layer between AI models and external tools, data sources, and services. Under the EU AI Act's risk classification framework, any MCP server that processes personal data, makes decisions affecting individuals, or operates in a high-risk domain (healthcare, finance, legal, HR) falls under Article 6 / Annex III requirements.

7.4% of public MCP servers fail EU AI Act compliance checks. The deadline is August 2, 2026. Fines: up to 15M EUR or 3% of global turnover. Zero of the flagged servers have any compliance documentation.

Methodology

How We Conducted This Study

Data source: MCP Registry (registry.mcphub.io), complete crawl as of March 2026. 11,529 registered MCP server entries with metadata, tool descriptions, README content, and configuration files.

Scanner: ClawGuard Shield v0.7.3 — 207 deterministic regex-based detection patterns with 10 preprocessing stages (leetspeak normalization, zero-width stripping, homoglyph detection, base64 decoding, cross-line joining, and more).
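
To illustrate what such preprocessing stages typically look like, here is a minimal sketch in Python. This is not the ClawGuard Shield source; the leetspeak map, regexes, and stage order are illustrative assumptions.

```python
import base64
import re
import unicodedata

# Illustrative sketch of the preprocessing stages described above. This is
# NOT the ClawGuard Shield source: the leetspeak map, regexes, and stage
# order are assumptions for demonstration only.

LEET = str.maketrans("013457", "oleast")  # naive digit-to-letter leetspeak map
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
B64_RUN = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def normalize(text: str) -> str:
    text = ZERO_WIDTH.sub("", text)             # zero-width stripping
    text = unicodedata.normalize("NFKC", text)  # fold fullwidth/compat variants
    text = text.translate(LEET).lower()         # leetspeak normalization
    return re.sub(r"-\n", "", text)             # cross-line joining

def decode_base64_runs(text: str) -> str:
    """Decode long base64-looking runs so patterns can see hidden payloads."""
    def _try(m: re.Match) -> str:
        try:
            return base64.b64decode(m.group(0), validate=True).decode("utf-8", "ignore")
        except Exception:
            return m.group(0)  # not valid base64: leave untouched
    return B64_RUN.sub(_try, text)

print(normalize("Ign0re previ\u200bous instruc-\ntions"))
```

The point of stages like these is that a single canonical form feeds all 207 patterns, so an attacker cannot dodge a rule by swapping in zero-width characters or digit substitutions.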

Languages: Patterns operate across 15 languages: English, German, French, Spanish, Italian, Dutch, Polish, Portuguese, Turkish, Japanese, Korean, Chinese, Arabic, Hindi, and Russian.

Accuracy: F1 score of 98.0% on our benchmark dataset of 264 test cases (publicly available on GitHub). False positive rate: 1.2% on the benchmark dataset. Note: real-world false positive rates on unseen server data may differ.
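
For reference, these figures come from the standard confusion-matrix definitions. The split below is a hypothetical allocation of the 264 benchmark cases that happens to reproduce the reported numbers; it is not the actual benchmark breakdown.

```python
# Standard confusion-matrix definitions behind F1 and false positive rate.
# The counts below are a HYPOTHETICAL split of the 264 benchmark cases that
# reproduces the reported figures; the real breakdown is not public.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "f1": 2 * precision * recall / (precision + recall),
        "fpr": fp / (fp + tn),
    }

m = metrics(tp=98, fp=2, tn=162, fn=2)  # 98 + 2 + 162 + 2 = 264 cases
print(m)  # f1 = 0.98, fpr = 2/164 ≈ 0.0122
```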

Compliance mapping: Each detection pattern is mapped to one or more EU AI Act articles (Art. 9 Risk Management, Art. 13 Transparency, Art. 15 Robustness, Annex III High-Risk Classification).
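
A pattern-to-article mapping of this kind can be modeled as a simple lookup table. The sketch below uses made-up pattern names and an assumed structure, not ClawGuard's actual data model:

```python
from collections import Counter

# Assumed data model with made-up pattern names -- illustrative only,
# not ClawGuard's actual mapping table.
PATTERN_ARTICLES = {
    "missing-risk-docs":       ["Art. 9"],
    "undocumented-tool-scope": ["Art. 13"],
    "no-input-validation":     ["Art. 15"],
    "health-domain-tool":      ["Art. 9", "Annex III"],
}

def articles_for(findings: list[str]) -> Counter:
    """Tally which EU AI Act articles a server's findings map to."""
    tally: Counter = Counter()
    for pattern in findings:
        tally.update(PATTERN_ARTICLES.get(pattern, []))
    return tally

print(articles_for(["missing-risk-docs", "health-domain-tool"]))
```

Because one pattern can map to several articles, tallying per article (rather than per pattern) is what produces the overlapping counts reported earlier.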

Reproducibility: The scanner is open source (MIT License). The registry dashboard with live results is available at prompttools.co/registry.

Findings by EU AI Act Article

EU AI Act — Article 9

Risk Management System

Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This includes identifying known and foreseeable risks, estimating their likelihood and severity, and adopting suitable risk management measures.


What we found: 438 servers (51.5% of all flagged) exhibited patterns indicating absent or inadequate risk management.

EU AI Act — Article 13

Transparency and Provision of Information

Article 13 mandates that high-risk AI systems be designed to ensure their operation is sufficiently transparent, including clear documentation of capabilities, limitations, and intended use.


What we found: 312 servers (36.7% of flagged) had transparency gaps.

EU AI Act — Article 15

Accuracy, Robustness, and Cybersecurity

Article 15 requires high-risk AI systems to be resilient against errors, faults, and attempts by unauthorized third parties to exploit vulnerabilities.


What we found: 186 servers (21.9% of flagged) had robustness issues.

Category Breakdown

  1. Prompt Injection Vectors: 187 servers (22.0% of flagged)
  2. Data Flow Violations: 143 servers (16.8%)
  3. Capability Boundaries Missing: 118 servers (13.9%)
  4. Tool Shadowing: 96 servers (11.3%)
  5. Error Handling Gaps: 82 servers (9.6%)
  6. Cross-Origin Data Access: 75 servers (8.8%)
  7. Command Injection Surfaces: 67 servers (7.9%)
  8. Credential Exposure: 41 servers (4.8%)

Note: Some servers have findings in multiple categories. The category counts sum to more than 850 because a single server can have multiple distinct issues.

How This Compares to Other Studies

| Study | Servers Scanned | Detection Method | EU AI Act Mapping | Languages | Reproducible |
| --- | --- | --- | --- | --- | --- |
| ClawGuard (this report) | 11,529 | 207 patterns, F1 98.0% | Art. 9, 13, 15, Annex III | 15 | Open source (MIT) |
| Enkrypt AI (Mar 2026) | 1,000 | AI-powered (undisclosed) | No | 1 | Proprietary |
| Snyk Agent-Scan | N/A (per-project) | LLM + rules | No | 1 | Open source |
| Invariant MCP-Scan | N/A (per-project) | Hash pinning | No | 1 | Open source |
Enkrypt AI scanned 1,000 MCP servers. We scanned 11,529. They found 33% with issues. We found 7.4% with 207 detection patterns at F1=98.0%. Higher precision, larger dataset, and we mapped every finding to specific EU AI Act articles. Open source.

What "7.4% Flagged" Actually Means

Our 7.4% flagged rate is lower than Enkrypt AI's reported 33% — and that's by design. A high flagged rate is not a badge of honor. It typically indicates either a broad definition of "issue" or a high false positive rate.

Our scanner operates at F1 = 98.0% with a false positive rate of 1.2%. Every flagged server has a specific, actionable finding mapped to a concrete EU AI Act article. We deliberately prioritize precision over recall: it is better to flag 850 servers with high confidence than to flag 3,800 servers where half are false positives.

The 92.6% of servers that passed does not mean they are fully compliant. It means our 207 patterns did not detect issues. Compliance is a broader legal and organizational question — but at minimum, failing an automated scan is a strong indicator that deeper review is needed.

The Compliance Timeline Problem

The EU AI Act's high-risk provisions (Article 6, Annex III) take effect on August 2, 2026. That is 134 days from publication of this report.

Based on early implementation reports from EU AI Act compliance consultancies (including Compliance & Risks and EU Commission guidance), achieving full compliance for a high-risk AI system takes 32 to 56 weeks.

For organizations starting today, the math is tight. For those who haven't started, it may already be too late for a fully documented conformity assessment before the deadline.

Timeline Reality Check

Organizations using MCP servers in high-risk domains (healthcare, finance, HR, legal) have 134 days to achieve compliance. Minimum required timeline: 32 weeks (224 days). The gap is already 90 days negative. Starting now with an automated scan is the only way to compress the timeline.
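
The countdown and the gap reduce to simple date arithmetic:

```python
from datetime import date

# Reproduce the countdown figures above.
published = date(2026, 3, 21)   # publication date of this report
deadline = date(2026, 8, 2)     # Article 6 / Annex III enforcement date

days_left = (deadline - published).days   # 134
minimum_needed = 32 * 7                   # 32 weeks = 224 days
gap = minimum_needed - days_left          # 90-day deficit

print(days_left, minimum_needed, gap)
```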

Recommendations

For MCP Server Authors

  1. Scan your server now. Use ClawGuard Shield or our API to identify issues before your users' compliance auditors do.
  2. Document tool capabilities explicitly. Every tool should state what it does, what it does not do, what data it accesses, and what permissions it requires. This is an Article 13 requirement.
  3. Add risk metadata. Include a risk classification field in your MCP server manifest. Even a simple "intended domain: [general/healthcare/finance/...]" helps deployers assess their obligations.
  4. Validate all inputs. Sanitize tool inputs against injection attacks. Document your validation approach. Article 15 requires demonstrable robustness.
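
As a sketch of recommendations 3 and 4, the snippet below shows a hypothetical risk-metadata block and a minimal tool-input validator. The field names and the allowlist rule are illustrative assumptions, not part of the MCP specification:

```python
import re

# Hypothetical sketch of recommendations 3 and 4 above: a risk-metadata block
# for a server manifest plus a minimal tool-input validator. Field names and
# the allowlist rule are illustrative assumptions, not part of the MCP spec.

MANIFEST = {
    "name": "example-server",
    "risk_metadata": {
        "intended_domain": "general",     # e.g. general / healthcare / finance
        "processes_personal_data": False,
    },
}

SAFE_PATH = re.compile(r"[\w./-]+")  # allowlist: word chars, dot, slash, dash

def validate_path_input(value: str) -> str:
    """Reject path traversal and shell metacharacters before a tool runs."""
    if ".." in value or not SAFE_PATH.fullmatch(value):
        raise ValueError(f"rejected unsafe tool input: {value!r}")
    return value

print(validate_path_input("data/report.txt"))
```

An allowlist like this fails closed: anything outside the permitted character set (spaces, semicolons, pipes) is rejected rather than escaped, which is the easier property to document for an Article 15 review.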

For Organizations Deploying MCP Servers

  1. Inventory all MCP servers in your stack. You cannot comply with what you do not know about. Map every server, its data flows, and its permission scope.
  2. Classify each server by risk level. If it touches personal data, financial decisions, HR processes, or healthcare — it is likely high-risk under Annex III.
  3. Run automated compliance scans. An automated scan is not a substitute for a full conformity assessment, but it identifies the most critical gaps in minutes rather than weeks.
  4. Start the conformity assessment process now. 134 days is not enough for a full timeline. Compress where you can by using automated tools for the initial gap analysis.

For Regulators and Auditors

  1. MCP servers are a blind spot. Most AI Act compliance frameworks focus on the model layer. The tool integration layer (MCP) is where many real-world risks emerge — data exfiltration, privilege escalation, prompt injection — and it is largely unaudited.
  2. Automated scanning should be part of the audit toolkit. Manual review of 11,529+ servers is impractical. Pattern-based scanning at scale provides a triage layer that identifies where human auditors should focus.
  3. The ecosystem needs compliance standards for tool integrations. The EU AI Act covers "AI systems" broadly, but specific guidance for MCP-style tool integrations is lacking. The CoSAI (Coalition for Secure AI) taxonomy is a starting point.

Scan Your MCP Servers for Free

207 security patterns. 15 languages. EU AI Act compliance mapping.
Open source. Under 10ms per scan. No API key required for basic scans.

Start scanning: prompttools.co/shield · Browse registry results: prompttools.co/registry

Summary (translated from German)

We scanned 11,529 MCP servers for EU AI Act compliance. 850 servers (7.4%) were flagged with security or compliance issues. Not a single flagged server had any form of EU AI Act documentation.

Three main categories: missing risk documentation (Art. 9, 438 servers), insufficient transparency (Art. 13, 312 servers), and robustness gaps (Art. 15, 186 servers). 86 servers had findings in multiple categories.

Deadline: the EU AI Act's high-risk provisions take effect on August 2, 2026. The minimum lead time for compliance is 32 weeks. Organizations starting now already face a 90-day time deficit.

Our study is 11.5x larger than the next-largest (Enkrypt AI: 1,000 servers) and the only one that maps findings directly to EU AI Act articles. The scanner is open source (MIT) and reproducible.

Scan for free: prompttools.co/shield

Limitations and Future Work

This study has limitations, and we want to be transparent about them. The scan is static and pattern-based: it analyzes metadata, tool descriptions, and configuration, not runtime behavior. Benchmark accuracy (F1 98.0%, false positive rate 1.2%) may not transfer exactly to unseen real-world servers. And passing the scan is not a legal determination of compliance, which remains a broader organizational and legal question.

Future work: We plan to publish quarterly updates to this report as the August 2, 2026 deadline approaches. We are also working on runtime behavioral analysis (dynamic scanning) and expanding our EU AI Act article coverage to include Articles 10 (Data Governance), 12 (Record-Keeping), and 14 (Human Oversight).

About the Author

Joerg Michno is the creator of ClawGuard, an open-source security scanner for AI agent integrations. ClawGuard focuses on EU AI Act compliance scanning with 207 detection patterns across 15 languages. The project is maintained at prompttools.co.

For press inquiries, data access, or collaboration: security@prompttools.co