Is Your AI Ready for a DORA Audit?

21 questions. 3 minutes. Get a personalized compliance score and gap analysis for your AI systems under DORA.

Based on 28 DORA regulatory documents · Free · No signup required

6 DORA pillars assessed
18 maturity questions
Personalized gap analysis
Step 1 of 7
About Your Organization
What sector do you operate in?
Bank
Investment Firm
Insurance
Payment Provider
Other
How many AI systems do you run in production?
None
1–5
6–20
21+
How far along is your DORA compliance program?
Not Started
Early Stage
In Progress
Advanced
Step 2 of 7 · Pillar 1
ICT Risk Management
DORA Articles 5–14
Our ICT security policies explicitly cover AI systems, including access controls, encryption requirements, and data handling procedures for model training and inference.
Level 0
No coverage
Level 1
Partially documented
Level 2
Documented & implemented
Level 3
Reviewed & tested regularly
We maintain a complete asset inventory that includes AI models, training datasets, data sources, API endpoints, and third-party AI services.
Level 0
No inventory
Level 1
Partial inventory
Level 2
Complete but manual
Level 3
Automated & current
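Reaching Level 3 here implies the inventory is machine-readable rather than a spreadsheet. A minimal sketch of what one inventory record could look like in Python; the `AIAsset` schema and its field names are assumptions for illustration, not a DORA-mandated format:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIAsset:
    """One entry in an AI asset inventory (illustrative schema)."""
    asset_id: str
    asset_type: str          # "model" | "dataset" | "api_endpoint" | "third_party_service"
    name: str
    owner: str               # accountable team or individual
    data_sources: list = field(default_factory=list)
    third_party_provider: Optional[str] = None
    last_reviewed: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Register a model and the third-party service it depends on
inventory = [
    AIAsset("mdl-001", "model", "credit-scoring-v3", "risk-analytics",
            data_sources=["loan-book-2024"]),
    AIAsset("api-007", "third_party_service", "llm-inference-endpoint",
            "platform", third_party_provider="ExampleAI"),
]
print([asdict(a)["name"] for a in inventory])
```

Keeping the register automated and current (Level 3) then reduces to regenerating these entries from deployment pipelines and cloud APIs instead of maintaining them by hand.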
We have vulnerability management processes that specifically address AI/ML infrastructure, including model servers, inference APIs, and vector databases.
Level 0
Not addressed
Level 1
Ad hoc patching
Level 2
Scheduled scanning
Level 3
Continuous monitoring
Step 3 of 7 · Pillar 2
Incident Management
DORA Articles 17–23
We have an AI incident classification framework with defined severity thresholds that align with DORA's major incident criteria.
Level 0
No framework
Level 1
Generic IT only
Level 2
AI-specific criteria
Level 3
DORA-aligned & tested
We can submit an initial regulatory incident notification within 24 hours for AI-related incidents, with pre-built report templates and defined escalation procedures.
Level 0
No capability
Level 1
Manual process
Level 2
Templates ready
Level 3
Automated & drilled
We track the costs and losses caused by AI incidents, with a methodology aligned to the ESA guidelines on estimating aggregated costs and losses.
Level 0
Not tracked
Level 1
Estimated post-hoc
Level 2
Structured tracking
Level 3
ESA-aligned methodology
Step 4 of 7 · Pillar 3
Resilience Testing
DORA Articles 24–27 (TLPT: Articles 26–27)
Our threat-led penetration testing (TLPT) program includes AI-specific attack vectors such as prompt injection, model extraction, and data poisoning.
Level 0
No TLPT for AI
Level 1
Basic pen testing
Level 2
AI vectors included
Level 3
Full TLPT with AI scope
We have red/blue team capability specifically trained on AI security scenarios, including adversarial prompt attacks and model manipulation.
Level 0
No capability
Level 1
General red team
Level 2
AI-aware team
Level 3
Dedicated AI red team
We conduct scenario exercises that test AI failure modes, including model hallucination cascades, provider outages, and data pipeline failures.
Level 0
No exercises
Level 1
Tabletop only
Level 2
Live simulations
Level 3
Regular chaos testing
Step 5 of 7 · Pillar 4
Third-Party Risk
DORA Articles 28–30
We maintain a Register of Information that includes all AI vendors, model providers, cloud AI services, and their subcontractors.
Level 0
No register
Level 1
Partial listing
Level 2
Complete register
Level 3
Auto-updated & audited
Our AI vendor contracts include DORA-required provisions: audit rights, data access, incident notification, subcontracting restrictions, and exit clauses.
Level 0
Standard T&Cs only
Level 1
Some clauses added
Level 2
DORA-compliant
Level 3
Legally reviewed & tested
We have tested exit strategies for switching AI model providers, including data portability, prompt migration, and embedding re-generation.
Level 0
No exit plan
Level 1
Documented only
Level 2
Plan with timeline
Level 3
Tested & exercised
Step 6 of 7 · Pillar 5
AI-Specific Controls
GenAI under DORA
Every AI-generated decision has a complete audit trail: who requested it, what data was used, which model version ran, and what output was produced.
Level 0
No audit trail
Level 1
Basic logging
Level 2
Structured traces
Level 3
Full reconstruction
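The four elements this question names (requester, data, model version, output) map naturally onto one structured log entry per decision. A minimal Python sketch; the `audit_record` helper and its field names are illustrative assumptions, not a prescribed DORA schema:

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(requester, model_version, input_refs, output):
    """Build one audit entry for an AI-generated decision, capturing
    who requested it, which model version ran, what data was referenced,
    and what output was produced."""
    return {
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "requester": requester,
        "model_version": model_version,
        "input_refs": input_refs,   # references to data, not raw payloads
        "output": output,
    }

entry = audit_record("analyst-42", "risk-llm:2024-06-01",
                     ["doc://kyc/9931"], "APPROVE")
print(json.dumps(entry))            # ship to an append-only log store
```

Level 3 ("Full reconstruction") additionally requires that the referenced inputs and model version stay retrievable, so a past decision can be replayed end to end.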
We have real-time monitoring for AI systems including model drift detection, anomaly alerting, output quality scoring, and cost tracking.
Level 0
No monitoring
Level 1
Uptime only
Level 2
Quality metrics
Level 3
Full observability
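As a toy illustration of the drift-detection part of this question: flag when the recent mean of a quality metric moves several baseline standard deviations away. The `DriftMonitor` class, its thresholds, and the sample numbers are all assumptions for the sketch:

```python
from collections import deque
from statistics import mean, pstdev

class DriftMonitor:
    """Alert when the rolling mean of a model quality metric shifts
    more than `threshold` baseline standard deviations (toy example)."""
    def __init__(self, baseline, window=50, threshold=3.0):
        self.mu = mean(baseline)
        self.sigma = pstdev(baseline) or 1e-9   # avoid division by zero
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, score):
        self.recent.append(score)
        z = abs(mean(self.recent) - self.mu) / self.sigma
        return z > self.threshold               # True => raise a drift alert

monitor = DriftMonitor(baseline=[0.90, 0.91, 0.89, 0.90, 0.92])
ok = monitor.observe(0.90)      # in line with baseline -> False
alert = monitor.observe(0.40)   # sharp quality drop -> True
print(ok, alert)
```

A production setup would track several signals (input distributions, output quality scores, token cost) and route alerts into the incident process assessed in Pillar 2.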
Our AI architecture is model-agnostic: we can switch LLM providers without rewriting business logic, prompts, or integration code.
Level 0
Locked to one vendor
Level 1
Abstraction planned
Level 2
Abstraction layer exists
Level 3
Multi-provider tested
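"Abstraction layer exists" (Level 2) can be as small as an interface that business logic depends on instead of a vendor SDK. A minimal Python sketch; `LLMProvider`, the vendor classes, and `summarize_incident` are hypothetical names:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """The only surface business logic is allowed to see."""
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def summarize_incident(llm: LLMProvider, incident: str) -> str:
    # Written once against the interface; no vendor SDK imports here
    return llm.complete(f"Summarize for the regulator: {incident}")

# Switching providers is a one-line change at the call site
print(summarize_incident(VendorA(), "model outage"))
print(summarize_incident(VendorB(), "model outage"))
```

Level 3 then means the second provider path is actually exercised in tests and drills, not just theoretically available.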
Step 7 of 7 · Pillar 6
Governance
Board-level accountability
Board-level accountability for AI risk is formally established, with a designated DORA officer or committee responsible for AI-related ICT risk.
Level 0
No ownership
Level 1
Informal ownership
Level 2
Formally assigned
Level 3
Board-reported & active
We have a defined AI lifecycle governance process covering development, validation, deployment, monitoring, and decommissioning.
Level 0
No process
Level 1
Informal stages
Level 2
Documented lifecycle
Level 3
Enforced with gates
We conduct regular compliance testing of AI systems with audit-ready evidence packages, including control effectiveness reports and remediation tracking.
Level 0
No testing
Level 1
Annual review
Level 2
Quarterly testing
Level 3
Continuous & automated

Your DORA AI Readiness Score

0%
Overall Score
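The page does not state its scoring formula. One plausible reading, assuming each of the 18 maturity answers contributes its level (0 to 3) toward a 54-point maximum, is a simple percentage of points earned; this is a guess at the methodology, not its published definition:

```python
def readiness_score(answers):
    """Overall readiness as a percentage of the maximum maturity level.
    Assumes one integer level 0-3 per question (an assumption, not the
    page's published methodology)."""
    assert all(0 <= a <= 3 for a in answers)
    return round(100 * sum(answers) / (3 * len(answers)))

# 18 answers, mostly Level 1-2
print(readiness_score([1, 2, 1, 0, 2, 1, 1, 2, 0, 1, 2, 1, 1, 0, 2, 1, 1, 2]))  # -> 39
```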

Score by Pillar

Top Gaps & Recommendations

Get Your Full Report

Email yourself the results to share with your team, or schedule a call to discuss your remediation roadmap.

Close the Gaps Before the Audit

We build AI systems that satisfy DORA requirements from day one. Audit trails, governance, exit readiness — built in, not bolted on.

Schedule Architecture Review