The Legal Risk Behind Every AI Query
NIS2, DORA, and the AI Act were written separately. They converge on one demand most AI architectures were never built to answer.

Three regulatory frameworks are now live across the European Union. NIS2 transposition deadlines passed in October 2024. DORA has applied to financial entities since January 2025. The AI Act's obligations are phasing in through 2025, 2026, and into 2027. Most organizations are treating these as three separate compliance projects, managed by three separate teams, producing three separate sets of documentation.
That approach misses the point. These frameworks were written by different bodies, for different sectors, on different timelines. Yet they converge on one demand: prove, at any moment, exactly where your data is, how it is processed, and by what system.
One Demand, Three Frameworks
Picture the moment this convergence becomes real. A regulator sits in your compliance office for a supervisory review. She points to the AI system your team deployed six months ago. Three questions: Where was this query processed? Which data sources did the system access to generate the answer? Can you prove that no data left EU jurisdiction during any step? Your compliance officer reaches for the vendor contract. The regulator is not interested in contracts. She wants logs, architecture diagrams, technical proof.
NIS2: Supply Chain Accountability
NIS2 expanded the covered sectors to include digital infrastructure, a category that encompasses cloud computing and data centre services. Organizations already subject to NIS2 that build AI systems on top of these components inherit the exposure. The regulation demands supply chain risk management: direct suppliers and service providers must be assessed, documented, and monitored for the cybersecurity risks they introduce. When your AI routes queries to a third-party API for inference, that provider is a direct supplier in your supply chain. Can you document its security posture and verify it independently?
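What would satisfy that question is not a contract in a procurement folder but a living, machine-readable supplier register. Below is a minimal sketch of what one entry might look like; the AISupplierRecord schema and its field names are illustrative assumptions, not terminology drawn from the NIS2 text:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISupplierRecord:
    """One entry in a NIS2-style supply chain register (illustrative schema)."""
    name: str                   # e.g. the third-party inference API provider
    service: str                # what role the supplier plays in the AI pipeline
    data_categories: list[str]  # what data the supplier can see
    jurisdiction: str           # where processing actually occurs
    last_assessed: date         # date of the most recent security review
    evidence: list[str] = field(default_factory=list)  # audit reports, certifications

def stale_assessments(register: list[AISupplierRecord],
                      max_age_days: int = 365) -> list[AISupplierRecord]:
    """Flag suppliers whose security assessment is older than the review cycle."""
    today = date.today()
    return [r for r in register if (today - r.last_assessed).days > max_age_days]
```

The point of the structure is the monitoring duty: a register that can be queried for stale assessments is evidence of ongoing supervision, not a one-time onboarding checkbox.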
DORA: Auditability and Resilience
DORA raises the bar for financial entities. ICT third-party dependencies must be not only documented but also auditable and resilient. The regulation specifically targets concentration risk. If every AI process in your organization routes through the same external provider, that is a single point of failure DORA requires you to measure, report, and mitigate. A single outage, a policy change, a geopolitical shift, and your entire AI capability goes dark.
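Concentration risk is measurable, not just narratable. One standard way to quantify it is a Herfindahl-Hirschman-style index over the share of AI queries each provider handles. The sketch below assumes a simple per-query provider log; the function and threshold interpretation are illustrative, not anything DORA itself prescribes:

```python
from collections import Counter

def concentration_index(query_log: list[str]) -> float:
    """Herfindahl-Hirschman index over provider shares.

    query_log contains one provider name per processed AI query.
    Returns a value in (0, 1]; 1.0 means every query hits a single provider.
    """
    counts = Counter(query_log)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

# Example: an organization routing everything through one external API.
log = ["vendor-api"] * 10_000
assert concentration_index(log) == 1.0  # the single point of failure DORA asks you to measure
```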
The AI Act: Traceability
The AI Act adds traceability. For high-risk AI systems, it requires automatic logging of operational events, transparency toward deployers about how the system works, and human oversight sufficient to interpret its outputs. When a regulator asks why the system gave a particular answer, "the model generated it" is not an acceptable response. You need to show which documents were retrieved, how they were selected, and how the response was constructed.
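That showing is only possible if every query leaves a structured trace at the moment it runs. Here is a minimal sketch of such a per-query record; the AI Act mandates automatic logging for high-risk systems but does not prescribe this schema, so the types and fields below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetrievedChunk:
    document_id: str   # which document the passage came from
    paragraph: int     # which paragraph within it
    similarity: float  # why the retriever selected it

@dataclass(frozen=True)
class QueryTrace:
    query_id: str
    timestamp: str      # ISO 8601, UTC
    model_version: str  # which model produced the answer
    retrieved: tuple[RetrievedChunk, ...]  # what the system consulted
    answer_hash: str    # fingerprint of the generated response

    def explain(self) -> str:
        """The answer to a regulator's 'why this output?' question."""
        sources = ", ".join(f"{c.document_id} ¶{c.paragraph}" for c in self.retrieved)
        return f"Answer {self.answer_hash[:8]} built from: {sources}"
```

The essential property is that the trace is written at query time, by the system itself. A reconstruction assembled months later, from memory and vendor documentation, is exactly the kind of evidence the traceability requirement exists to replace.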
Where the Three Collide
Now consider where the three collide. NIS2 asks: do you know your supply chain? DORA asks: are your dependencies resilient and auditable? The AI Act asks: can you trace how your AI reached its conclusion? A financial institution running document analysis on a third-party API cannot fully satisfy any of these, because the critical processing happens inside infrastructure it does not control and cannot inspect. One blind spot fails all three frameworks simultaneously.
The enforcement signals leave no ambiguity. NIS2 penalties for essential entities reach €10 million or 2% of global turnover, whichever is higher. The AI Act scales to €35 million or 7% for prohibited-practice violations. The €1.2 billion GDPR fine imposed on a major technology platform in 2023 was not for a breach but for transferring personal data to a third country without adequate safeguards under Chapter V of the GDPR. The precedent is set: European regulators will act, and the distance between "our vendor says" and "we can prove" is exactly where enforcement will land.
Architecture as Compliance
The architectural implication is simple. Retrofitting auditability onto an AI system not designed for it is not a matter of cost. It is a matter of possibility. You cannot add provable data residency to a system that depends on third-party APIs. You cannot produce audit trails for processes running inside infrastructure you do not control. The architecture either supports these requirements from its foundation, or it does not.
Axelered was built with auditability as an architectural primitive. Every stage of the pipeline, from document ingestion through embedding generation, retrieval, and answer generation, produces verifiable, inspectable records. Processing runs entirely on infrastructure the organization controls, making data residency a demonstrable fact rather than a contractual claim. There is no external API to audit and no third-party subprocessor to assess.
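One established technique for making such records verifiable rather than merely present is hash chaining: each record commits to the hash of its predecessor, so any later edit breaks the chain. The sketch below illustrates the general technique under that assumption; it is not a description of Axelered's actual record format:

```python
import hashlib
import json

def append_record(chain: list[dict], stage: str, payload: dict) -> dict:
    """Append a tamper-evident record that commits to its predecessor's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"stage": stage, "payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("stage", "payload", "prev")}
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Usage: one record per pipeline stage, verifiable end to end.
chain: list[dict] = []
append_record(chain, "ingestion", {"document": "policy_2024.pdf"})
append_record(chain, "retrieval", {"chunks_returned": 4})
assert verify(chain)                        # intact chain passes
chain[1]["payload"]["chunks_returned"] = 5  # tamper with one record...
assert not verify(chain)                    # ...and verification fails
```

The design choice matters for audits: an inspector does not have to trust the operator's word that logs were untouched, because the chain itself proves it.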
Every answer traces back to specific paragraphs in specific documents. When a regulator asks how the system reached a conclusion, the organization shows the source material, the retrieval logic, and the generation path. These records are not compliance reports assembled after the fact. They are byproducts of how the system operates, and they answer each framework in turn: a simplified supply chain for NIS2, eliminated concentration risk for DORA, traceable outputs for the AI Act.
Compliance is not a constraint on AI adoption. It is a design requirement. The organizations that will navigate this regulatory convergence most effectively are not the ones assembling the largest compliance teams. They are the ones whose architecture makes the regulator's question trivial to answer, because the system was built to answer it from day one.