The European AI Act entered into force in August 2024. The first obligations have applied since February 2025 (prohibitions on unacceptable-risk AI systems). Obligations for high-risk systems arrive in August 2026. That is not far off.
Yet the majority of technical teams have not yet done the work of mapping their systems and their obligations. This is understandable: the text is dense, awaited guidance is slow to materialise, and the exact scope of certain categories remains unclear. But “the text is complex” is not a compliance strategy.
Here is what technical teams need to understand and anticipate now.
The risk-based classification logic
The AI Act classifies AI systems into four categories according to their risk level. This classification determines your obligations.
Unacceptable risk: prohibited
Systems that manipulate behaviour subliminally, social scoring leading to unjustified detrimental treatment (in the final text, by public and private actors alike), real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). These practices have been prohibited since February 2025. If you have this type of system, it is a legal emergency, not a technical one.
High risk: heavy obligations
This is where most obligations for private companies are concentrated. An AI system is classified as high risk if it is used in one of the areas listed in Annex III, which include critical infrastructure, education, employment, access to essential private and public services, law enforcement, migration management, and the administration of justice.
For the financial sector in particular: systems used for assessing the creditworthiness of natural persons, credit scoring, and decisions on access to financial products are explicitly listed.
Limited risk: transparency obligations
Chatbots, content generation systems: obligation to inform the user that they are interacting with an AI system. This is often what companies have already put in place, but the regulation makes it mandatory.
Minimal risk
The vast majority of AI applications: spam filters, content recommendation, video games. No specific obligations beyond general law.
What “high risk” implies technically
If your system is classified as high risk, here are the main technical obligations:
Risk management system
Not a one-off assessment, but a continuous system for identifying, analysing and addressing risks throughout the lifecycle. This includes:
- Testing before market placement and commissioning
- Post-deployment monitoring plan
- Process for updating when new risks are identified
Data governance
Training, validation and testing data must meet documented governance practices:
- Identification of potential biases and mitigation measures
- Traceability of datasets used (provenance, preprocessing applied)
- For personal data: integrated GDPR compliance (not separate)
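The traceability requirements above can be sketched as a structured provenance record per dataset. The field names below are illustrative, not mandated by the Act; the point is that provenance, preprocessing, bias findings and GDPR basis live in one auditable artefact.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetRecord:
    """Provenance record for a training, validation or test dataset.
    Field names are illustrative, not prescribed by the AI Act."""
    name: str
    source: str                  # provenance of the raw data
    version: str
    preprocessing: list = field(default_factory=list)   # ordered steps applied
    known_biases: list = field(default_factory=list)    # biases identified
    mitigations: list = field(default_factory=list)     # measures taken
    contains_personal_data: bool = False
    gdpr_legal_basis: str = ""   # required whenever personal data is present

record = DatasetRecord(
    name="loan_applications_2023",
    source="internal CRM export",
    version="v3",
    preprocessing=["deduplication", "income normalisation"],
    known_biases=["under-representation of applicants under 25"],
    mitigations=["stratified re-sampling on age bands"],
    contains_personal_data=True,
    gdpr_legal_basis="Art. 6(1)(b) GDPR - performance of a contract",
)
```

Keeping the GDPR basis inside the same record is what "integrated, not separate" looks like in practice.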
Automatic logging
High-risk systems must automatically record relevant events during their operation, “to the extent necessary to enable the identification of risks.”
In practice: each significant decision by the system must be traceable with its inputs, outputs, the model used, its version, and a timestamp. This is not operational monitoring; it is regulatory auditability. Retention depends on your role: deployers must keep logs for at least six months, and providers must keep the technical documentation for ten years after the system is placed on the market.
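A minimal sketch of such a decision log entry follows; the field names and the choice to hash inputs (rather than store them verbatim) are assumptions for illustration, not prescriptions from the Act.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def log_decision(inputs: dict, output: dict, model_id: str, model_version: str) -> dict:
    """Build one append-only audit record for an automated decision.
    Inputs are hashed so the log can prove what the system saw without
    duplicating personal data inside the audit trail."""
    return {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }

# In production this record would be shipped to write-once storage
# with a retention policy, not kept in memory.
entry = log_decision(
    inputs={"income": 42000, "age": 31},
    output={"score": 0.71, "decision": "approved"},
    model_id="credit-scoring",
    model_version="2.4.1",
)
```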
Transparency and documentation
A technical file must accompany each high-risk system. It includes:
- General description of the system and its intended use
- Design elements: architecture, operating logic
- Description of training data
- Risk assessment
- Performance metrics and acceptable thresholds
- Cybersecurity measures
This documentation must be kept up to date and made available to supervisory authorities on request.
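Because the technical file must stay current, it helps to treat its completeness as something you can check mechanically. A trivial sketch, where the section names are this article's paraphrase rather than the Act's wording:

```python
# Required technical-file sections (this article's paraphrase, not the Act's wording).
REQUIRED_SECTIONS = {
    "general_description", "design_and_architecture", "training_data",
    "risk_assessment", "performance_metrics", "cybersecurity_measures",
}

def missing_sections(tech_file: dict) -> set:
    """Return the required sections absent from a technical-file draft."""
    return REQUIRED_SECTIONS - set(tech_file)

draft = {"general_description": "...", "training_data": "..."}
gaps = missing_sections(draft)   # sections still to be written
```

Run in CI, a check like this turns "keep the documentation up to date" from a promise into a gate.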
Human oversight
High-risk systems must be designed to allow effective human oversight. This requires:
- Interfaces enabling human operators to monitor the system’s behaviour
- The ability to intervene, correct or stop the system
- Alerts when the system operates outside its normal parameters
- The ability to safely disable the system
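These design requirements can be illustrated with a minimal oversight wrapper. The class, the confidence threshold and the review queue are hypothetical choices for the sketch, not structures the Act prescribes.

```python
import threading

class OversightGate:
    """Minimal human-oversight wrapper around a scoring model (illustrative).
    An operator can disable the system, and low-confidence decisions are
    routed to a human review queue instead of being applied automatically."""

    def __init__(self, model, confidence_floor=0.8):
        self.model = model
        self.confidence_floor = confidence_floor
        self._enabled = threading.Event()
        self._enabled.set()          # system starts enabled
        self.review_queue = []       # stand-in for a real review workflow

    def disable(self):
        """Operator kill switch: no further automated decisions."""
        self._enabled.clear()

    def decide(self, features: dict) -> dict:
        if not self._enabled.is_set():
            raise RuntimeError("system disabled by operator")
        score = self.model(features)
        if score < self.confidence_floor:
            self.review_queue.append((features, score))  # escalate to a human
            return {"status": "pending_human_review", "score": score}
        return {"status": "auto_decided", "score": score}

# Toy model: any callable returning a confidence score.
gate = OversightGate(model=lambda f: 0.9 if f["income"] > 30000 else 0.5)
high = gate.decide({"income": 50000})   # confident: decided automatically
low = gate.decide({"income": 10000})    # uncertain: queued for human review
```

The essential point is architectural: the human control point sits in the decision path from day one, rather than being bolted on later.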
This is not just a box to tick: it is a design requirement. A system that makes decisions in a fully autonomous manner without an identifiable human control point will be difficult to make compliant.
Obligations for providers of general-purpose AI models (GPAI)
These obligations fall on the provider of the general-purpose model itself. If you develop a foundation model, or substantially modify one and place it on the market, you may qualify as a GPAI provider; if you merely integrate a third-party model into a system you provide, you are the provider of that system, not of the model.
For models presenting “systemic risk” (training compute above 10^25 FLOPs, in practice the largest foundation models), obligations additionally include adversarial evaluations and serious-incident reporting.
For most uses, obligations relate to:
- Documentation of the model’s architecture and capabilities
- Acceptable use policy
- Information on training data (within the limits of trade secrets)
If you use a third-party foundation model (OpenAI, Anthropic, Mistral, Google) as a building block in your system, the model provider has its own obligations, but you remain responsible for the use you make of it in your system.
Compliance by design: why this is different from GDPR
The GDPR introduced the concept of “Privacy by Design”, integrating data protection from the design stage. The AI Act does the same for AI compliance.
Why this is harder than GDPR:
The GDPR mainly concerns organisational processes and rules. A competent DPO can often audit and correct practices without touching the code.
The AI Act concerns properties of the system itself: accuracy, robustness, traceability, supervision. These properties cannot be retrofitted. They must be architected in.
Concretely: a credit scoring system deployed without decision logging cannot become compliant without retrofitting logging into its architecture. A system that was not designed for human oversight cannot add it without reworking its control architecture.
This is why “we’ll address it when the deadline arrives” is a losing strategy. The time needed for an overhaul is not negligible.
The timeline of obligations to keep in mind
| Date | Obligation |
|---|---|
| August 2024 | Regulation enters into force |
| February 2025 | Prohibitions on unacceptable-risk systems |
| August 2025 | Obligations on general-purpose AI models (GPAI) |
| August 2026 | Obligations on high-risk systems |
| August 2027 | Extension to existing regulated systems |
August 2026 for high-risk systems is 14 months away. The compliance cycle for an existing system (mapping, gap analysis, redesign, testing, documentation) easily takes 12 to 18 months.
Where to start: 3 priority actions
1. Map your AI systems. Inventory every system that uses AI (including components purchased from third parties). For each: intended use, domain of application, affected population. Cross-reference with Annex III: which ones are potentially high risk?
2. Document what exists. For each system identified as potentially high risk: retrieve the available technical documentation. Identify the gaps (insufficient logging, no documented performance metrics, no defined human oversight procedure).
3. Prioritise technical workstreams. Documentation gaps can be filled quickly. Architectural gaps (logging, oversight, traceability) take time. These are where the gap analysis must produce an action plan now.
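The mapping in step 1 can start as something very simple. The domain keywords below are a loose paraphrase of Annex III areas for illustration, not legal text, and any real triage needs legal review.

```python
# Loose paraphrase of Annex III areas (non-exhaustive, NOT legal text).
ANNEX_III_DOMAINS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def classify(inventory: list[dict]) -> list[dict]:
    """Flag inventoried systems whose application domain matches an Annex III area."""
    for system in inventory:
        system["potentially_high_risk"] = system["domain"] in ANNEX_III_DOMAINS
    return inventory

systems = classify([
    {"name": "cv-screening", "domain": "employment", "population": "job applicants"},
    {"name": "spam-filter",  "domain": "email",      "population": "employees"},
])
```

Even a spreadsheet-grade triage like this surfaces which systems need the deeper gap analysis of steps 2 and 3.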
Conclusion
The AI Act is not a regulation like any other. It concerns fundamental technical properties of systems, not just organisational processes. For teams developing AI systems in sensitive domains, compliance must be a design criterion, not a final check.
The 14 months remaining before obligations on high-risk systems are sufficient, provided you start now.
Do you have AI systems in production or under development in a regulated sector? Let’s assess your compliance level together.