It is January 2026. The initial dust surrounding the entry into force of the EU AI Act (Regulation (EU) 2024/1689) has settled. The bans on unacceptable-risk practices have applied for almost a year, and the rules for General-Purpose AI (GPAI) models have been in force since August 2025. For most enterprises, however, the most critical phase is only beginning: in August 2026, the 24-month transition period for high-risk AI systems under Annex III comes to an end. In just under seven months, systems in areas such as HR, critical infrastructure, and credit scoring must be fully compliant. Companies still stuck in the analysis phase risk losing market access.
Key Takeaways
- The deadline for high-risk AI systems according to Annex III expires in August 2026.
- A robust Risk Management System (RMS) must now be operational and fully documented.
- Data governance is no longer just an IT topic but a central compliance requirement for training, validation, and testing data.
- Technical documentation must be completed before placing the system on the market, not just in time for an audit.
Operational Challenges
The clock is ticking relentlessly. While many GRC professionals focused primarily on identifying and inventorying their AI landscape in 2025, 2026 demands a hard transition into operational implementation. It is no longer sufficient to know which systems are classified as high-risk. The focus now lies entirely on the demonstrability of compliance.
One of the biggest practical hurdles emerging right now is the Quality Management System (QMS). The AI Act requires a QMS, which ideally should not stand alone but be integrated into existing structures such as ISO 9001 or ISO/IEC 42001. Many companies are discovering that their existing software development processes lack the granularity the legislator requires for AI systems. In particular, the documentation of the entire lifecycle – from the first design decision to the post-market monitoring strategy – often reveals gaps during audits.
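Lifecycle documentation of this kind is easiest to audit when it is captured as structured records rather than scattered documents. The sketch below is purely illustrative – the field names, phases, and system identifier are assumptions, not terms prescribed by the AI Act:

```python
# Illustrative sketch of a machine-readable lifecycle record for a
# high-risk AI system. Field names and phase labels are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class LifecycleEvent:
    system_id: str
    phase: str        # e.g. "design", "training", "validation", "post-market"
    description: str
    actor: str        # team or role responsible for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Two example entries for a hypothetical HR screening system.
log = [
    LifecycleEvent("hr-screening-v2", "design",
                   "Chose gradient-boosted model over deep net for explainability",
                   "ml-team"),
    LifecycleEvent("hr-screening-v2", "post-market",
                   "Monthly drift report reviewed, no corrective action needed",
                   "compliance"),
]
print(json.dumps([asdict(e) for e in log], indent=2))
```

An append-only log like this makes it straightforward to reconstruct, at audit time, who decided what and when – exactly the gap many audits currently expose.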
Another critical point is data governance. For high-risk AI systems that train models, the regulation prescribes strict criteria regarding data quality: datasets must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete. In practice, this is a massive challenge, as historical data was often not collected with these aspects in mind. GRC teams must now work closely with data scientists to conduct bias analyses and close gaps in data lineage. If proof of training data quality is missing, the conformity of the entire system is at risk.
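Parts of this evidence can be generated automatically. The following is a minimal sketch of two such checks – record completeness and representativeness against reference population shares. The metrics, thresholds, and field names are assumptions for illustration, not requirements from the regulation:

```python
# Hypothetical data-governance checks for a high-risk training dataset.
# Field names ("age_group", "income") and reference shares are assumptions.
from collections import Counter

def completeness(records, required_fields):
    """Share of records in which no required field is missing."""
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields)
        for r in records
    )
    return complete / len(records)

def representation_gap(records, attribute, reference_shares):
    """Largest absolute deviation between observed and reference group shares."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return max(
        abs(counts.get(group, 0) / total - share)
        for group, share in reference_shares.items()
    )

records = [
    {"age_group": "18-35", "income": 42000},
    {"age_group": "36-55", "income": 51000},
    {"age_group": "18-35", "income": None},   # incomplete record
    {"age_group": "56+",   "income": 38000},
]
print(completeness(records, ["age_group", "income"]))   # 0.75
print(representation_gap(records, "age_group",
                         {"18-35": 0.4, "36-55": 0.35, "56+": 0.25}))
```

Running checks like these on every training run, and archiving the results, turns data-quality claims into documented evidence rather than assertions.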
Furthermore, the human factor must not be underestimated. The requirement for Human Oversight dictates that the individuals supervising AI systems must possess the necessary competence to do so. This means that training measures must start now. It is not enough to designate an employee as an overseer pro forma; they must be capable of recognizing malfunctions and stopping the system if necessary (the "kill switch").
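Architecturally, human oversight can be built into the decision path itself. The sketch below shows one possible pattern, not a prescribed design: low-confidence decisions are routed to a human reviewer, and a halt method lets a competent overseer stop the system entirely. All names and the confidence threshold are illustrative assumptions:

```python
# Illustrative human-oversight wrapper with escalation and a "kill switch".
# Class, method names, and the 0.8 threshold are assumptions for the sketch.
class OversightWrapper:
    def __init__(self, model, confidence_floor=0.8):
        self.model = model
        self.confidence_floor = confidence_floor
        self.halted = False
        self.flagged = []   # cases routed to a human reviewer

    def halt(self, reason):
        """Kill switch: a competent overseer stops the system entirely."""
        self.halted = True
        self.reason = reason

    def decide(self, case):
        if self.halted:
            raise RuntimeError(f"System halted: {self.reason}")
        label, confidence = self.model(case)
        if confidence < self.confidence_floor:
            self.flagged.append(case)   # low confidence: a human decides
            return "escalated_to_human"
        return label

def toy_model(case):
    # Stand-in for a real model; returns (label, confidence).
    return ("approve", case.get("score", 0.0))

wrapper = OversightWrapper(toy_model)
print(wrapper.decide({"score": 0.95}))   # approve
print(wrapper.decide({"score": 0.55}))   # escalated_to_human
wrapper.halt("systematic misclassification observed")
```

The point of the pattern is that oversight is exercised through the system's own interface, so stopping it or overriding it does not depend on ad-hoc intervention.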
The coming months will be characterized by high pressure on internal departments. Legal, IT Security, and Compliance must finally break down their silos. An integrated GRC approach that treats AI risks not as an isolated technical problem but as a company-wide governance topic is the only way to master the August 2026 deadline without operational disruptions.
FAQ
When exactly does the transition period for high-risk AI systems end?
For most high-risk AI systems falling under Annex III of the regulation (e.g., systems in education, employment, critical infrastructure), the transition period ends on August 2, 2026. All requirements must be met by this date.
What happens if a company misses the deadline?
Systems that are not compliant may no longer be placed on the market or put into service after the deadline. In addition, severe fines apply: up to 35 million euros or 7 percent of total worldwide annual turnover for prohibited practices, and up to 15 million euros or 3 percent for violations of the high-risk requirements.
Do all AI systems need to be certified?
No. Many high-risk AI systems are subject to an internal conformity assessment. Mandatory third-party assessment by a Notified Body is primarily required for specific systems, particularly those utilizing biometrics.