

Team uComply · March 11, 2026

EU AI Act August 2026: are you ready for the most important deadline?

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, and is being implemented in phases. On August 2, 2026, we reach the most critical milestone: from that date, obligations for high-risk AI systems apply. For many organizations, this means the time to act is truly running out.

AI Act timeline

The AI Act follows a phased implementation:

  • August 1, 2024 — The regulation enters into force
  • February 2, 2025 — Ban on AI systems with unacceptable risk (social scoring, manipulative AI, mass facial recognition) and AI literacy obligation
  • August 2, 2025 — Rules for general-purpose AI models (such as large language models), governance structure and sanctions framework
  • August 2, 2026 — All obligations for high-risk AI systems under Annex III, conformity assessment, CE marking and EU database registration
  • August 2, 2027 — Obligations for high-risk AI in regulated products (medical devices, machinery)
  • August 2, 2030 — Compliance deadline for high-risk AI systems used by public sector organizations that were already placed on the market before August 2, 2026
What changes on August 2, 2026?

    This is the date when most rules actually take effect. Specifically:

    Obligations for providers of high-risk AI

  • Risk management system — Continuous identification, assessment and mitigation of risks throughout the AI system lifecycle
  • Data governance — Requirements for the quality and representativeness of training, validation and test data, including the examination and mitigation of possible biases
  • Technical documentation — Comprehensive documentation on design, development and operation, retained for at least 10 years
  • Logging and traceability — Automatic recording of system activities
  • Transparency — Clear instructions for use for deployers
  • Human oversight — The AI system must be designed for effective human oversight
  • Accuracy, robustness and cybersecurity — Demonstrable performance requirements
  • Conformity assessment — Mandatory assessment before market placement
  • CE marking — Required for high-risk AI systems
  • EU database registration — Registration in the European database for high-risk AI
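To make the logging and traceability obligation concrete, the sketch below records each inference as an append-only JSON line. This is a minimal illustration in Python; the AI Act does not prescribe a format, and all field names here (`system_id`, `input_ref`, `operator`) are our own illustrative choices.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log: one JSON object per line, so every prediction
# can be traced back later. Field names are illustrative, not prescribed.
logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_inference(system_id: str, model_version: str,
                  input_ref: str, output, operator: str) -> dict:
    """Record one inference event as a JSON line and return the event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,   # reference to the input data, not the data itself
        "output": output,
        "operator": operator,     # who or what triggered the inference
    }
    logger.info(json.dumps(event))
    return event
```

The point of `input_ref` is that the log stays traceable without duplicating (possibly personal) input data into the log file itself.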
Obligations for deployers of high-risk AI

  • Use according to provider instructions
  • Organize competent human oversight
  • Monitor operations and report risks
  • Retain log files for at least 6 months
  • For certain public sector use: conduct a fundamental rights impact assessment
AI regulatory sandboxes

    Each EU Member State must have at least one operational AI sandbox by August 2, 2026 — a controlled environment where organizations can develop and test innovative AI systems.

    Which AI systems are high-risk?

    The AI Act classifies AI systems into four risk categories:

| Category | Regulation | Examples |
| --- | --- | --- |
| **Unacceptable risk** | Prohibited (since Feb 2025) | Social scoring, manipulative AI, mass facial recognition |
| **High risk** | Strictly regulated (from Aug 2026) | Biometrics, critical infrastructure, education, recruitment, credit scoring, law enforcement |
| **Limited risk** | Transparency obligations | Chatbots, deepfakes, emotion recognition |
| **Minimal risk** | No specific requirements | Spam filters, AI in games |

    High-risk AI systems include applications in:

  • Biometrics — Identification and categorization of persons
  • Critical infrastructure — Energy, water, transport
  • Education — Admission, student assessment
  • Employment — Recruitment, selection, performance evaluation
  • Essential services — Credit scoring, insurance
  • Law enforcement — Risk assessment, evidence analysis
  • Migration and border control — Visa applications, asylum requests
What are the fines?

    Sanctions are substantial, comparable to GDPR:

| Violation | Maximum fine |
| --- | --- |
| Prohibited AI practices | Up to EUR 35 million or 7% of global annual turnover |
| Other obligations (high-risk) | Up to EUR 15 million or 3% of global annual turnover |
| Providing incorrect information | Up to EUR 7.5 million or 1% of global annual turnover |

    Important: for each tier, the cap is the *higher* of the two amounts; for SMEs and startups, the *lower* of the two applies.
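The "higher of the two, lower for SMEs" logic is easy to get wrong, so here it is spelled out in a few lines. This is a simplified sketch (the tier names are our own shorthand), not legal advice:

```python
def max_fine(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine: the higher of the fixed amount and the
    turnover percentage; for SMEs and startups, the lower of the two."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, pct = tiers[violation]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with EUR 1 billion turnover: max(35M, 70M) = EUR 70 million
max_fine("prohibited_practice", 1_000_000_000)
# An SME with EUR 10 million turnover: min(15M, 300k) = EUR 300,000
max_fine("high_risk_obligation", 10_000_000, is_sme=True)
```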

    Conformity assessment: two routes

    Before a high-risk AI system may be placed on the market, a conformity assessment is mandatory. There are two routes:

    1. Internal control (Annex VI) — The provider self-assesses whether the system meets all requirements. This applies to most high-risk categories. No external body required, but full documentation is mandatory.
    2. Assessment by notified body (Annex VII) — An external party audits the quality management system and technical documentation. Required for biometric AI systems when harmonized standards are not fully applied.

    What should you do now? A practical checklist

    With less than five months until the deadline, here is where to start:

    Step 1: AI inventory

    Map all AI systems in your organization. Also consider AI embedded in existing software and third-party services.

    Step 2: Risk classification

    Determine which risk category each AI system falls into. Use the Annex III criteria to establish whether your systems qualify as high-risk.
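A first rough screening of the inventory can be automated, although the result is only a starting point: the actual classification requires legal analysis of Annex III and its exemptions, not a lookup table. A minimal sketch, with area labels of our own choosing:

```python
# Areas loosely based on Annex III; labels are our own shorthand.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border_control",
    "justice_democratic_processes",
}

def classify(system_area: str, is_prohibited: bool = False) -> str:
    """Rough first-pass risk screening; not a substitute for legal analysis."""
    if is_prohibited:
        return "unacceptable"
    if system_area in ANNEX_III_AREAS:
        return "high"
    return "limited_or_minimal"  # still check transparency duties for e.g. chatbots
```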

    Step 3: Gap analysis

    Compare your current situation with AI Act requirements. Where are the biggest gaps in documentation, risk management and governance?

    Step 4: Establish governance

    Define responsibilities. Who is accountable for AI compliance? How is human oversight organized?

    Step 5: Documentation and registration

    Start building technical documentation, risk assessments and log records. This takes time and must be thorough.

    Step 6: Prepare conformity assessment

    Determine which route applies and start preparation in time.

    The link with ISO/IEC 42001

    ISO/IEC 42001 is the international standard for AI management systems. This standard provides a structured framework that closely aligns with AI Act requirements. By implementing ISO/IEC 42001, you build a solid foundation for AI Act compliance:

  • Risk management for AI systems
  • Governance and responsibilities
  • Documentation and traceability
  • Continuous improvement of AI processes

    Learn more about ISO/IEC 42001 and what this standard means for your organization.

    How uComply helps with AI Act compliance

    uComply helps organizations accelerate demonstrable compliance — including for the AI Act. With the [ISO/IEC 42001](/normen/iso42001) Content Pack in uComply, you get immediate access to:

  • Ready-made controls and measures based on ISO/IEC 42001, aligned with AI Act requirements
  • Risk analysis templates specific to AI systems
  • Policy documents and procedures for AI governance
  • Dashboards providing real-time insight into your compliance status
  • Integration with existing standards — Combining the AI Act with ISO 27001, NIS2 or other standards? uComply prevents duplicate work by automatically linking shared controls

    Additionally, the uComply team supports organizations with consultancy for setting up and implementing AI compliance programs, including gap analyses and internal audits.

    Conclusion

    The AI Act is no longer a future concern — the regulation has been in force since 2024 and the most important obligations apply from August 2, 2026. Organizations that provide or use high-risk AI systems must take action now. The combination of ISO/IEC 42001 and a solid compliance management system like uComply makes the difference between being compliant on time and falling behind.

    Want to know where your organization stands? Schedule a free demo and discover how uComply helps you demonstrably comply with the AI Act.