EU AI Act Compliance Services

The EU AI Act is the world's first comprehensive AI regulation. It entered into force on August 1, 2024 and applies in stages. As of mid-2026, prohibited practices have been banned since February 2, 2025, General Purpose AI obligations have applied since August 2, 2025, and the August 2, 2026 deadline for high-risk AI systems remains legally binding after the Digital Omnibus delay proposal failed on April 28, 2026.

If you build, deploy, or place on the market AI systems used in the EU, the AI Act applies to you. This page covers what the AI Act requires, the urgent 2026 deadlines, and how to build compliance.

Does the AI Act apply to you?

The AI Act applies to:

  • Providers of AI systems placed on the market or put into service in the EU.

  • Deployers of AI systems located in the EU.

  • Providers and deployers of AI systems whose output is used in the EU.

  • Importers and distributors of AI systems in the EU.

Most AI companies serving EU customers are in scope as providers. Companies using AI tools internally on EU employees or customers may be in scope as deployers.

The AI Act applies regardless of where your company is headquartered, similar to GDPR's extra-territorial reach.

AI system risk categories

The Act categorizes AI systems into four risk levels:

  • Prohibited AI practices (Article 5). Banned since February 2, 2025. Examples include harmful cognitive-behavioral manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), and emotion recognition in workplaces and educational institutions.

  • High-risk AI systems (Article 6, Annex III). Subject to the most extensive requirements. Annex III lists eight categories including critical infrastructure, education, employment (including CV screening, performance monitoring, hiring decisions), essential private and public services, law enforcement, migration and border control, justice and democratic processes, and biometrics. Many SaaS products fall in scope, particularly HR Tech, EdTech, FinTech credit scoring, and insurance underwriting.

  • Limited risk AI systems (Article 50). Subject to transparency obligations. Examples include chatbots, emotion recognition outside prohibited contexts, AI-generated content (deepfakes), and biometric categorization.

  • Minimal risk AI systems. Most AI systems fall here with no specific obligations beyond voluntary codes of conduct.
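The four tiers above can be sketched as a first-pass triage helper for an AI system inventory. The tags and keyword sets below are hypothetical illustrations, not the Act's wording; actual classification requires legal analysis of Articles 5, 6, and 50 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 banned practices
    HIGH = "high"               # Article 6 / Annex III
    LIMITED = "limited"         # Article 50 transparency obligations
    MINIMAL = "minimal"         # voluntary codes of conduct only

# Hypothetical keyword sets for a first-pass inventory sweep --
# not a substitute for legal classification.
PROHIBITED_PRACTICES = {"social-scoring", "workplace-emotion-recognition"}
ANNEX_III_AREAS = {"employment", "education", "credit-scoring",
                   "insurance-underwriting", "critical-infrastructure"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake", "biometric-categorisation"}

def triage(tags: set[str]) -> RiskTier:
    """Return the highest-obligation tier any tag triggers."""
    if tags & PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if tags & ANNEX_III_AREAS:
        return RiskTier.HIGH
    if tags & TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage({"chatbot"}).value)                 # limited
print(triage({"employment", "chatbot"}).value)   # high
```

Note the ordering: a system touching an Annex III area is high-risk even if it also triggers an Article 50 transparency duty, so the sweep checks the most demanding tier first.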

General Purpose AI models

Separately from the risk classification, providers of General Purpose AI models (foundation models that can be adapted to many tasks) face specific obligations under Articles 53 to 55.

GPAI obligations applicable since August 2, 2025 include technical documentation, a copyright compliance policy (including respect for text-and-data-mining opt-outs reserved by rightsholders), publication of a training data summary, and documentation support for downstream providers.

GPAI models with systemic risk (presumed where cumulative training compute exceeds 10^25 floating-point operations, per Article 51) face additional Article 55 obligations including model evaluation, systemic risk assessment and mitigation, serious incident reporting, and cybersecurity protection.
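The compute presumption lends itself to a trivial screening check. This sketch applies only the 10^25 FLOP presumption; the Commission can also designate a model as systemic-risk on other criteria, which no formula captures.

```python
# Presumption threshold for GPAI systemic risk (Article 51):
# cumulative training compute above 1e25 floating-point operations.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """First-pass screen only; designation can also occur on
    other criteria regardless of training compute."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(5e24))  # False
```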

Providers of GPAI models placed on the market before August 2, 2025 must be compliant by August 2, 2027.

High-risk AI system requirements

High-risk AI systems face the most extensive Act requirements:

  • Risk management system. Article 9 requires continuous risk identification, evaluation, and mitigation throughout the AI system lifecycle.

  • Data governance. Article 10 requires that training, validation, and testing data be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete, with appropriate statistical properties.

  • Technical documentation. Article 11 requires extensive documentation enabling assessment of compliance.

  • Record-keeping. Article 12 requires automatic logging of events.

  • Transparency. Article 13 requires information to deployers, including instructions for use, system characteristics, performance metrics, and limitations.

  • Human oversight. Article 14 requires human oversight measures appropriate to the risk.

  • Accuracy, robustness, and cybersecurity. Article 15 requirements.

  • Quality management system. Article 17 requires a quality management system covering compliance strategy, design specifications, data management, risk management, post-market monitoring, and incident reporting.

  • Conformity assessment. Article 43 requires a conformity assessment before the system is placed on the market or put into service.

  • CE marking and EU database registration. High-risk AI systems must bear CE marking (Article 48), and Annex III systems must be registered in the EU database established under Article 71 (registration obligations in Article 49).

  • Post-market monitoring. Article 72 ongoing monitoring requirements.

  • Serious incident reporting. Article 73 requirements for reporting serious incidents to authorities.
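The Article 12 record-keeping duty in the list above is the most directly implementable requirement. The sketch below uses a hypothetical JSON-lines schema: the Act requires that events such as the period of each use be automatically recorded, but prescribes no particular format, and the field names here are illustrative only.

```python
import io
import json
import time
import uuid

# Minimal sketch of Article 12-style automatic event logging.
# Schema is a hypothetical illustration, not mandated by the Act.
def log_event(sink, system_id: str, event: str, **details) -> dict:
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),      # makes the period of use traceable
        "system": system_id,
        "event": event,
        "details": details,
    }
    sink.write(json.dumps(record) + "\n")  # append-only JSON lines
    return record

# Usage with an in-memory sink; production would write to append-only
# storage with retention appropriate to the system's intended purpose.
buf = io.StringIO()
log_event(buf, "cv-screener-v2", "inference",
          input_ref="candidate-1234", outcome="shortlisted")
print(json.loads(buf.getvalue())["system"])  # cv-screener-v2
```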

Penalties

Up to 35 million euros or 7 percent of global annual turnover, whichever is higher, for prohibited AI practices.

Up to 15 million euros or 3 percent, whichever is higher, for non-compliance with most other obligations, including high-risk system requirements.

Up to 7.5 million euros or 1 percent, whichever is higher, for supplying incorrect, incomplete, or misleading information to authorities.
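Because each tier combines a fixed amount with a turnover percentage and (for undertakings) the higher of the two applies, the applicable ceiling is a simple maximum. A sketch of the arithmetic, using the tier figures above:

```python
# Penalty ceilings under Article 99: the higher of a fixed amount
# and a share of total worldwide annual turnover.
TIERS = {
    "prohibited": (35_000_000, 0.07),
    "high_risk": (15_000_000, 0.03),
    "information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    fixed, pct = TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# A company with 2 billion EUR turnover: the 7% cap dominates.
print(max_fine("prohibited", 2_000_000_000))  # 140000000.0
```

For small companies the fixed amounts dominate; the percentage caps only bite once turnover exceeds roughly 500 million euros in each tier.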

Combining AI Act with GDPR

AI systems that process personal data are subject to both the AI Act and GDPR. The two frameworks have meaningful overlap.

GDPR Article 35 DPIAs and AI Act Article 27 fundamental rights impact assessments share substantial content and can be conducted as a single coordinated assessment.

GDPR Articles 13 and 14 transparency obligations and AI Act Article 50 transparency obligations are complementary.

GDPR Article 22 automated decision-making rights overlap with AI Act high-risk system requirements.

Most growing AI companies handle both through a coordinated privacy and AI governance function, often led by the DPO with AI-specific specialists engaged for technical work.

How Engage Compliance helps

EU AI Act compliance is included in our DPO services for AI companies. Specific work includes:

  • AI system inventory and classification (prohibited, high-risk, limited, minimal, GPAI).

  • Risk management system design for high-risk systems.

  • Technical documentation drafting per Article 11.

  • Quality management system design per Article 17.

  • Fundamental rights impact assessment.

  • Transparency obligations design.

  • GPAI compliance for foundation model providers.

  • Coordination with notified bodies for conformity assessment where required.

Get started

If you are an AI company evaluating or executing EU AI Act compliance, book a consultation. The August 2, 2026 deadline is approaching for high-risk systems; preparation work typically takes 6 to 12 months.