EU AI Act High-Risk Classification Guide

The EU AI Act's most consequential obligation set applies to high-risk AI systems. Classification as high-risk triggers the requirements of Articles 8 to 17, including risk management, data governance, technical documentation, and a quality management system, together with conformity assessment, CE marking, and post-market monitoring.

For AI companies, accurate classification is the first step in compliance. Misclassification in either direction creates problems: treating a low-risk system as high-risk adds unnecessary cost, while treating a high-risk system as low-risk creates enforcement exposure.

This page covers how to classify AI systems under Article 6 and Annex III of the AI Act.

The two paths to high-risk classification

An AI system is high-risk under the AI Act if either:

  • Path 1: It is intended to be used as a safety component of a product covered by the EU harmonization legislation listed in Annex I, or is itself such a product, and that product must undergo third-party conformity assessment under that legislation. Examples: AI in medical devices regulated under the MDR, AI in machinery regulated under the Machinery Regulation, AI in aviation regulated under the EU aviation safety framework overseen by EASA.

  • Path 2: It is referred to in Annex III. Annex III lists eight categories of AI use cases that are considered high-risk regardless of the product context.

For most software and SaaS companies, Path 2 (Annex III) is the relevant classification path.
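
To make the two-path test concrete, the short Python sketch below models it as a boolean check. The function and parameter names are our own illustration rather than official terminology, and the Article 6(3) carve-out covered later in this guide is deliberately left out.

    # Illustrative only: a simplified boolean sketch of the two-path test.
    # Names are our own, not official terminology.

    def is_high_risk(
        annex_i_safety_component: bool,
        third_party_conformity_assessment_required: bool,
        annex_iii_use_case: bool,
    ) -> bool:
        """Return True if either high-risk path applies."""
        # Path 1: safety component of (or itself) an Annex I product that
        # must undergo third-party conformity assessment.
        path_1 = annex_i_safety_component and third_party_conformity_assessment_required
        # Path 2: the intended purpose falls under an Annex III use case.
        path_2 = annex_iii_use_case
        return path_1 or path_2

    # Example: a CV-screening tool is not an Annex I safety component,
    # but recruitment is an Annex III use case, so Path 2 applies.
    assert is_high_risk(False, False, True)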

Annex III categories

The eight Annex III categories are:

  • Biometrics. Remote biometric identification, biometric categorization based on sensitive attributes, emotion recognition (outside prohibited contexts).

  • Critical infrastructure. AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity.

  • Education and vocational training. AI systems used to determine access or admission to education, evaluate learning outcomes (including where those outcomes steer the learning process), assess the appropriate level of education a person will receive, or monitor and detect prohibited student behavior during tests.

  • Employment, workers management, and access to self-employment. AI systems used for recruitment and selection (targeted job advertising, CV filtering, candidate evaluation), decisions on promotion and termination, task allocation based on behavior or personal characteristics, and monitoring and evaluation of performance and behavior.

  • Access to and enjoyment of essential private and public services and benefits. AI systems used to evaluate eligibility for public assistance benefits and services, evaluate creditworthiness or establish credit scores (excluding financial fraud detection), assess risk and set pricing in life and health insurance, or classify emergency calls and dispatch emergency services.

  • Law enforcement. AI systems used by law enforcement for various purposes including risk assessment of natural persons, polygraph and emotion detection, evidence reliability assessment, profiling.

  • Migration, asylum, and border control. AI systems used by competent authorities for purposes such as risk assessments of natural persons, examination of applications for asylum, visas, and residence permits, and detection or identification of persons in migration and border control contexts.

  • Administration of justice and democratic processes. AI systems used to assist judicial authorities in researching and interpreting facts and law, or intended to influence the outcome of an election or referendum or voting behavior.
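
For teams building an AI system inventory or screening script, it can help to give these eight categories a fixed, machine-readable vocabulary. A minimal Python sketch (the member names are our own shorthand, not official labels from the Act):

    # Illustrative only: the eight Annex III categories as a fixed vocabulary
    # for inventory records or screening scripts.
    from enum import Enum

    class AnnexIIICategory(Enum):
        BIOMETRICS = 1
        CRITICAL_INFRASTRUCTURE = 2
        EDUCATION = 3
        EMPLOYMENT = 4
        ESSENTIAL_SERVICES = 5
        LAW_ENFORCEMENT = 6
        MIGRATION_AND_BORDER = 7
        JUSTICE_AND_DEMOCRACY = 8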

The Article 6(3) carve-out

Article 6(3) provides a narrow carve-out from high-risk classification for AI systems referred to in Annex III that do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. The carve-out applies if the AI system:

  • Is intended to perform a narrow procedural task; or

  • Is intended to improve the result of a previously completed human activity; or

  • Is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or

  • Is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

If an AI system performs profiling of natural persons, however, it is always high-risk regardless of whether any of the conditions above is met.

The carve-out must be documented through a specific assessment before the system is placed on the market or put into service, and the documentation must be provided to national competent authorities on request. Providers relying on the carve-out must still register the AI system in the EU database.
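
Read as a rule, the carve-out logic is: any one of the four conditions can lift an Annex III system out of high-risk classification, but profiling overrides all of them. A minimal Python sketch under that reading (field names are our own, not taken from the Act):

    # Illustrative sketch of the Article 6(3) logic described above.
    from dataclasses import dataclass

    @dataclass
    class CarveOutAssessment:
        narrow_procedural_task: bool
        improves_completed_human_activity: bool
        detects_patterns_without_replacing_human_review: bool
        preparatory_task_only: bool
        profiles_natural_persons: bool

    def carve_out_applies(a: CarveOutAssessment) -> bool:
        if a.profiles_natural_persons:
            return False  # profiling keeps the system high-risk in all cases
        return (
            a.narrow_procedural_task
            or a.improves_completed_human_activity
            or a.detects_patterns_without_replacing_human_review
            or a.preparatory_task_only
        )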

Practical classification by tech company sector

HR Tech and EdTech. AI systems used for CV screening, candidate ranking, employee performance evaluation, student assessment, or admissions are almost always high-risk under Annex III categories 3 (education) or 4 (employment). The Article 6(3) carve-out is rarely available for the core use cases.

FinTech with credit scoring or insurance pricing. AI systems used for creditworthiness assessment or insurance pricing are high-risk under Annex III category 5. The carve-out generally does not apply.

Critical infrastructure AI (cloud, telecoms, energy). AI systems used as safety components in critical digital infrastructure are high-risk under Annex III category 2.

Identity verification and KYC. AI systems performing remote biometric identification may be high-risk under Annex III category 1. Biometric verification used solely to confirm that a specific natural person is the person they claim to be (one-to-one authentication) is generally not high-risk.

Healthcare AI. AI systems used in medical devices are typically high-risk under Path 1 (Annex I) as safety components of MDR-regulated products.

Customer service AI and chatbots. Generally not high-risk unless they make or materially influence decisions affecting access to essential services. Subject to Article 50 transparency obligations.

Marketing and personalization AI. Generally not high-risk. May be subject to Article 50 transparency obligations if generating content or interacting with humans in ways requiring disclosure.

Cybersecurity AI. Generally not high-risk unless directly affecting access to services or making decisions about natural persons.

What high-risk classification triggers

Once classified as high-risk, the AI system is subject to:

  • Risk management system (Article 9)

  • Data governance and quality (Article 10)

  • Technical documentation (Article 11)

  • Record-keeping (Article 12)

  • Transparency and provision of information to deployers (Article 13)

  • Human oversight (Article 14)

  • Accuracy, robustness, and cybersecurity (Article 15)

  • Quality management system (Article 17)

  • Conformity assessment (Article 43)

  • CE marking (Article 48)

  • EU database registration (Article 49; the database itself is established under Article 71)

  • Post-market monitoring (Article 72)

  • Reporting of serious incidents (Article 73)

This is substantial work. Most companies need 6 to 12 months to implement these requirements for a single high-risk AI system.
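
Teams tracking implementation often keep this list as a simple checklist keyed by article number. A minimal sketch (the structure is our own, not prescribed by the Act):

    # Illustrative tracking structure only: the obligations above, keyed by
    # AI Act article number, with an implementation status per item.
    HIGH_RISK_OBLIGATIONS = {
        9: "Risk management system",
        10: "Data governance and quality",
        11: "Technical documentation",
        12: "Record-keeping",
        13: "Transparency and provision of information to deployers",
        14: "Human oversight",
        15: "Accuracy, robustness, and cybersecurity",
        17: "Quality management system",
        43: "Conformity assessment",
        48: "CE marking",
        49: "EU database registration",
        72: "Post-market monitoring",
        73: "Serious incident reporting",
    }

    status = {article: "not started" for article in HIGH_RISK_OBLIGATIONS}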

How to conduct the classification

Step 1: AI system inventory. List all AI systems your company provides, deploys, or uses. Include both customer-facing and internal AI systems.

Step 2: Annex III screening. For each AI system, identify whether it falls in any Annex III category based on intended use.

Step 3: Article 6(3) carve-out assessment where applicable. If Annex III applies, assess whether one of the four carve-out conditions applies and whether the system involves profiling.

Step 4: Path 1 screening for safety components. For AI systems used in regulated products, identify whether Path 1 high-risk classification applies.

Step 5: Documentation. Document the classification analysis and conclusion for each AI system. For systems relying on the Article 6(3) carve-out, this documentation must be made available to national competent authorities on request, so it should be retained.

Step 6: Periodic review. Classification must be revisited when AI systems change materially or when intended use changes.
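
One way to keep the six steps auditable is a per-system record that captures each conclusion together with assessment and review dates. A minimal Python sketch with hypothetical field names:

    # Illustrative sketch of a per-system classification record covering the
    # six steps. The point is that each conclusion is documented and dated so
    # it can be revisited when the system or its intended use changes.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ClassificationRecord:
        system_name: str                      # Step 1: inventory entry
        intended_purpose: str
        annex_iii_category: str | None        # Step 2: Annex III screening
        carve_out_claimed: bool               # Step 3: Article 6(3) assessment
        carve_out_rationale: str
        annex_i_safety_component: bool        # Step 4: Path 1 screening
        high_risk: bool                       # Step 5: documented conclusion
        assessed_on: date = field(default_factory=date.today)
        review_due: date | None = None        # Step 6: periodic review trigger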

How Engage Compliance helps

AI system classification under the EU AI Act is part of our AI governance work for clients. Specific support includes:

  • AI system inventory across customer-facing and internal systems.

  • Annex III screening and Article 6(3) carve-out assessment.

  • Path 1 safety component analysis for products in regulated sectors.

  • Documentation of classification decisions.

  • Coordinated classification with GDPR analysis where AI systems process personal data.

For complex classifications involving multiple regulated sectors or novel AI architectures, we coordinate with specialist AI safety and regulatory engineering firms.

Get started

If you need AI system classification under the AI Act, book a consultation.