EU AI Act GPAI Compliance

The EU AI Act imposes specific obligations on providers of General Purpose AI models, separately from the risk-based classification framework that applies to AI systems. For foundation model providers, GPAI obligations under Articles 53 to 55 have been in force since August 2, 2025. Compliance for GPAI models placed on the market before August 2, 2025 is required by August 2, 2027.

This page covers what GPAI obligations require, who is in scope, and how to build compliance.

What is a General Purpose AI model

The AI Act defines a General Purpose AI model as an AI model, including where trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

In practice, the definition captures large language models, foundation models for text, image, audio, and multimodal generation, and other models intended for broad task generality.

The Act distinguishes between GPAI models generally and GPAI models with systemic risk. The latter face additional obligations under Article 55.

GPAI providers in scope

Article 53 obligations apply to providers placing GPAI models on the market in the EU. The obligations apply regardless of where the provider is established. Major US, UK, and Asian foundation model providers are in scope when their models are made available in the EU.

The Act provides reduced obligations for free and open source GPAI models. Where a model is released under a free and open source license that allows access, use, modification, and distribution, and its parameters, including weights, model architecture, and information on model usage, are publicly available, the technical documentation and downstream information obligations do not apply. The copyright policy and training data summary obligations still apply, and the exemption does not extend to GPAI models with systemic risk.

Article 53 obligations

Providers of GPAI models must:

  • Draw up and keep up-to-date technical documentation of the model. The documentation must include details specified in Annex XI: general description of the model, training data summary, training process, computational resources used, energy consumption, intended use, performance metrics, and design specifications.

  • Make available information and documentation to providers of AI systems intending to integrate the GPAI model. The information must enable downstream providers to understand the model's capabilities and limitations and to comply with their own obligations under the Act.

  • Put in place a policy to comply with EU copyright law. The policy must identify and respect rights reservations (opt-outs) expressed by rightsholders, including machine-readable text and data mining reservations under Article 4(3) of the Copyright Directive 2019/790.

  • Draw up and make publicly available a sufficiently detailed summary about the content used for training of the GPAI model. The summary must follow the template provided by the AI Office.
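The machine-readable opt-out in the copyright bullet above is, in the simplest and most common case, signalled through a site's robots.txt file. A minimal sketch of checking that signal with the Python standard library, assuming a hypothetical crawler name ExampleAIBot:

```python
from urllib import robotparser

def tdm_opt_out(robots_txt: str, user_agent: str, url: str) -> bool:
    """True if robots.txt disallows `user_agent` from fetching `url`,
    treated here as one machine-readable rights-reservation signal."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch(user_agent, url)

# Hypothetical rightsholder robots.txt blocking one AI crawler.
sample = """User-agent: ExampleAIBot
Disallow: /
"""
```

A robots.txt check alone is not a complete copyright policy: rightsholders also express reservations through site metadata, dedicated reservation protocols, and contractual terms, so a documented policy would need to cover those channels as well.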

These obligations apply from August 2, 2025 for new GPAI models. For GPAI models placed on the market before August 2, 2025, compliance is required by August 2, 2027.

Article 55 obligations for GPAI with systemic risk

GPAI models with systemic risk face additional obligations:

  • Model evaluation. Perform model evaluation in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risk.

  • Systemic risk assessment and mitigation. Assess and mitigate possible systemic risks at the Union level, including their sources, that may stem from the development, placing on the market, or use of GPAI models with systemic risk.

  • Serious incident tracking and reporting. Keep track of, document, and report to the AI Office and where appropriate to national competent authorities, without undue delay, relevant information about serious incidents and possible corrective measures.

  • Cybersecurity protection. Ensure an adequate level of cybersecurity protection for the GPAI model with systemic risk and the physical infrastructure of the model.

How systemic risk is determined

A GPAI model is presumed to have systemic risk if it has high impact capabilities evaluated based on appropriate technical tools and methodologies, including indicators and benchmarks. The presumption applies when the cumulative amount of computation used for training the model exceeds 10^25 floating point operations (FLOPs).

The European Commission can also designate a GPAI model as having systemic risk based on the criteria in Annex XIII, including parameter count, dataset quality and size, training compute, input and output modalities, benchmark performance, and market reach in the Union, such as the number of registered business and end users.

Providers must notify the Commission without delay, and in any event within two weeks, when their GPAI model meets, or is expected to meet, the systemic risk threshold.
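As a rough illustration of the threshold, training compute is often estimated with the rule of thumb of about 6 FLOPs per parameter per training token. This is an estimation heuristic, not a method prescribed by the Act, and the example figures below are hypothetical:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate: ~6 FLOPs per parameter per training token.
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    # The Act's presumption applies when cumulative training compute
    # exceeds 10^25 floating point operations.
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 presumption threshold.
```

A compute estimate like this is only a first screening step: the presumption can be rebutted, and the Commission can designate models below the threshold based on the Annex XIII criteria.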

Code of Practice

The AI Office facilitated a Code of Practice for GPAI providers, published in final form in July 2025. Adherence to the Code of Practice is a recognised way to demonstrate compliance with Article 53 and Article 55 obligations until harmonised standards are in place.

Successive drafts were circulated during 2024 and 2025 before final publication, and most major foundation model providers engaged with the drafting process; many have since signed the Code.

For GPAI providers, adherence to the Code of Practice is a practical compliance path that simplifies regulator engagement.

Penalties

Penalties for GPAI providers reach up to 15 million euros or 3 percent of total worldwide annual turnover, whichever is higher, for violations of GPAI-specific obligations.

Practical GPAI compliance

For most GPAI providers, the compliance work involves:

  • Technical documentation drafting. Comprehensive documentation following Annex XI requirements. Significantly more detailed than typical model cards.

  • Training data summary preparation. Following the template published by the AI Office.

  • Copyright compliance policy. Documented policy covering opt-out compliance, rightsholder mechanism, and review processes.

  • Downstream provider information package. Documentation and information made available to integrators.

  • Systemic risk assessment if applicable. Detailed risk assessment with mitigation measures.

  • Model evaluation and red teaming if applicable. Documented adversarial testing.

  • Cybersecurity controls if applicable. Specific controls for systemic risk models.

  • Code of Practice alignment. Engagement with and adherence to the AI Office Code of Practice.

How Engage Compliance helps

For GPAI providers, Engage Compliance supports GPAI-specific obligations alongside broader GDPR and AI Act compliance work. Specific support includes:

  • GPAI applicability and systemic risk threshold analysis.

  • Technical documentation drafting per Annex XI.

  • Copyright compliance policy design.

  • Training data summary preparation.

  • Downstream provider documentation.

  • Code of Practice alignment.

  • Coordination with model evaluation specialists for systemic risk requirements.

  • GDPR compliance for personal data used in training and inference.

For specialist work including systemic risk model evaluation, adversarial testing, and AI safety engineering, we coordinate with specialist AI safety firms.

Get started

If you are a GPAI provider evaluating Article 53 or Article 55 compliance, book a consultation.