GDPR and EU AI Act: How They Overlap and Where They Don't

The EU AI Act adds a second EU regulatory framework alongside GDPR. For AI companies that handle personal data, this means dual compliance: GDPR for personal data processing, the AI Act for AI systems. The two frameworks overlap meaningfully in some areas and differ significantly in others.

This page covers what each framework requires, where the work overlaps, and how AI companies should approach combined compliance.

What each framework is

GDPR is the EU's privacy law, applicable since May 2018. It regulates the processing of personal data of EU residents, with maximum fines of 20 million euros or 4 percent of global annual turnover, whichever is higher. It applies regardless of whether the data is processed by an AI system or otherwise.

The EU AI Act is the EU's regulation of AI systems. It entered into force on August 1, 2024 with staged implementation. Prohibited AI practices became applicable on February 2, 2025. General Purpose AI model obligations became applicable on August 2, 2025. High-risk AI system obligations become applicable on August 2, 2026. Compliance for legacy GPAI models placed on the market before August 2, 2025 is required by August 2, 2027.

Maximum penalties under the AI Act are 35 million euros or 7 percent of global annual turnover for prohibited practices, 15 million euros or 3 percent for high-risk violations, and 7.5 million euros or 1.5 percent for other infractions, in each case whichever is higher.
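
For illustration, here is a minimal sketch of how the "whichever is higher" penalty cap works in practice. The turnover figure and the max_fine helper are hypothetical, used only to show the arithmetic.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Maximum possible fine: the fixed cap or the turnover-based cap, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical company with 2 billion euros global annual turnover
turnover = 2_000_000_000
print(max_fine(35_000_000, 0.07, turnover))   # prohibited practices: 140,000,000
print(max_fine(15_000_000, 0.03, turnover))   # high-risk violations: 60,000,000
print(max_fine(7_500_000, 0.015, turnover))   # other infractions: 30,000,000
```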

As of mid-2026, the European Commission's Digital Omnibus proposal to delay the August 2, 2026 high-risk system deadline had failed in trilogue negotiations on April 28, 2026. The deadline remains legally in force.

What each framework regulates

GDPR regulates personal data processing. It does not regulate AI systems specifically. Any AI system that processes personal data is subject to GDPR. AI systems that do not process personal data (for example, a forecasting model trained only on weather data) are not subject to GDPR.

The EU AI Act regulates AI systems, defined broadly as machine-based systems designed to operate with varying levels of autonomy that infer from inputs how to generate outputs that influence physical or virtual environments. The AI Act applies to AI systems regardless of whether they process personal data. A facial recognition system processes personal data and is regulated by both GDPR and the AI Act. A predictive maintenance system for industrial equipment may be regulated by the AI Act only.

The AI Act categorizes AI systems by risk:

  • Prohibited AI practices (Article 5). Examples include cognitive behavioral manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), and emotion recognition in workplaces and educational institutions. These are banned.

  • High-risk AI systems (Article 6 and Annex III). Examples include AI in critical infrastructure, education, employment (including CV screening), essential services (credit, insurance), law enforcement, migration, justice, and democratic processes. These face the most extensive requirements.

  • Limited risk AI systems. Subject to transparency obligations (Article 50). Examples include chatbots, emotion recognition outside prohibited contexts, and AI-generated content.

  • Minimal risk AI systems. Most AI systems fall in this category and have no specific obligations under the AI Act.

  • General Purpose AI models. Foundation models that can be adapted to many tasks. These are regulated as models rather than as a system risk tier, and are subject to specific obligations under Articles 53 through 55.
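
For teams building an internal AI system inventory, a minimal sketch of how these risk tiers might be captured as a data structure. The RiskTier and AISystemRecord names, field names, and example entries are illustrative assumptions, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Labels for the AI Act risk categories described above."""
    PROHIBITED = "prohibited"      # Article 5 practices: banned
    HIGH_RISK = "high_risk"        # Article 6 / Annex III: most extensive requirements
    LIMITED_RISK = "limited_risk"  # Article 50 transparency obligations
    MINIMAL_RISK = "minimal_risk"  # no specific obligations

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    risk_tier: RiskTier
    processes_personal_data: bool  # True means GDPR applies alongside the AI Act
    rationale: str                 # documented analysis of why this tier applies

inventory = [
    AISystemRecord("cv-screening", "rank job applicants", RiskTier.HIGH_RISK,
                   True, "Employment use case listed in Annex III"),
    AISystemRecord("support-chatbot", "answer customer questions", RiskTier.LIMITED_RISK,
                   True, "Chatbot interacting with users: Article 50 transparency applies"),
]
```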

Where they overlap

The overlap between GDPR and the AI Act is significant for AI systems that process personal data:

  • Data Protection Impact Assessments. GDPR Article 35 requires DPIAs for high-risk processing. The AI Act (Article 27) requires fundamental rights impact assessments for high-risk AI systems deployed by public bodies, private providers of public services, and certain deployers in credit and insurance. The two assessments share substantial content and can be coordinated.

  • Transparency obligations. GDPR Articles 13 and 14 require transparency about processing including automated decision-making. AI Act Article 50 requires transparency about AI-generated content and AI interactions. Both require informing affected individuals about how AI is being used.

  • Automated decision-making. GDPR Article 22 regulates solely automated decisions with significant effects on individuals. AI Act high-risk system requirements substantially overlap including human oversight, accuracy, and contestability.

  • Data governance for training. GDPR requires a lawful basis for, and accuracy of, personal data used to train AI systems. AI Act Article 10 requires data governance for high-risk systems including data quality, relevance, and statistical properties.

  • Record keeping. GDPR Article 30 requires Records of Processing Activities. AI Act Article 12 requires automatic logging of AI system events. The content differs but the operational pattern is similar (see the logging sketch after this list).

  • Accountability and documentation. Both frameworks require documented evidence of compliance.

  • Cross-border governance. Both frameworks have extraterritorial reach for systems serving EU residents.
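
On the record-keeping point above, a minimal sketch of the kind of automatic event logging Article 12 points toward. The log_event helper, the event schema, and the JSON Lines file storage are illustrative assumptions; the Act does not prescribe a specific format.

```python
import json
import time
import uuid

def log_event(system_id: str, event_type: str, detail: dict) -> dict:
    """Append one structured event to an AI system's event log (illustrative schema)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "system_id": system_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event_type": event_type,  # e.g. "inference", "human_override", "input_anomaly"
        "detail": detail,
    }
    with open(f"{system_id}_events.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example: record that a human reviewer overrode a model decision
log_event("cv-screening", "human_override",
          {"reviewer": "hr-team", "reason": "candidate ranked too low by the model"})
```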

Where they do not overlap

GDPR specifically covers privacy aspects that the AI Act does not address:

  • Lawful basis for processing personal data. The AI Act assumes lawful processing under other applicable law including GDPR.

  • Data subject rights specific to personal data. Right to access, rectification, erasure, portability, restriction, and objection under GDPR Articles 15 through 21. The AI Act does not create equivalent rights specifically for AI use of personal data.

  • International data transfers. GDPR Chapter V requirements for transfers of personal data outside the EEA. The AI Act does not address data transfers.

  • Special category data. GDPR Article 9 requirements for processing health, biometric, race, religion, and other special categories. The AI Act has specific requirements for certain AI systems using biometric data but does not address other special categories specifically.

  • DPO appointment. GDPR Article 37 requirements for designating a Data Protection Officer; the AI Act imposes no equivalent appointment.

The AI Act covers AI-specific obligations that GDPR does not address:

  • Risk classification of AI systems. Determining whether an AI system is prohibited, high-risk, limited risk, or minimal risk.

  • Conformity assessment. Pre-market conformity assessment for high-risk AI systems, similar to product safety conformity assessment.

  • Technical documentation. AI Act Article 11 requires extensive technical documentation including system architecture, datasets, validation results, and risk management.

  • Quality management system. AI Act Article 17 requires high-risk AI system providers to have a quality management system covering compliance strategy, design specifications, data management, risk management, post-market monitoring, and incident reporting.

  • CE marking. High-risk AI systems must bear CE marking indicating conformity.

  • EU database registration. Providers must register Annex III high-risk AI systems in the EU database established under Article 71 before placing them on the market or putting them into service.

  • Post-market monitoring. Article 72 requires ongoing monitoring of high-risk AI systems in deployment.

  • Serious incident reporting. Article 73 requires reporting of serious incidents involving high-risk AI systems.

  • General Purpose AI obligations. Articles 53 through 55 contain specific obligations for GPAI providers including technical documentation, copyright compliance, training data summaries, and (for GPAI with systemic risk) model evaluation and risk mitigation.

What AI companies need to do

If you build, deploy, or place on the market AI systems that process personal data of EU residents, you are subject to both GDPR and the AI Act.

The combined compliance work typically includes:

  • GDPR foundation. Lawful basis documentation, RoPA, DPIAs, DPO appointment if required, transfer mechanisms, data subject rights operational capability, privacy notices, data processing agreements with processors.

  • AI system classification. Categorize each AI system into prohibited, high-risk, limited risk, or minimal risk under the AI Act. Document the analysis.

  • AI Act compliance for high-risk systems. Technical documentation per Article 11, quality management system per Article 17, conformity assessment, CE marking, EU database registration, post-market monitoring, serious incident reporting, transparency to deployers and users.

  • AI Act compliance for limited risk systems. Transparency obligations under Article 50, including informing users they are interacting with AI and marking AI-generated content (a minimal disclosure sketch follows this list).

  • GPAI obligations if applicable. For foundation model providers, Article 53 obligations on technical documentation, copyright compliance, and training data summaries. For GPAI with systemic risk, additional Article 55 obligations.

  • Coordinated risk and impact assessment. Where GDPR DPIA and AI Act FRIA both apply, conduct as a single coordinated assessment covering both privacy and fundamental rights.

  • Transparency stack. Privacy notices for GDPR, AI interaction notices for AI Act Article 50, deployer information for high-risk systems.

  • Ongoing governance. AI governance committee, model lifecycle management, bias monitoring, accuracy monitoring, incident response covering both AI Act and GDPR.
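
On the limited risk transparency point above, a minimal sketch of wrapping model output with a user-facing notice and machine-readable marking. The with_ai_disclosure helper, the structure, and the field names are illustrative assumptions; Article 50 does not prescribe this exact format.

```python
def with_ai_disclosure(generated_text: str, system_name: str) -> dict:
    """Wrap model output with a user-facing notice and machine-readable marking (illustrative)."""
    return {
        "content": generated_text,
        "notice": f"This response was generated by an AI system ({system_name}).",
        "metadata": {"ai_generated": True, "generator": system_name},
    }

# Example
response = with_ai_disclosure("Your ticket has been escalated.", "support-chatbot")
print(response["notice"])
```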

Common patterns

Most AI companies should start with GDPR if they have not already done so. GDPR has been enforceable since 2018 and is the foundation for any compliance program. The AI Act builds on GDPR rather than replacing it.

For AI Act work, the urgency depends on your AI system's risk classification:

  • Prohibited systems. Stop immediately. Prohibited practices have been illegal since February 2, 2025.

  • GPAI providers. Compliance was required by August 2, 2025. If you provide foundation models and have not addressed this, you are already overdue.

  • High-risk systems. The August 2, 2026 deadline remains in force as of mid-2026. Most companies need 6 to 12 months of work to be ready. If you have not started, this is urgent.

  • Limited risk systems. Article 50 transparency obligations apply from August 2, 2026. This is relatively limited work but should be addressed.

  • Minimal risk systems. No specific obligations beyond voluntary codes of conduct.

  • GPAI legacy models. August 2, 2027 deadline for GPAI models placed on the market before August 2, 2025.

Who handles each

GDPR work is typically led by a DPO (full-time, fractional, or outsourced).

AI Act work is typically led by an AI governance function, which can be the DPO, a dedicated AI governance officer, or a combined function. The AI Act does not create a specific AI Act Officer role analogous to the DPO.

For practical purposes, most tech companies coordinate AI Act and GDPR work under a combined privacy and AI governance function, often with the DPO leading and AI-specific specialists engaged for technical conformity assessment and quality management work.

How Engage Compliance fits

Engage Compliance covers both GDPR and EU AI Act work for technology companies, with greater depth on GDPR and growing depth on the AI Act specifically.

We typically lead GDPR work as fractional DPO and coordinate AI Act compliance alongside, working with the company's engineering and product teams on AI system classification, technical documentation, and operational governance.

For complex high-risk AI Act conformity assessments, we coordinate with specialist conformity assessment bodies and AI safety engineering firms.

Get started

If you are an AI company evaluating combined GDPR and AI Act compliance, book a consultation. We will give you a realistic assessment of what's required for your specific AI systems and the right sequence of work.