AI compliance for tech companies using AI in their products
Last updated: April 2026
The EU AI Act is here. Here's what it means for your company and how to get ahead of it.
If your product uses AI or machine learning, you have new compliance obligations under the EU AI Act, and these interact with your existing GDPR requirements in ways that most companies haven't fully mapped out yet.
Key takeaways
The EU AI Act entered into force on 1 August 2024, with obligations phased in through 2027 and beyond; most high-risk obligations apply from 2 August 2026, with extended deadlines for certain systems
Common tech company AI use cases range from "high risk" (employment, healthcare, credit scoring) to "limited risk" (certain chatbots, generative AI) to "minimal risk" (internal analytics, basic automation)
AI compliance integrates with your existing GDPR DPIAs and privacy governance
Your DPO has led AI compliance programs across 100+ organizations including companies at the forefront of AI development
EU AI Act risk levels
Unacceptable risk (banned): AI that manipulates behavior to cause harm, social scoring by governments, real-time biometric identification in public spaces for law enforcement (with limited exceptions). These prohibitions took effect on 2 February 2025.
High risk: AI used in employment (recruitment, performance evaluation), education, credit scoring, healthcare diagnostics, law enforcement, and critical infrastructure. These require conformity assessments, risk management systems, transparency, human oversight, and documentation. Most obligations apply from August 2026, with extended timelines for certain embedded systems.
Limited risk: AI systems that interact with users (chatbots), generate synthetic content (deepfakes, AI-generated text or images), or could otherwise be mistaken for a human. These primarily carry transparency obligations: telling users they're interacting with AI or that content is AI-generated. Transparency obligations under Article 50 apply from 2 August 2026.
Minimal risk: Most other AI applications. No specific obligations, but good practice to follow AI governance principles.
How to determine your risk level
The classification depends on what your AI does, not how it works technically. Key questions:
Does your AI make decisions about employment, credit, education, or healthcare? → Likely high risk
Does your AI interact with users directly (chatbot, virtual assistant)? → Likely limited risk
Does your AI generate synthetic content? → Likely limited risk
Does your AI power internal analytics without directly affecting individuals? → Likely minimal risk
Most tech companies have AI that falls into multiple categories. A SaaS company might have a minimal-risk internal analytics tool AND a limited-risk customer-facing chatbot AND a high-risk AI feature used in HR decisions. Each gets classified separately.
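The triage questions above can be sketched as a simple function. This is an illustrative sketch only, not a legal determination: the attribute names and question order are assumptions made for this example, and real classification follows Annex III and Article 6 of the AI Act. It does illustrate the key point that each system in a portfolio is classified separately.

```python
from enum import Enum


class RiskLevel(Enum):
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


def classify(system: dict) -> RiskLevel:
    """Illustrative triage mirroring the questions above.

    Attribute names are hypothetical; a real assessment maps each
    system against Annex III of the EU AI Act, not a checklist.
    """
    # Decisions about employment, credit, education, or healthcare?
    if system.get("decides_employment_credit_education_healthcare"):
        return RiskLevel.HIGH
    # Interacts with users directly, or generates synthetic content?
    if system.get("interacts_with_users") or system.get("generates_synthetic_content"):
        return RiskLevel.LIMITED
    # Internal analytics and similar: minimal risk by default here.
    return RiskLevel.MINIMAL


# A single company classifies each AI system separately:
portfolio = {
    "internal_analytics": {},
    "support_chatbot": {"interacts_with_users": True},
    "cv_screening": {"decides_employment_credit_education_healthcare": True},
}
for name, attrs in portfolio.items():
    print(f"{name}: {classify(attrs).value}")
```

Running this prints one classification per system, mirroring the SaaS example above: the analytics tool comes out minimal risk, the chatbot limited risk, and the HR screening feature high risk.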
What we do for AI compliance
AI risk classification: determining where your AI systems fall in the framework
AI risk assessments: evaluating the risks of your AI processing activities
AI governance frameworks: policies, procedures, and accountability structures
Documentation: technical documentation required for high-risk AI systems
Transparency implementation: user-facing disclosures and explanations
Data quality and bias assessments
Human oversight mechanisms
Integration with existing GDPR DPIAs (AI processing often triggers DPIA requirements)
EU AI Act timeline
1 August 2024: AI Act entered into force
2 February 2025: Prohibited practices and AI literacy obligations apply
2 August 2025: General-purpose AI (GPAI) obligations and governance rules apply
2 August 2026: Transparency obligations (Article 50) apply; most high-risk obligations apply
2 August 2027: Extended deadline for certain high-risk AI in regulated products
2 August 2028: Extended deadline for certain additional high-risk systems
FAQ
When does the EU AI Act take effect? The EU AI Act entered into force on 1 August 2024, with obligations phased in: prohibited practices from 2 February 2025, general-purpose AI obligations from 2 August 2025, and transparency obligations plus most other requirements from 2 August 2026. Some high-risk AI systems in certain regulated sectors have extended timelines through 2027 and 2028.
Does the EU AI Act apply to US companies? Yes, if your AI systems are used in the EU or the output of your AI affects people in the EU. Similar to how GDPR applies based on where your users are, not where your company is based.
Do I need a separate AI compliance engagement? Not necessarily. If you're already a DPO client, AI compliance is covered within your retainer. AI assessments integrate with existing DPIAs and privacy governance. For standalone AI compliance projects, we offer project-based pricing.
What about AI and GDPR? GDPR already regulates AI in several ways: automated decision-making rights, DPIAs for high-risk processing, data minimization, purpose limitation, and transparency. The EU AI Act adds additional obligations on top. We cover both.
Is there an AI equivalent of a DPO? Not a formal legal requirement yet, though some companies are appointing AI governance officers. We handle AI compliance as part of our DPO role, which is the most practical approach for most tech companies.
What industries are most affected? Healthcare (AI diagnostics classified as high-risk), HR Tech (AI recruitment classified as high-risk), Fintech (credit scoring classified as high-risk), and any company using customer-facing AI (chatbots, recommendations classified as limited risk). See our industry pages: HealthTech, HR Tech, Fintech, AI Companies.
This page is general information, not legal advice. Exact obligations depend on your specific AI systems and jurisdictions.
Related pages