
The EU AI Act for SaaS Companies: A Plain-English Field Guide

Complair team · 10 min read

If you sell a SaaS product into the European Union and it touches AI in any way — your own model, an OpenAI or Anthropic API call, a third-party tool embedded in your app, or a "smart" feature you bolted on — the EU AI Act applies to you. The questions are how much and by when.

This post is the field guide we wish someone had handed us when we first read Regulation (EU) 2024/1689. It links to the deeper explainers below, but you can read just this one and walk away with a defensible mental model.

Why SaaS is a special case under the AI Act

The Act was written with a manufacturing mindset — "products," "placing on the market," "CE marking," "notified bodies." SaaS doesn't fit that mould cleanly, which creates two problems:

  1. You're almost certainly playing two roles at once. Most SaaS companies are providers of an AI feature in their own product (they "place it on the market") and deployers of third-party AI tools internally (Copilot, ChatGPT, Gong, Pylon — pick your stack). The two roles carry different obligations and different fine ceilings, and you owe both.
  2. "Significant change" restarts the clock. Article 111(2) gives systems already on the market before August 2, 2026 a one-year grace period β€” unless you make significant changes. SaaS ships every two weeks. In practice, that grace period doesn't exist for you. The real deadline is August 2, 2026 for everything in scope.

If you absorb nothing else from this post, absorb this: the AI Act isn't one regulation you comply with once. It's a regime that applies, per-feature, every time you ship.

The 5 questions every SaaS founder needs to answer

Before you do anything else, run these five in order. They take about 30 minutes and they determine the size of your problem.

1. Are you a provider, a deployer, or both?

  • Provider: you build, train, or substantively brand an AI feature in your product. If your product page says "AI-powered X" and X is your differentiator, you're a provider for that feature.
  • Deployer: you use an AI system built by someone else in a professional context. ChatGPT writing your blog drafts? You're a deployer.

Most SaaS companies are both. You're a provider for whatever makes you special, and a deployer for everything internal. Provider obligations are heavier. Article 26 covers what deployers owe; the provider obligations live in Articles 9–17.
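If it's easier to see the fork as code, here's a minimal sketch. The field names are ours and the booleans are simplifications; the real definitions live in Articles 3(3) and 3(4).

```python
# Our simplified reading of the Article 3 definitions: roles attach per
# system, and you can hold both at once. Field names are illustrative.

def roles_for(system: dict) -> set[str]:
    r = set()
    if system["you_built_or_branded_it"]:    # placed on the EU market under your name
        r.add("provider")
    if system["you_use_it_professionally"]:  # someone else's AI inside your workflow
        r.add("deployer")
    return r

print(roles_for({"you_built_or_branded_it": True, "you_use_it_professionally": False}))
# {'provider'}  - your "AI-powered X" feature
print(roles_for({"you_built_or_branded_it": False, "you_use_it_professionally": True}))
# {'deployer'}  - ChatGPT drafting your blog posts
```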

2. What's the risk tier of each AI system?

Four tiers:

  • Unacceptable (Article 5) — banned outright. Social scoring, manipulative subliminal techniques, untargeted facial scraping, emotion recognition in workplaces and schools. If you're doing any of this, stop today; it's been illegal since February 2025.
  • High-risk (Article 6 + Annex III) — eight categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice and democracy. The full list is in the Annex III explainer; the decision tree is here.
  • Limited risk (Article 50) — chatbots, generative content, deepfakes, emotion recognition outside high-risk contexts. The obligation is transparency: tell people they're interacting with AI. Most SaaS chatbots are limited risk, not high.
  • Minimal risk — everything else. No AI Act obligations (you may still owe under GDPR, the DSA, etc.).

Run the free classifier to get a tier for each of your systems with reasoning you can hand to a lawyer.
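Before handing anything to a lawyer, it can help to see the decision as code. This is a deliberately simplified sketch: the flag names are ours, and the real tests live in Articles 5 and 6, Annex III, and Article 50.

```python
# A simplified sketch of the four-tier decision above. The flags are
# ours; treat this as the shape of the logic, not legal advice.

def risk_tier(system: dict) -> str:
    if system.get("prohibited_practice"):  # Article 5: social scoring, etc.
        return "unacceptable"
    if system.get("annex_iii_category"):   # one of the eight high-risk areas
        return "high-risk"
    if system.get("interacts_with_humans") or system.get("generates_content"):
        return "limited"                   # Article 50 transparency applies
    return "minimal"

print(risk_tier({"interacts_with_humans": True}))  # most SaaS chatbots: 'limited'
```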

3. Are you in scope at all?

The AI Act applies to providers and deployers established outside the EU if their AI system's output is used in the EU. This is the territorial scope rule (Article 2(1)). So no — you can't dodge it by being a Delaware C-Corp; if your SaaS has EU customers, you're in scope.

The narrow exceptions: research, military, and purely personal use are excluded. Open-source models distributed under a free license get partial relief — but only if you're not commercialising them.

4. Is your data also under GDPR?

If your AI processes personal data — and most SaaS does — GDPR and the AI Act apply simultaneously, and their penalties stack: GDPR runs up to €20M or 4% of global turnover, whichever is higher, plus the AI Act up to €35M or 7%. At sufficient scale, the combined ceiling is roughly 11% of global turnover for the worst-case violation.
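To see why the 11% figure only bites at scale, here's a quick sketch of the two ceilings stacking; the function name and the example turnovers are ours.

```python
# Worst-case ceilings: GDPR Art. 83(5) and AI Act Art. 99(3) each cap at
# the higher of a fixed amount or a share of global annual turnover.

def combined_ceiling(turnover_eur: float) -> float:
    gdpr = max(20_000_000, 0.04 * turnover_eur)    # EUR 20M or 4%
    ai_act = max(35_000_000, 0.07 * turnover_eur)  # EUR 35M or 7%
    return gdpr + ai_act

print(combined_ceiling(10_000_000))     # 55M  - the fixed floors dominate small firms
print(combined_ceiling(1_000_000_000))  # 110M - i.e. the ~11% of turnover above
```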

Practically: every high-risk AI system handling personal data needs a DPIA under GDPR Article 35, and, where Article 27 applies, a FRIA under the AI Act. Article 27 explicitly allows you to combine the two into one document, which Complair does by default.

5. Does your product reach people with disabilities?

If you're a B2C SaaS or a B2B SaaS with a consumer-facing surface, the European Accessibility Act (EAA) has been enforceable since June 28, 2025. The technical standard is EN 301 549, which incorporates WCAG 2.1 Level AA. Fines vary by member state — Germany €100k, France €250k, Spain €300k. Different regulator, different deadline, but the same product surface, so it's worth handling alongside the AI Act work.

What August 2, 2026 actually means

The Act applies in waves. Here's the calendar:

Date          What applies
Feb 2, 2025   Article 5 (prohibitions) + Article 4 (AI literacy)
Aug 2, 2025   Chapter V (General-Purpose AI models) + Article 99 (penalties)
Aug 2, 2026   Articles 6–27 (high-risk Annex III) + Article 50 (transparency)
Aug 2, 2027   Annex I (AI as safety component of regulated products)

On August 2, 2026, every Annex III high-risk system you place on the EU market needs to satisfy the full Chapter III stack from day one. There is a Digital Omnibus rumour floating around Brussels that may delay parts of this — we cover why you should not plan around it.

What you actually owe, by role

If you're a provider of a high-risk system

The full fourteen-item stack, rooted in Articles 9–49 with post-market duties in Articles 72 and 73: risk management, data governance, technical documentation matching Annex IV, automatic logging, deployer-facing transparency, human-oversight design, accuracy/robustness/cybersecurity testing, a quality management system, conformity assessment, EU declaration of conformity, CE marking, EU database registration, post-market monitoring, and a serious-incident reporting process.
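If it helps to track this as data, here's a minimal checklist sketch; the article mapping is our reading of Chapter III, and the status handling is purely illustrative.

```python
# The provider stack as a trackable checklist, one row per obligation
# named above. The article mapping is our reading; statuses are yours.

PROVIDER_STACK = [
    ("risk management system",            "Art. 9"),
    ("data governance",                   "Art. 10"),
    ("technical documentation",           "Art. 11 + Annex IV"),
    ("automatic logging",                 "Art. 12"),
    ("transparency to deployers",         "Art. 13"),
    ("human-oversight design",            "Art. 14"),
    ("accuracy/robustness/cybersecurity", "Art. 15"),
    ("quality management system",         "Art. 17"),
    ("conformity assessment",             "Art. 43"),
    ("EU declaration of conformity",      "Art. 47"),
    ("CE marking",                        "Art. 48"),
    ("EU database registration",          "Art. 49"),
    ("post-market monitoring",            "Art. 72"),
    ("serious-incident reporting",        "Art. 73"),
]

done = {"technical documentation", "automatic logging"}  # example state
for item, article in PROVIDER_STACK:
    print("[x]" if item in done else "[ ]", f"{item} ({article})")
```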

Done from a template, this is roughly 4–6 weeks of work for the first system and 1–2 weeks per additional system. Done from scratch, it's a quarter and a hired consultant.

If you're a deployer of a high-risk system

Article 26 is shorter but non-negotiable: use the system within the provider's instructions, assign trained human oversight, retain logs for at least six months, inform workers and affected individuals, and report serious incidents to your national competent authority. Plus a Fundamental Rights Impact Assessment (Article 27) if you're a public body or deploy certain systems such as credit scoring or insurance pricing.
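One concrete way to honour the six-month log floor is to gate your rotation job on it. A minimal sketch, assuming timezone-aware UTC timestamps; the names and the 183-day rounding are ours, not the Act's.

```python
# A sketch of the Article 26 log floor: keep automatically generated logs
# for at least six months (longer if other EU or national law requires).

from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # ~6 months, rounded up, never down

def safe_to_purge(log_timestamp: datetime) -> bool:
    """True only once a record is past the six-month floor.
    Expects a timezone-aware UTC timestamp."""
    age = datetime.now(timezone.utc) - log_timestamp
    return age >= MIN_RETENTION

# Gate your rotation job on this check so cost-driven log expiry
# can never undercut the regulatory minimum.
```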

If you're a provider or deployer of a limited-risk system

Just transparency. Article 50 says: tell users they're talking to AI, label AI-generated content, and disclose deepfakes. The text of the disclosure is up to you — Complair generates one from your inventory, but you can write it yourself in an afternoon.
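For a chatbot, the whole obligation can be as small as a first-turn notice. A minimal sketch with our own placeholder wording, not mandated text:

```python
# One way to meet the Article 50 chatbot disclosure: a notice on the
# user's first turn. The copy below is our placeholder.

AI_NOTICE = "You're chatting with an AI assistant, not a human."

def render_reply(bot_answer: str, is_first_turn: bool) -> str:
    """Prepend the disclosure once, then stay out of the way."""
    return f"{AI_NOTICE}\n\n{bot_answer}" if is_first_turn else bot_answer

print(render_reply("Sure, I can help with that.", is_first_turn=True))
```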

If you're minimal-risk

Nothing. But run the classifier before you celebrate; the most common compliance mistake we see is a SaaS team confidently classifying themselves minimal when they're actually limited or — worse — high.

A phased plan that actually finishes by August

The full sequence is in our 90-day playbook, but the short version:

  • Weeks 1–2: inventory every AI system, classify each one, and identify your role per system (a minimal record sketch follows this list).
  • Weeks 3–6: write the technical documentation for each high-risk system. This is the slow part. Use Annex IV as a literal table of contents.
  • Weeks 7–8: stand up the operational stack — logging, human oversight in the UI, transparency notices, vendor questionnaires for your providers, risk register.
  • Weeks 9–10: register high-risk systems in the EU database, sign your EU declaration of conformity, publish your trust page.
  • Buffer: the last 4 weeks are for the things you didn't anticipate. There will be things you didn't anticipate.
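Here's the inventory record sketch referenced in the weeks 1–2 item. The fields are our suggestion, not a required format; use whatever your classifier or spreadsheet already captures.

```python
# A minimal inventory record for weeks 1-2, one per AI system,
# whether it's yours or a vendor's.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str          # "support chatbot", "lead-scoring model", ...
    vendor: str        # "in-house" or the third party behind it
    role: str          # "provider", "deployer", or "both"
    tier: str          # "unacceptable" / "high" / "limited" / "minimal"
    eu_exposure: bool  # is its output used in the EU? (Article 2 scope)

inventory = [
    AISystem("support chatbot", "in-house", "provider", "limited", True),
    AISystem("ChatGPT for blog drafts", "OpenAI", "deployer", "minimal", True),
]
```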

If you're starting today (April 19, 2026), you have ~15 weeks to August 2 — tight but workable. If you're starting in June, you'll need help.

What this costs

Three honest numbers:

  • DIY with a tool: €49–299/month for the workspace + 60–120 hours of founder/engineer time. ~€3k–7k all-in if you cost your time at €50/hour.
  • DIY with a one-time consultant review: above + €3k–8k for a 5-day legal review at the end. Worth it if you have a paying enterprise customer who'll ask.
  • Outsourced to a Big-4-style consultancy: €25k–80k. Generally overkill for SMB SaaS; the consultants will hand you a template and bill you for assembling your own answers.

The right answer for almost every team under €5M ARR is DIY with a tool, plus one external review for the high-risk systems.

What we'd skip

A few things people stress about that you genuinely don't need:

  • A separate AI governance committee. For teams under 50, your existing security/compliance lead owning this is fine.
  • Multilingual documentation. English is accepted by every market-surveillance authority in practice.
  • Perfect model explainability per prediction. Article 13 requires transparency to deployers — documenting expected behaviour and accuracy bounds is enough.
  • ISO 42001 certification before August. Helpful eventually; not required.

Where to go next

The AI Act isn't going to be enforced lightly, and it isn't going to be delayed. The Commission staffed up, the national authorities are publishing enforcement plans, and enterprise procurement teams are already asking for the EU declaration of conformity in RFPs. You don't have to be perfect by August 2 — but you do need to have started.


Automate what this post explains.

Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.
