
EU AI Act Compliance Checklist for SaaS (2026 Edition)

Liviu Florin Sirghea · 11 min read

TL;DR — If your SaaS ships any AI feature and sells in Europe, the EU AI Act applies to you. Most SaaS companies are Deployers of limited-risk systems, not Providers of high-risk ones — but misclassification can cost up to €35M or 7% of global revenue. This 30-item checklist walks you through inventory, classification, documentation, and ongoing obligations. A printable PDF version is at the bottom.

The EU AI Act (Regulation (EU) 2024/1689) is the world's first horizontal AI law. It entered into force on 1 August 2024. Obligations phase in over three years. By August 2026, every SaaS company selling into the EU needs a documented compliance posture — or they're betting their company on regulators looking the other way.

This article gives you the checklist. It does not give you legal advice. If you're high-risk, you need counsel. If you're limited-risk (you probably are), you mostly need a system to track 30 specific things — and that's what this checklist is for.

The 4 Risk Tiers — Plain English Version

The Act sorts AI systems into four buckets based on harm potential:

  • Unacceptable risk — banned outright. Examples: social scoring, real-time biometric ID in public, subliminal manipulation. What you have to do: don't build it. (Article 5)
  • High risk — permitted with strict obligations. Examples: AI for hiring, credit scoring, medical devices, critical infrastructure, education grading, law enforcement. What you have to do: CE marking, conformity assessment, technical documentation, human oversight, post-market monitoring. (Annex III)
  • Limited risk — permitted with transparency obligations. Examples: chatbots, deepfakes, emotion recognition, general-purpose AI used in apps. What you have to do: tell users they're interacting with AI; label AI-generated content. (Article 50)
  • Minimal risk — no obligations. Examples: spam filters, recommendation engines without manipulation, AI-enabled video games. What you have to do: none — but document your classification anyway.

~85% of SaaS companies fall into limited-risk or minimal-risk. If you're using GPT-5, Claude 4.7, or a similar foundation model inside your product to summarise, classify, or generate content, you are almost certainly a Deployer of a limited-risk system. Your obligations are real but light.

The trap: assuming you're minimal-risk when one of your features sneaks into Annex III. Hiring AI is the most common surprise — if you sell HR software with any candidate-ranking, scoring, or filtering logic, you are high-risk and the obligations are heavy.

Provider vs Deployer — Which One Are You?

The Act distinguishes two main roles:

  • Provider: You develop and place an AI system on the market under your own name (e.g., OpenAI, Anthropic, your fine-tuned proprietary model).
  • Deployer: You use an AI system in your product but didn't build the model (e.g., your SaaS calls Claude's API to summarise documents — you are a Deployer of Claude, not a Provider).

You can be both — Provider of your own classifier, Deployer of someone else's foundation model. Most SaaS companies are pure Deployers. Deployers have lighter obligations than Providers, but they still need to:

  1. Use the system as instructed by the Provider
  2. Inform people when they're interacting with AI
  3. Keep logs (where technically possible)
  4. Conduct fundamental rights impact assessment if high-risk
  5. Cooperate with authorities

If you fine-tune an open-source model and ship it as the core of your product, you may have crossed into Provider territory. This is legally consequential — get advice.

The Deadline Timeline

The Act phases in. Mark your calendar:

  • 2 February 2025 — Prohibited practices ban active. AI literacy obligations active for all staff dealing with AI.
  • 2 August 2025 — Governance rules active. GPAI Provider obligations active. National authorities designated.
  • 2 August 2026 — Most SaaS obligations active. Limited-risk transparency, high-risk obligations for systems in Annex III, codes of practice in force.
  • 2 August 2027 — Full applicability for high-risk AI used as safety components in regulated products.

If you're reading this in May 2026, you have 3 months to be ready. If you're already past 2 August 2026, you're already non-compliant for some obligations. Either way: start the checklist today.

The Checklist — 30 Items in 5 Sections

Section A — AI System Inventory (Items 1–6)

You can't comply with what you don't know you have. Most SaaS companies underestimate their AI footprint by 3–5x because they forget about embedded vendor AI (Intercom Fin, Zendesk Resolution Bot, HubSpot Breeze, etc.).

  • [ ] 1. List every AI feature in your product, including sub-features (e.g., "draft email" inside a CRM counts as one system).
  • [ ] 2. List every third-party AI service your product calls (OpenAI, Anthropic, Cohere, Mistral, Hugging Face Inference, AWS Bedrock, Vertex AI).
  • [ ] 3. List every AI tool your team uses internally that touches customer data (Notion AI, Cursor, Copilot, Linear AI, Granola, Otter.ai).
  • [ ] 4. For each, document: input data type, output data type, decision authority (advisory vs. autonomous), human-in-the-loop status. (A register-entry sketch follows this list.)
  • [ ] 5. Identify which systems process personal data (triggers GDPR Article 22 alongside the AI Act).
  • [ ] 6. Identify which systems make decisions with "legal or similarly significant effects" on individuals (hiring, credit, eligibility, pricing).
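
To make items 4–6 concrete, here is a minimal sketch of one register entry as a Python record. The field names are my own convention, not anything the Act mandates; a spreadsheet row with the same columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI register (items 1-6). Field names are illustrative, not mandated."""
    name: str                        # e.g. "Draft email (CRM sub-feature)"
    vendor: str                      # e.g. "Anthropic via AWS Bedrock"
    input_data: list[str]            # what goes in (item 4)
    output_data: list[str]           # what comes out (item 4)
    decision_authority: str          # "advisory" or "autonomous" (item 4)
    human_in_the_loop: bool          # does a human approve outputs? (item 4)
    processes_personal_data: bool    # triggers GDPR Article 22 analysis (item 5)
    significant_effects: bool        # legal or similarly significant effects (item 6)
    risk_tier: str = "unclassified"  # filled in during Section B

# Example entry:
draft_email = AISystemRecord(
    name="Draft email (CRM sub-feature)",
    vendor="Anthropic via AWS Bedrock",
    input_data=["contact name", "email thread"],
    output_data=["generated email body"],
    decision_authority="advisory",
    human_in_the_loop=True,
    processes_personal_data=True,
    significant_effects=False,
)
```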

Section B — Risk Classification (Items 7–12)

For each AI system in your inventory, classify against Annex III. The classification determines everything downstream.

  • [ ] 7. Walk each system through the Article 6 decision tree — biometric ID, critical infrastructure, education, employment, essential services, law enforcement, migration, justice/democracy. (A rough code version follows this list.)
  • [ ] 8. If any system lands in Annex III, check the Article 6(3) carve-out — does it perform a "narrow procedural task" or improve previous human work without replacing it? If yes, it may exit high-risk.
  • [ ] 9. Document the classification decision in writing with reasoning. (When auditors ask "why limited-risk?", you need a paper trail.)
  • [ ] 10. Have a second reviewer (legal, ops, founder) sanity-check classifications quarterly.
  • [ ] 11. For limited-risk systems, confirm Article 50 transparency obligations apply — chatbot disclosure, AI-generated content labelling, deepfake watermarking.
  • [ ] 12. Re-classify whenever a system materially changes (new training data, new capability, new use case).
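
As a sanity check on items 7, 8, and 11, the decision tree can be roughed out in a few lines. This is a triage sketch, not a legal test: the category names are paraphrases of Annex III, and anything it flags still needs the written justification from item 9.

```python
# Crude first-pass triage for items 7, 8, and 11. Flags a tier for human review;
# it is not a legal determination. Area names paraphrase Annex III.
ANNEX_III_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice and democracy",
}

def first_pass_tier(annex_area, narrow_procedural_task, user_facing_or_generative):
    if annex_area in ANNEX_III_AREAS:
        if narrow_procedural_task:
            # Article 6(3) carve-out candidate (item 8): document the reasoning in writing.
            return "possibly not high-risk: get the 6(3) analysis reviewed"
        return "high-risk: Annex III"
    if user_facing_or_generative:
        return "limited-risk: Article 50 transparency applies"
    return "minimal-risk: document the classification anyway"

print(first_pass_tier("employment", False, True))  # HR candidate-ranking -> high-risk
print(first_pass_tier(None, False, True))          # chatbot -> limited-risk
```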

Section C — Documentation (Items 13–20)

Documentation is the bulk of the work. The good news: most of it overlaps with what GDPR already requires you to maintain. Reuse aggressively.

  • [ ] 13. AI policy — internal document covering acceptable use, prohibited use, vendor approval process, incident reporting. (1–3 pages.)
  • [ ] 14. AI register — searchable inventory from Section A, kept current. (Spreadsheet or tool.)
  • [ ] 15. Risk assessment per system — at minimum: hazard, likelihood, impact, mitigation, residual risk. (Template available below; a minimal sketch follows this list.)
  • [ ] 16. Data governance log — for high-risk: training data sources, bias testing results, lineage. For limited-risk: which data flows through the system.
  • [ ] 17. Human oversight specification — who reviews AI outputs, when, with what authority to override.
  • [ ] 18. Transparency notice — public-facing statement (in your privacy policy or a dedicated AI notice) explaining what AI you use, for what, and how users can object.
  • [ ] 19. Incident response plan — for AI-specific incidents: hallucination causing harm, biased output, model drift, prompt injection.
  • [ ] 20. Vendor due diligence file — for every third-party AI provider, a signed DPA + AI Act addendum + their published technical documentation.
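
For item 15, the minimum viable risk assessment is a handful of fields per system. Here is a sketch using a hypothetical 1–5 scoring convention; the Act does not prescribe any particular scoring scheme.

```python
# Minimal per-system risk assessment (item 15). The 1-5 scales are our own
# convention; the AI Act does not prescribe a scoring scheme.
assessment = {
    "system": "Draft email (CRM sub-feature)",
    "hazard": "Hallucinated claim sent to a customer under the user's name",
    "likelihood": 2,            # 1 = rare ... 5 = frequent, before mitigation
    "impact": 3,                # 1 = negligible ... 5 = severe
    "mitigation": "Human must review and click Send; no auto-send path",
    "residual_likelihood": 1,   # likelihood after the mitigation is in place
}
assessment["inherent_risk"] = assessment["likelihood"] * assessment["impact"]           # 6
assessment["residual_risk"] = assessment["residual_likelihood"] * assessment["impact"]  # 3
```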

Section D — Operational Controls (Items 21–26)

Documentation without operational controls is theatre. Auditors look at logs, not policies.

  • [ ] 21. AI literacy training — required for every employee using AI for work (Article 4). A 30-minute session is enough; document attendance.
  • [ ] 22. Logging — capture inputs and outputs for AI systems where technically feasible. For LLM-based features, log the prompt, retrieved context, and response. Retain for 6+ months. (A logging sketch follows this list.)
  • [ ] 23. Human-in-the-loop checkpoints — for any decision affecting a user, require human review or document why automation is acceptable.
  • [ ] 24. Bias and fairness testing — at least annually for any system that scores, ranks, or filters people. Quarterly for high-risk.
  • [ ] 25. Drift monitoring — track model performance over time. Set alert thresholds.
  • [ ] 26. User opt-out mechanism — for any AI feature affecting individual users, provide a documented way to request human review.
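
Item 22 is often the quickest win in this section. Below is a minimal sketch of structured logging for an LLM-backed feature, assuming a JSON-lines audit file; the field names are illustrative, not mandated. The same records can later feed the drift monitoring in item 25.

```python
import json
import time
import uuid

def log_ai_call(log_file, feature, prompt, retrieved_context, response):
    """Append one structured record per LLM call (item 22), one JSON object per line.
    Redact or pseudonymise personal data before logging where GDPR requires it."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),                       # needed to enforce the retention window
        "feature": feature,                      # which AI feature made the call
        "prompt": prompt,                        # what was sent to the model
        "retrieved_context": retrieved_context,  # RAG context, if any
        "response": response,                    # what the model returned
    }
    log_file.write(json.dumps(record) + "\n")

with open("ai_audit.jsonl", "a") as f:
    log_ai_call(f, "draft-email", "Summarise this thread...", "", "Hi Ana, ...")
```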

Section E — Vendor and Buyer Obligations (Items 27–30)

Your buyers and your vendors are watching. This is where compliance turns into commercial leverage.

  • [ ] 27. Vendor questionnaires — proactively send AI Act due-diligence questionnaires to your foundation-model providers and embedded-AI vendors. (Free template at end of article.)
  • [ ] 28. Buyer questionnaires — maintain a canonical answer library for CAIQ-Lite, SIG, and AI-specific buyer questionnaires. Answer once, reuse forever.
  • [ ] 29. DPA + AI Act addendum — update your data processing agreement template with AI-specific clauses (training-on-customer-data ban, model-output ownership, sub-processor approval).
  • [ ] 30. Public AI page — at yourcompany.com/ai, list every AI feature, every vendor, your transparency posture. Expect enterprise buyers to require this by Q3 2026.

Common Mistakes SaaS Founders Make

I've worked with 50+ SaaS teams on AI Act readiness, and the same mistakes keep recurring:

  1. Confusing AI Act with GDPR. They overlap but cover different ground. GDPR is about personal data. AI Act is about AI systems regardless of data type. You need both.
  2. Assuming foundation-model providers cover you. They don't. OpenAI's compliance covers OpenAI. You're still a Deployer with your own obligations.
  3. Confusing "we're not high-risk" with "we have no obligations." Limited-risk has real transparency obligations under Article 50.
  4. Forgetting internal AI use. Notion AI summarising customer support tickets is a third-party AI processing personal data. It's in scope.
  5. Treating compliance as one-and-done. This is a continuous monitoring problem. Every model update, every new feature, every new vendor changes the picture.
  6. Buying Vanta or Drata thinking it covers AI Act. It mostly doesn't. They're optimised for SOC 2. AI Act needs a different model.

What to Do This Week

If you're starting from zero:

  1. Block 2 hours on your calendar. Open a spreadsheet.
  2. Run Section A items 1–6. List everything.
  3. For each item, assign a tier guess (high / limited / minimal).
  4. Send this article to one team member and have them sanity-check the tier assignments.
  5. Pick one system you're unsure about. Walk it through the Article 6 decision tree carefully.

That gets you 70% of the way. The remaining sections are project-managed work over 4–8 weeks.

Free Resources

I built Complair specifically because I went through this checklist myself and wished it didn't suck.

If you'd rather have a tool that runs the whole checklist, generates the docs, and tracks your obligations across all 30 items, that's what Complair does. We're in design-partner mode — first 10 EU SaaS teams get free Scale tier for 6 months.

FAQ

Q: Is the EU AI Act in force right now? Yes. It entered into force on 1 August 2024. Obligations phase in through August 2027. Most SaaS-relevant obligations land 2 August 2026.

Q: My company is based in the US. Does the AI Act apply? If you sell to EU users or your AI system's output is used in the EU, yes. Article 2 has extraterritorial reach similar to GDPR.

Q: What are the penalties? Up to €35M or 7% of global annual turnover for prohibited practices. Up to €15M or 3% for high-risk violations. Up to €7.5M or 1% for incorrect information to authorities. (Article 99.)

Q: We use ChatGPT internally. Are we in scope? Yes — as a Deployer. Your obligations include AI literacy training (Article 4), informing employees about AI use, and ensuring no prohibited practices. They're light but real.

Q: We're pre-revenue. Do we still need to comply? Yes. Penalties scale to revenue but the obligations are absolute. Better to set up the system now while you have 3 systems to inventory than later when you have 30.

Q: How does the AI Act interact with GDPR? They're complementary. GDPR governs personal data. The AI Act governs AI systems. If your AI processes personal data (most do), both apply. Article 22 GDPR (automated decision-making) and AI Act high-risk obligations overlap heavily.

Q: Do I need a CE mark? Only for high-risk AI systems. Limited-risk systems do not need CE marking, but they do need transparency disclosures.

Complair

Automate what this post explains.

Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.
