Is Your SaaS High-Risk Under the EU AI Act? A Practical Checklist
Most SaaS founders read "EU AI Act" and instinctively skip to the penalty page — €35M or 7% of global turnover is hard to ignore. Then they read the 144 pages of regulation and give up. Then they skim a LinkedIn thread and decide either (a) "We're just a chatbot, we're fine" or (b) "We're doomed, we need a DPO and three consultants."
Both of those takes are usually wrong. The question isn't "does the AI Act apply to us?" — it almost certainly does if you sell into the EU. The question is which tier you land in, and most SaaS products land in limited or minimal risk, where the obligations are light.
This post is the 15-minute version. A yes/no checklist that maps common SaaS features to Article 6 and Annex III, followed by what to do if any answer comes back yes.
First, the vocabulary you need
The EU AI Act sorts every AI system into one of four tiers:
- Unacceptable risk (Article 5) — banned outright. Social scoring, subliminal manipulation, real-time remote biometric identification in public spaces by law enforcement, emotion recognition in workplaces and schools. If you're doing any of these knowingly, stop reading and call a lawyer.
- High risk (Article 6 + Annex III) — allowed, but you have to meet ~15 obligations before going to market. Risk management, technical documentation, logging, human oversight, conformity assessment, post-market monitoring. The bulk of this article is about determining whether you land here.
- Limited risk (Article 50) — chatbots, deepfake generators, emotion-recognition systems outside banned contexts. One obligation: tell users they're interacting with AI, or that the content is AI-generated.
- Minimal risk — everything else. No AI-Act-specific obligations. (GDPR still applies.)
High-risk classification has two pathways:
- Article 6(1) — your AI is a safety component of a product already covered by EU product-safety law (medical devices, machinery, toys, vehicles). This applies to ~2% of SaaS. You'd know.
- Article 6(2) + Annex III — your AI is used in one of eight "use case" categories the EU flagged as inherently risky (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). This is where most SaaS lands if it lands high-risk at all.
The 10-question checklist
Go through these with your actual product in front of you. If the answer to any is yes, you are very likely providing or deploying a high-risk AI system under Annex III.
1. Do you screen, rank, or filter job candidates?
This includes CV parsing, resume ranking, "best-fit" scoring, automated rejection, skills matching against a job description, or anything that changes which applications a human recruiter actually sees.
- Annex III category: Employment, workers management, access to self-employment
- Keywords that trigger it: recruitment, hiring, screening, CV, resume, candidate, shortlist
- Example SaaS products: HireVue, Pymetrics, Textio, any ATS with "AI matching"
Yes = high-risk. This one is usually unambiguous.
2. Do you monitor, evaluate, or score employee performance?
Task allocation, productivity scoring, promotion/termination recommendations, activity monitoring that feeds a performance rating, automated warnings for "underperformance".
- Annex III category: Employment
- This catches internal-HR SaaS, sales-ops AI, and some customer-success tools.
Yes = high-risk, even if a human "signs off" at the end.
3. Do you influence access to credit, insurance, or financial services?
Credit scoring, loan eligibility, risk assessment and pricing for life or health insurance tied to individuals, eligibility decisions for public assistance benefits.
- Annex III category: Access to essential private and public services
- Note the carve-out: Annex III 5(b) explicitly excludes AI systems used to detect financial fraud, so fraud-risk scoring that blocks transactions (think Stripe Radar) generally falls outside this category. Credit scoring and life/health insurance pricing are squarely inside it.
Yes = high-risk.
4. Do you use biometric data — face, fingerprint, voice, gait — to identify or categorise people?
Face-recognition search against a database (one-to-many identification), biometric categorisation that infers sensitive attributes from biometric data, emotion recognition outside the banned workplace and school contexts.
- Annex III category: Biometrics
- Note: the Act splits identification from verification. Remote identification (who is this person, matched one-to-many) is high-risk. Verification (is this the person they claim to be, one-to-one: face unlock, voice-biometric authentication, matching a live face to a photo ID) is explicitly carved out of Annex III 1(a).
Yes = high-risk, and Article 43 typically requires a third-party notified body for conformity assessment rather than the self-certification path available to other high-risk systems.
5. Do you determine access to, or grade performance in, education or training?
Admissions decisions, automated grading, proctoring/cheating detection, placement tests that feed scholarship or admission decisions, adaptive-learning systems that decide which material a student sees based on ability scoring.
- Annex III category: Education and vocational training
Yes = high-risk.
6. Are you used by public bodies to allocate benefits, emergency services, or healthcare?
Automated triage in an emergency call centre, eligibility decisions for social benefits, healthcare-allocation systems.
- Annex III category: Essential services
- Most B2B SaaS is not selling to municipalities, but if you are, a Fundamental Rights Impact Assessment (Article 27) is usually required before deployment.
Yes = high-risk.
7. Are you used in policing, border control, or judicial decision-making?
Crime-prediction models, risk scoring for individuals, asylum-application triage, legal-research tools that assist judges with fact interpretation.
- Annex III categories: Law enforcement / Migration / Justice
Yes = high-risk. (And if you're a team of one to three people working on this, the public-procurement rules that layer on top of the AI Act will consume your roadmap.)
8. Do you generate synthetic content that could be mistaken for real content?
Deepfake tools, AI voice cloning, AI-generated images or text that can't be distinguished from human output at a glance. Standard content-generation tools fall here.
- Not high-risk. This is limited risk under Article 50. Obligation: mark AI-generated output in a machine-readable format (e.g. C2PA provenance tags) and disclose to users that content is synthetic.
9. Do you operate a chatbot, virtual assistant, or voice agent that interacts directly with end users?
- Not high-risk. This is limited risk under Article 50(1). Obligation: disclose that the user is interacting with an AI system unless it's obvious from context.
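Both Article 50 obligations are cheap to ship. Below is a minimal sketch of the pattern, assuming a Python backend: a fixed disclosure string for the chat UI plus a machine-readable provenance record written next to generated files. The JSON sidecar format and the model name are illustrative stand-ins, not a C2PA implementation; real deployments would use the C2PA SDKs or similar.

```python
import json
from datetime import datetime, timezone

# Article 50(1): shown in the chat UI unless it's obvious from context.
DISCLOSURE = "You are chatting with an AI assistant."

def mark_generated_content(content: bytes, out_path: str) -> None:
    """Write generated content plus a machine-readable provenance
    record (Article 50(2)). Illustrative sidecar format only;
    production systems should use a recognised standard like C2PA."""
    with open(out_path, "wb") as f:
        f.write(content)
    provenance = {
        "ai_generated": True,
        "generator": "example-model-v1",  # hypothetical model name
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(out_path + ".provenance.json", "w") as f:
        json.dump(provenance, f, indent=2)
```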
10. Do you moderate user-generated content, set dynamic pricing, or recommend products?
- Usually minimal risk. Content moderation and recommendation systems are not listed in Annex III. They may pick up obligations under the Digital Services Act and GDPR's automated-decision-making rules (Article 22), but the AI Act's high-risk obligations don't fire.
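If it helps to see the checklist as one decision procedure, here's a sketch in Python. The question-to-field mapping is ours, and the output labels compress a legal analysis into three strings, so treat it as a mental model rather than something you could hand to a lawyer:

```python
from dataclasses import dataclass

@dataclass
class ChecklistAnswers:
    """Yes/no answers to questions 1-10 above."""
    employment_screening: bool      # Q1
    employee_scoring: bool          # Q2
    credit_or_insurance: bool       # Q3
    biometric_identification: bool  # Q4
    education_access: bool          # Q5
    public_services: bool           # Q6
    law_enforcement: bool           # Q7
    synthetic_content: bool         # Q8
    chatbot: bool                   # Q9
    moderation_or_recs: bool        # Q10

def classify(a: ChecklistAnswers) -> str:
    # Q1-Q7 map onto Annex III categories: any "yes" means
    # presumptively high-risk, subject to the Article 6(3) off-ramp.
    if any([a.employment_screening, a.employee_scoring,
            a.credit_or_insurance, a.biometric_identification,
            a.education_access, a.public_services, a.law_enforcement]):
        return "high-risk (check the Article 6(3) exception next)"
    # Q8-Q9 trigger only the Article 50 transparency duties.
    if a.synthetic_content or a.chatbot:
        return "limited-risk (Article 50 disclosure/marking)"
    # Q10 and everything else: no AI-Act-specific obligations.
    return "minimal-risk (GDPR/DSA may still apply)"
```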
What if you answered "yes" to at least one?
Don't panic yet. Article 6(3) gives you a narrow exception — your system is not considered high-risk if it meets at least one of these conditions:
- It performs a narrow procedural task, or
- It improves the result of a previously completed human activity, or
- It detects decision-making patterns without replacing or influencing the human assessment, or
- It performs a preparatory task for an assessment relevant to Annex III.
And, critically, none of those conditions help if the system profiles individuals: profiling of natural persons always keeps an Annex III system high-risk.
This is a real off-ramp, but a narrow one. If your CV-screening tool filters by "has the word Python on the resume" and a recruiter still reads every application, you probably qualify. If it ranks candidates by a model-generated "fit score", you don't.
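The shape of that test is easy to get backwards (any one condition is enough, but profiling trumps everything), so here it is as a sketch. The parameter names are ours, not the Act's:

```python
def article_6_3_exception_applies(
    narrow_procedural_task: bool,
    improves_prior_human_work: bool,
    detects_patterns_without_influencing: bool,
    preparatory_task_only: bool,
    profiles_individuals: bool,
) -> bool:
    """Rough shape of the Article 6(3) derogation: any one of the
    four conditions can take an Annex III system out of high-risk,
    but profiling of natural persons always overrides them."""
    if profiles_individuals:
        return False  # profiling: always high-risk, no off-ramp
    return any([
        narrow_procedural_task,
        improves_prior_human_work,
        detects_patterns_without_influencing,
        preparatory_task_only,
    ])
```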
The Commission will publish guidance on Article 6(3) before the application date, but don't plan around a derogation you haven't verified. Read the Annex III categories in detail before banking on an exception, and note Article 6(4): if you do rely on the derogation, you still have to document the assessment and register the system in the EU database.
If you're high-risk: what you actually have to do
From August 2, 2026 (that's the date on the calendar — here's our breakdown of what August 2 actually means), a high-risk system placed on the EU market needs:
- Risk management system (Article 9) — documented, iterative, lifecycle-long.
- Data governance (Article 10) — training/validation/test data is appropriate, representative, and documented.
- Technical documentation (Article 11 + Annex IV) — ~30 pages minimum, kept up to date.
- Automatic logging (Article 12) — events recorded over the system's lifetime, retained by deployers for ≥6 months (Article 26(6)). A minimal sketch follows this list.
- Transparency for deployers (Article 13) — instructions for use, expected accuracy, known limitations.
- Human oversight (Article 14) — designed into the UI so a human can intervene.
- Accuracy, robustness, cybersecurity (Article 15) — tested, documented, proportionate to risk.
- Quality management system (Article 17) — think ISO-9001 but for AI.
- Conformity assessment (Article 43) — self-certification (Annex VI) for most categories; third-party notified body (Annex VII) for biometrics.
- EU declaration of conformity (Article 47) + CE marking (Article 48).
- Registration in the EU database (Article 49).
- Post-market monitoring (Article 72) — plan + ongoing review.
- Serious incident reporting (Article 73) — no later than 15 days after awareness in general; 2 days for a "widespread" infringement; 10 days when a death is involved.
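Here's the minimal logging sketch promised above. Article 12 mandates traceability, not a schema, so every field name here is our choice; the point is an append-only, timestamped, machine-parseable record of each decision:

```python
import json
import logging
from datetime import datetime, timezone

# Structured, append-only event log: Article 12 wants events recorded
# automatically over the system's lifetime; Article 26(6) makes
# deployers keep them for at least six months.
logger = logging.getLogger("ai_audit")
logger.addHandler(logging.FileHandler("ai_audit.log"))
logger.setLevel(logging.INFO)

def log_decision_event(model_version: str, input_ref: str,
                       output_ref: str, human_override: bool) -> None:
    """Record one model decision. Field names are illustrative;
    the Act mandates traceability, not a particular schema."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output_ref": output_ref,
        "human_override": human_override,
    }
    logger.info(json.dumps(event))
```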
If you're a deployer (you didn't build it, but you use it for one of the Annex III use cases), Article 26 gives you a shorter but non-trivial list: use according to instructions, assign trained human oversight, keep logs ≥6 months, inform workers and affected individuals, and — in public-sector deployments — run a Fundamental Rights Impact Assessment (Article 27).
A plain-English summary
For most B2B SaaS companies, the answer is one of:
- "We're high-risk." — Usually because you touch employment (1–2), credit/insurance (3), or biometrics (4). You now have ~15 months to stand up the obligations above. Start with technical documentation and logging; those are the two that generalise across every use case.
- "We're limited-risk." — You have a chatbot (9) or content generator (8). Your obligation is disclosure. Update your UI copy, ship.
- "We're minimal-risk." — Congratulations. You still have GDPR, the Digital Services Act (if you're a VLOP/VLOSE), and — if you sell to the public sector — a mountain of Article 6(3) documentation you'll still want to keep handy. But Articles 9–15 don't fire for you.
The biggest trap we see: SaaS companies assume "our AI is just text generation, we're fine" and miss the fact that their feature is used inside a high-risk process by their customer — at which point the customer needs the Annex IV documentation from you, and the conversation becomes uncomfortable.
If you want to walk through this with your actual product, the free AI Act classifier takes ~3 minutes and outputs a tier + reasoning you can hand to a lawyer. No signup.
And if you read all the way down here: the August 2, 2026 deadline is real, it hasn't moved, and the rumours of a Digital Omnibus delay are — as of April 2026 — still rumours. Plan like the deadline is real. Here's how to spend the next 90 days.