Annex III Explained — 8 Categories That Make Your AI High-Risk
Annex III of the EU AI Act is the list that decides whether your SaaS is high-risk. Article 6(2) says: if the use case of your AI system falls into any of these eight categories, you are high-risk — unless you qualify for the narrow Article 6(3) derogation.
This post walks through all eight categories in order, with the same structure for each: what the EU text says, real SaaS examples, what obligations that triggers, and a "you're probably NOT in this category if…" sanity check. The goal is to spare you a week of LinkedIn panic.
If you haven't run the decision-tree yet, start with the 10-question checklist — it tells you whether Annex III applies at all before you read a 3,000-word explainer on it.
What Annex III actually is
Annex III is a list of use cases, not a list of AI technologies. The EU wasn't trying to ban LLMs or computer-vision models generically — those are tools, and the Act is mostly tool-agnostic. Instead, the drafters identified eight contexts where AI decisions have an outsized impact on individual rights: access to jobs, credit, education, justice, and so on.
The classification logic is: "Does this AI system make decisions — or materially influence decisions — in one of these eight contexts?" If yes, high-risk. If no, you're probably in the limited- or minimal-risk tier.
Two clarifications before we dive in:
- It doesn't matter how the AI works. A deterministic rules engine, a fine-tuned LLM, and a 2012-era logistic regression are treated identically if they're used for, say, CV screening. The Act targets risk-to-individuals, not technical sophistication.
- "Used in" doesn't always mean "sold for". If you're a general-purpose LLM provider and your customer uses your API to build a CV-screening feature, that customer becomes the "provider" of a high-risk system. You (the LLM vendor) have obligations under Chapter V (GPAI), but you're not automatically high-risk for the downstream use.
1. Biometrics
Remote biometric identification, biometric categorisation according to sensitive attributes, emotion recognition.
What it means. Any system that uses physical or behavioural characteristics — face, fingerprint, voice, iris, gait, keystroke dynamics — to identify a specific person (one-to-many), categorise them into a sensitive group (gender, ethnicity, disability, religion), or infer emotional state.
SaaS examples.
- Identity-verification SaaS that matches a live selfie to a passport photo (Onfido, Veriff)
- Voice-biometric auth for call centres
- Face-recognition-based physical access control
- Emotion-detection tools used in call centres ("sentiment scoring" from voice/video)
Obligations triggered. Full Articles 9–15 stack — but there's a twist. Article 43 generally routes biometric systems to third-party notified-body conformity assessment (Annex VII), not the self-certification path available to other Annex III systems. That's significant: allow ~4–9 months for notified-body review.
You're probably NOT in this category if… you do one-to-one verification on the user's own device (e.g. Face ID unlocking a phone). The Act distinguishes server-side identification against a database of enrolled individuals from local verification. The former is high-risk; the latter generally isn't.
Special note on emotion recognition. Emotion recognition is prohibited (Article 5, not high-risk) in workplaces and educational institutions. It's only Annex III high-risk when used outside those contexts — for example, in entertainment or driver-drowsiness systems.
2. Critical infrastructure
Safety components of critical digital infrastructure, road traffic, or the supply of water, gas, heating and electricity.
What it means. AI used as a safety component — not as a back-office analytics tool — in critical infrastructure. "Safety component" is defined in Article 3: a component that fulfils a safety function, or whose failure or malfunctioning endangers health, safety, or the functioning of the infrastructure.
SaaS examples.
- Grid-balancing AI that controls energy dispatch
- AI-driven traffic-signal optimisation at the network level
- Intrusion-detection AI guarding a critical digital service where its failure would cause a widespread outage
Obligations triggered. Full Articles 9–15 stack. Self-certification path under Annex VI is usually available.
You're probably NOT in this category if… your AI is used inside a utility for office productivity, customer support, or back-office analytics. A CRM with AI tagging that happens to be sold to a water company is not a safety component. The test is whether failure of your AI causes a safety or service incident, not whether the buyer is classified as critical infrastructure.
3. Education and vocational training
AI determining access to education, admission decisions, evaluating learning outcomes, monitoring cheating, assigning students to institutions.
What it means. AI that shapes an individual student's path — who gets in, what mark they get, whether they're flagged for cheating.
SaaS examples.
- Admissions-scoring tools (common in US ed-tech, now arriving in the EU)
- Automated essay grading
- Proctoring software with cheating-detection AI (ProctorU, Respondus)
- Adaptive learning platforms that route students to different content based on ability scoring
- Scholarship-allocation systems
Obligations triggered. Full stack, plus Article 27 Fundamental Rights Impact Assessment if deployed by public bodies.
You're probably NOT in this category if… your AI helps teachers with admin tasks (lesson-plan generation, gradebook search) without making or materially influencing a decision about a specific student. A ChatGPT-powered feedback tool that a teacher uses as a draft is typically out of scope under Article 6(3) — the teacher still does the assessment.
The Article 6(3) derogation is strongest here. If your AI only detects patterns without influencing decisions, or if it performs a preparatory task that a human then completes, you may not be high-risk. Document your reasoning.
4. Employment, workers' management and access to self-employment
Recruitment, candidate selection, task allocation, performance monitoring, promotion, termination, activity evaluation.
What it means. AI that affects whether someone gets hired, promoted, fired, or graded at work. This is the single most common category SaaS lands in.
SaaS examples.
- ATS platforms with AI matching or "fit scoring" (Greenhouse, Lever, Workable — any that have enabled AI features)
- CV-parsing + ranking tools
- Video-interview analysis (HireVue)
- Skills-assessment platforms that shortlist (Pymetrics, Plum)
- Gig-work task allocation (Uber, Deliveroo — the routing/bonus algorithms)
- Employee monitoring with productivity scoring
- Performance-management SaaS with AI evaluation or termination recommendations
Obligations triggered. Full stack. Deployers must also inform workers before deployment (Article 26(7)) and cannot ignore works-council consultation rules under national labour law.
You're probably NOT in this category if… your AI is used for writing job descriptions, drafting offer letters, or scheduling interviews — admin tasks that don't change which candidate is actually assessed. The line is crossed when AI output changes which applications a human ever sees.
The trap: "A human still signs off" is not an escape hatch. If the AI produces a ranked list and the human only reviews the top 10%, the AI has materially influenced the decision. That's still high-risk.
5. Access to essential private and public services
Credit scoring, insurance pricing tied to individuals, emergency dispatch, social-benefit eligibility, healthcare triage and allocation.
What it means. AI that decides whether an individual can access money, insurance, emergency help, benefits, or healthcare.
SaaS examples.
- Credit-scoring APIs (Zilch, Klarna's underwriting, Upstart)
- Insurance-pricing models that produce individual quotes
- Fraud-detection systems that block transactions or freeze accounts (Stripe Radar sits on the edge: Annex III point 5(b) explicitly excepts AI used for detecting financial fraud from the credit-scoring entry, but tools that cut off a consumer's access to payments remain a genuine grey area)
- Life/health-insurance underwriting
- Emergency-call triage systems (999/112 operators)
Obligations triggered. Full stack, plus Article 27 FRIA when deployed by public bodies.
You're probably NOT in this category if… your AI is used for marketing-audience segmentation, dynamic pricing across an entire customer base (not individually tailored), or internal fraud analytics that don't block transactions. The test is whether an individual is denied access.
6. Law enforcement
Individual risk assessment, polygraph-style truth detection, evidence evaluation, crime prediction, profiling of suspects or victims.
What it means. AI used by law enforcement agencies to assess individuals. Most SaaS products will never touch this.
SaaS examples.
- Predictive-policing tools (PredPol, HunchLab)
- AI-assisted evidence analysis (deepfake detection in court contexts)
- "Threat scoring" for individuals
Obligations triggered. Full stack, plus registration in a restricted-access subsection of the EU database (Article 49(4)) rather than the public section. Article 27 FRIA is mandatory.
You're probably NOT in this category if… you sell general cybersecurity software to corporate SOCs. Law-enforcement here means a public authority exercising police powers, not a private SOC investigating a breach.
7. Migration, asylum and border control
Polygraph-style systems, risk assessment of migrants/asylum seekers, document verification at borders, examination of asylum or visa applications.
What it means. Narrow, specific, and almost exclusively sold to governments.
SaaS examples.
- iBorderCtrl-style "lie detector" border tools
- Automated triage of asylum applications
- Document-authenticity scanners at border crossings
Obligations triggered. Full stack + Article 27 FRIA + restricted EU database entry.
You're probably NOT in this category if… you're a B2B SaaS selling to anyone other than a national border authority or migration ministry. If you have to ask, you're not in this category.
8. Administration of justice and democratic processes
AI assisting a judicial authority in researching facts, interpreting law, applying law to facts. Also: systems intended to influence voting or election outcomes.
What it means. AI used in court decisions, sentencing support, or democratic processes (elections, referendums).
SaaS examples.
- Legal-research tools used by judges to interpret precedent (some LexisNexis and Westlaw features)
- AI-assisted sentencing tools (COMPAS-style, highly controversial)
- Voter-targeting and micro-persuasion tools intended to influence voting behaviour or election outcomes (content targeting, micro-persuasion)
Obligations triggered. Full stack.
You're probably NOT in this category if… you sell legal-research SaaS to law firms (not courts) or to in-house legal teams. The category is specifically about AI assisting judicial authorities — sitting judges, sentencing panels, tribunals. Private lawyers using AI to draft briefs are out of scope.
What about the Article 6(3) exception?
Even if your use case lands in one of the eight categories above, you might escape high-risk classification. Article 6(3) lets you out if — and only if — your AI system:
- (a) Performs a narrow procedural task, or
- (b) Improves the result of a previously completed human activity, or
- (c) Detects decision-making patterns without replacing or influencing a human assessment, or
- (d) Performs a preparatory task to an assessment relevant to Annex III
AND — this is the kicker — it does not profile individuals as defined in GDPR Article 4(4).
In practice, very few AI systems pass the "does not profile" test. An LLM that reads a CV and extracts years-of-experience is probably fine under (d). An LLM that ranks candidates by "fit score" is profiling, and doesn't get the derogation.
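For the code-minded, the same test as a boolean sketch. The flag names and the helper are illustrative, and "does it profile?" carries the GDPR Article 4(4) meaning, which is a judgement call no boolean will make for you.

```python
# Sketch of the Article 6(3) test described above. The flag names and
# helper are illustrative; whether something counts as "profiling" is a
# GDPR Article 4(4) question that this code cannot answer for you.
def article_6_3_derogation_applies(*, narrow_procedural_task: bool,
                                   improves_completed_human_activity: bool,
                                   detects_patterns_without_influencing: bool,
                                   preparatory_task_only: bool,
                                   profiles_individuals: bool) -> bool:
    if profiles_individuals:
        return False          # profiling always keeps an Annex III system high-risk
    return (narrow_procedural_task
            or improves_completed_human_activity
            or detects_patterns_without_influencing
            or preparatory_task_only)


# The CV example from above: extraction vs. fit scoring.
extractor_escapes = article_6_3_derogation_applies(
    narrow_procedural_task=False, improves_completed_human_activity=False,
    detects_patterns_without_influencing=False, preparatory_task_only=True,
    profiles_individuals=False)    # True -- preparatory task, no profiling

ranker_escapes = article_6_3_derogation_applies(
    narrow_procedural_task=False, improves_completed_human_activity=False,
    detects_patterns_without_influencing=False, preparatory_task_only=True,
    profiles_individuals=True)     # False -- fit scoring is profiling
```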
If you think you qualify for Article 6(3), document your reasoning in the technical file. National market-surveillance authorities will challenge this, and your documentation is the first thing they'll ask for.
The unified obligations list for high-risk systems
No matter which of the eight categories you land in, the obligations stack is the same (Articles 9–15, 17, 43, 47, 49, 72, 73 — summarised in the checklist post). The differences are:
| Category | Conformity path | FRIA required? | EU database |
|---|---|---|---|
| Biometrics | Third-party (Annex VII) | If public deployment | Public |
| Critical infrastructure | Self (Annex VI) | No (Article 27 carves out point 2) | Public |
| Education | Self (Annex VI) | If public deployment | Public |
| Employment | Self (Annex VI) | If public deployment | Public |
| Essential services | Self (Annex VI) | Yes for credit scoring & insurance (5(b)–(c)); otherwise if public deployment | Public |
| Law enforcement | Self (Annex VI) | Yes | Restricted |
| Migration/border | Self (Annex VI) | Yes | Restricted |
| Justice & democratic processes | Self (Annex VI) | If public deployment | Public |
Biometrics is the outlier that triggers a notified body. Every other category lets you self-certify, which is faster and cheaper (~€15–50k instead of €60–200k), but still not trivial.
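If you keep your AI-system inventory in code or config, those per-category deltas are easy to carry around as data. A sketch with made-up field names, mirroring the table above:

```python
# Illustrative only: the per-category deltas from the table above, as data
# you could drop into an AI-inventory config. Field names are made up.
OBLIGATION_DELTAS = {
    "biometrics":              {"conformity": "notified body (Annex VII)", "fria": "if public deployment", "eu_db": "public"},
    "critical_infrastructure": {"conformity": "self (Annex VI)", "fria": "no (Article 27 carve-out)", "eu_db": "public"},
    "education":               {"conformity": "self (Annex VI)", "fria": "if public deployment", "eu_db": "public"},
    "employment":              {"conformity": "self (Annex VI)", "fria": "if public deployment", "eu_db": "public"},
    "essential_services":      {"conformity": "self (Annex VI)", "fria": "yes for 5(b)-(c); else if public", "eu_db": "public"},
    "law_enforcement":         {"conformity": "self (Annex VI)", "fria": "yes", "eu_db": "restricted"},
    "migration_border":        {"conformity": "self (Annex VI)", "fria": "yes", "eu_db": "restricted"},
    "justice_democracy":       {"conformity": "self (Annex VI)", "fria": "if public deployment", "eu_db": "public"},
}
```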
When does all of this apply?
August 2, 2026 for Annex III categories (Articles 6–27, the bulk of the regulation). The separate Annex I category (AI as a safety component of regulated products) fires a year later, on August 2, 2027. GPAI-model obligations and penalty provisions have applied since August 2025. Here's what to do in the next 90 days if you just realised you're high-risk.
A last sanity check
If you read this post and aren't sure which category you fit — or whether you fit at all — run the free classifier. It takes 3 minutes, produces a tier and reasoning, and is accurate enough to brief a lawyer without starting from zero.
The most expensive mistake we see is not "I didn't realise I was high-risk" — it's "I realised I was high-risk in July 2026 and I have three weeks." Every Annex III obligation compounds. Technical documentation alone is 30–50 pages, done properly. Start now, finish by May 2026, leave a buffer. Then sleep.
Automate what this post explains.
Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.
EU AI Act Compliance Checklist for SaaS (2026 Edition)
30-step EU AI Act compliance checklist for SaaS founders. Risk tiers, deadlines, documentation, and a free PDF download. Updated 2026.
The AI Act Vendor Questionnaire: What to Ask Your AI Providers (and the Red Flags)
If you deploy a third-party AI system, Article 26 makes you responsible for verifying your provider. Here's the questionnaire you should be sending — 25 questions across 6 categories, with the red-flag answers.
DPIA Template for AI Systems: A Plain-English Walkthrough (GDPR Article 35 + AI Act Article 27)
When you need a DPIA for an AI system, what Article 35(7) actually requires, and how to combine it with a Fundamental Rights Impact Assessment so you write the document once instead of twice.