The Annex III Checklist: A 10-Question Test for Whether Your AI Is High-Risk
This is the short version. If you want the full explanation of each category and the rationale behind it, read the Annex III explainer. If you want a yes/no test you can copy into your engineering planning doc and fill in by lunch, this is it.
How to use this checklist
Run it once per AI system in your inventory. Not per product, not per team — per AI system. If you have a "smart suggestion" feature and a chatbot and a fraud scorer, you have three systems and you run the checklist three times.
A "yes" to any single question means the system is presumptively high-risk under Annex III of the EU AI Act. A "no" to all ten means it's not in Annex III, and you fall back to limited or minimal risk.
If you get a yes, jump to the Article 6(3) escape hatch at the end before assuming you're stuck with the full obligations.
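If your AI inventory lives in code rather than a spreadsheet, the per-system rule is easy to enforce mechanically. A minimal Python sketch using the three example systems from above (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str         # one checklist run per system, not per product or team
    description: str

# Hypothetical inventory: three features means three systems, three checklist runs.
inventory = [
    AISystem("smart-suggestions", "inline suggestion feature"),
    AISystem("support-chatbot", "customer-facing chatbot"),
    AISystem("fraud-scorer", "transaction fraud scoring model"),
]

for system in inventory:
    print(f"Annex III checklist due for: {system.name}")
```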
The 10 questions
1. Biometric identification or categorisation
Does the system identify, verify, or categorise individuals based on biometric data — face, voice, fingerprint, iris, gait, behaviour patterns, or emotion?
→ Yes = Annex III §1. High-risk. Note: if the system identifies people remotely (CCTV-style) in real time in publicly accessible spaces for law enforcement purposes, this is prohibited under Article 5, not just high-risk.
2. Critical infrastructure
Is the system used as a safety component in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity?
→ Yes = Annex III §2. High-risk. The bar is "safety component" — analytics dashboards generally don't qualify; control systems do.
3. Education and vocational training
Does the system: (a) determine access or admission to education and vocational training programmes; (b) evaluate learning outcomes; (c) assess the appropriate level of education for individuals; or (d) detect prohibited behaviour during tests?
→ Yes = Annex III §3. High-risk. AI tutors that help students learn are not in scope; AI that evaluates them is.
4. Employment, workers management, or self-employment access
Does the system: (a) recruit or select people (including ad placement, application filtering, evaluation); (b) make decisions affecting work-related contractual relationships, promotion, termination; (c) allocate tasks based on personal traits; or (d) monitor or evaluate worker performance?
→ Yes = Annex III §4. High-risk. The most common SaaS trigger by a wide margin. Includes Applicant Tracking Systems with AI features, performance-management AI, AI-driven task allocation, AI candidate sourcing.
5. Access to essential private and public services
Does the system: (a) evaluate eligibility for essential public benefits or services (healthcare, social security); (b) evaluate creditworthiness or establish credit scores (excluding fraud detection); (c) risk-assess or price life or health insurance; or (d) prioritise emergency response (police, fire, ambulance)?
→ Yes = Annex III §5. High-risk. Note the carve-out: pure fraud detection is not high-risk under §5(b), even if it incidentally affects creditworthiness.
6. Law enforcement
Does the system: (a) assess the risk of an individual becoming a victim or offender; (b) act as a polygraph or emotion-detection tool; (c) evaluate evidence reliability in investigations; (d) assess personality traits or past criminal behaviour when evaluating offending risk; or (e) profile individuals in the course of detection, investigation, or prosecution of criminal offences?
→ Yes = Annex III §6. High-risk. Almost exclusively a public-sector trigger; flagged here because some SaaS companies sell into police and intelligence services.
7. Migration, asylum, and border control
Does the system: (a) act as a polygraph or emotion-detection tool; (b) assess risks posed by an individual entering or having entered a member state; (c) examine asylum, visa, or residence permit applications; or (d) detect, recognise, or identify natural persons in the migration context?
→ Yes = Annex III §7. High-risk. Public sector + adjacent govtech vendors.
8. Administration of justice and democratic processes
Does the system: (a) assist a judicial authority in researching and interpreting facts and law and applying the law to a concrete set of facts; or (b) influence the outcome of an election or referendum, or the voting behaviour of natural persons (excluding tools whose outputs voters are not directly exposed to, such as campaign logistics software)?
→ Yes = Annex III §8. High-risk. Legal-tech AI that helps judges falls here; legal-tech AI that helps lawyers research generally doesn't.
9. Profiling triggering Article 22 GDPR
Does your AI system make solely automated decisions with legal or similarly significant effects on individuals — without meaningful human review?
→ Yes = even if you're not in Annex III, GDPR Article 22 applies and the system has GDPR obligations on top of whatever the AI Act says. This isn't an AI Act high-risk trigger by itself, but it's the question most teams forget to ask.
10. Combination triggers
Does the system combine multiple of the above (e.g. uses biometric data for hiring decisions, or scores creditworthiness using emotion analysis)?
→ Yes = high-risk under whichever Annex III subcategory fits. Combination increases the obligations and may shift the conformity assessment from Annex VI (self-certification) to Annex VII (third-party notified body) — biometrics in particular trigger this.
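If you want the ten answers in a diffable record rather than a doc, here is one way to encode them in Python. The field names are ours, not the Act's, and the function only captures the presumption, not the Article 6(3) escape hatch below:

```python
from dataclasses import dataclass

@dataclass
class AnnexIIIAnswers:
    # One boolean per checklist question. Names are illustrative, not legal terms.
    q1_biometrics: bool = False               # Annex III §1
    q2_critical_infrastructure: bool = False  # §2
    q3_education: bool = False                # §3
    q4_employment: bool = False               # §4
    q5_essential_services: bool = False       # §5
    q6_law_enforcement: bool = False          # §6
    q7_migration_border: bool = False         # §7
    q8_justice_democracy: bool = False        # §8
    q9_gdpr_art22: bool = False               # GDPR flag only; NOT an Annex III trigger
    q10_combination: bool = False             # combination of the above

def presumptively_high_risk(a: AnnexIIIAnswers) -> bool:
    """Questions 1-8 (and 10, which implies several of them) create the Annex III
    presumption. Question 9 triggers GDPR Article 22 duties, not the AI Act tier."""
    return any([
        a.q1_biometrics, a.q2_critical_infrastructure, a.q3_education,
        a.q4_employment, a.q5_essential_services, a.q6_law_enforcement,
        a.q7_migration_border, a.q8_justice_democracy, a.q10_combination,
    ])
```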
What "high-risk" actually triggers
If any answer above is yes, and the Article 6(3) escape hatch (below) doesn't help you, your system is high-risk. The provider obligations are listed in Articles 9–17 and the deployer obligations in Article 26. Headline list:
- Risk management system (Art. 9)
- Data governance (Art. 10)
- Technical documentation per Annex IV (Art. 11) — ~30–60 pages
- Automatic logging (Art. 12)
- Transparency to deployers (Art. 13)
- Human oversight design (Art. 14)
- Accuracy/robustness/cybersecurity testing (Art. 15)
- Quality management system (Art. 17)
- Conformity assessment (Art. 43)
- EU declaration of conformity + CE marking (Art. 47–48)
- Registration in the EU database (Art. 49)
- Post-market monitoring (Art. 72)
- Serious incident reporting (Art. 73)
- DPIA if personal data is involved, plus a FRIA for certain deployers under Article 27 (template here)
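If a yes sticks, that list converts directly into a tracking structure for your planning doc. A sketch (the status workflow is ours; descriptions mirror the list above):

```python
# Provider obligations for a high-risk system, keyed by AI Act article.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9":     "Risk management system",
    "Art. 10":    "Data governance",
    "Art. 11":    "Technical documentation (Annex IV)",
    "Art. 12":    "Automatic logging",
    "Art. 13":    "Transparency to deployers",
    "Art. 14":    "Human oversight design",
    "Art. 15":    "Accuracy/robustness/cybersecurity testing",
    "Art. 17":    "Quality management system",
    "Art. 43":    "Conformity assessment",
    "Art. 47-48": "EU declaration of conformity + CE marking",
    "Art. 49":    "Registration in the EU database",
    "Art. 72":    "Post-market monitoring",
    "Art. 73":    "Serious incident reporting",
}

tracker = {article: "not started" for article in HIGH_RISK_OBLIGATIONS}
```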
The application date is August 2, 2026. The 90-day playbook is here.
The Article 6(3) escape hatch
If you got a yes above, read this carefully. Article 6(3) says an AI system that would fall into Annex III is not considered high-risk if it does not pose a significant risk of harm, which the Act accepts when at least one of the following conditions is met:
- It performs a narrow procedural task, or
- It improves the result of a previously completed human activity, or
- It detects decision-making patterns or deviations from prior patterns and is not meant to replace or influence the previously completed human assessment without proper human review, or
- It performs a preparatory task to an assessment relevant for the purposes of Annex III.
And, critically, it must not profile natural persons: an Annex III system that performs profiling is always high-risk, whichever condition it meets.
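In code, the derogation is one expression: at least one of the four conditions holds, and the system does not profile. A sketch, with parameter names ours:

```python
def article_6_3_escape(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_with_human_review: bool,
    preparatory_task_only: bool,
    profiles_natural_persons: bool,
) -> bool:
    """True if the Article 6(3) derogation plausibly applies: at least one
    condition is met AND the system performs no profiling of natural persons.
    Profiling always keeps an Annex III system high-risk."""
    meets_a_condition = any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_with_human_review,
        preparatory_task_only,
    ])
    return meets_a_condition and not profiles_natural_persons

# The CV keyword filter below: procedural task, no profiling -> escapes.
assert article_6_3_escape(True, False, False, False, False)
# The 0-100 "fit score" below: profiling, so no condition can save it.
assert not article_6_3_escape(False, False, False, True, True)
```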
Real examples that qualify:
- A keyword filter that flags CVs containing "Python" so the recruiter can sort more easily. Procedural task. No profiling. Out of scope.
- A spell-checker for legal documents in a court system. Improves a previously completed human activity. No profiling. Out of scope.
- A duplicate-detector that flags identical applications submitted twice. Pattern detection. No profiling. Out of scope.
Real examples that don't qualify (despite teams hoping they would):
- A "fit score" generator that ranks candidates on a 0–100 scale. This is profiling.
- An AI that drafts the recruiter's notes from candidate answers. Substantive, not procedural; influences the assessment.
- An AI chatbot that conducts a structured intake conversation with applicants. Profiling.
The Commission has signalled that interactive Q&A and any kind of personalised scoring are not eligible for the derogation. If your system does either, plan as if Article 6(3) won't help.
What to do with the result
- All no → not in Annex III. If you ship a chatbot, read the chatbot post to confirm whether Article 50 applies; otherwise you're minimal-risk.
- At least one yes, but Article 6(3) clearly applies → document the derogation in writing now. Article 6(4) requires you to maintain documentation of why you concluded the derogation applies. A 1-page memo with the test, the answer, and a sign-off is enough.
- At least one yes, Article 6(3) doesn't apply → you're high-risk. Start the 90-day playbook.
- Unsure → run the free classifier. It uses this same logic with more nuance, produces a written rationale, and takes 3 minutes per system.
The most expensive mistake we see is teams running this checklist once, getting "no," and never running it again. AI systems evolve. The next model update or use-case extension can flip a no to a yes. Re-run the checklist on every significant change.
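One way to make the re-run stick: store each run's answers (for instance the AnnexIIIAnswers record sketched earlier) and diff them on every release. A generic sketch that works with any dataclass:

```python
from dataclasses import fields

def flipped_answers(before, after) -> list[str]:
    """Checklist fields that changed between two runs of the same dataclass
    (e.g. the AnnexIIIAnswers sketch above); any flip forces a re-review."""
    return [f.name for f in fields(before)
            if getattr(before, f.name) != getattr(after, f.name)]
```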
Automate what this post explains.
Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.