
Is My Chatbot High-Risk Under the EU AI Act?

Complair team · 8 min read

Short answer: probably not. The vast majority of customer-facing SaaS chatbots — the support bot on your website, the in-app assistant that explains pricing, the AI that helps a user draft a message — are limited-risk under the EU AI Act. Your obligation is transparency, not the full Annex III stack.

But "probably not" isn't a defensible position when an enterprise procurement team asks. This post is the decision tree.

The default classification: limited risk

Article 50(1) covers AI systems "intended to interact directly with natural persons." That's basically every chatbot. The obligation is straightforward: the user has to know they're talking to an AI, unless that's already obvious from the context.

That's it. No risk register, no Annex IV technical file, no conformity assessment, no CE marking, no EU database registration. Just disclosure. The full provider obligations of Articles 9–17 do not apply.

If your chatbot does normal customer-support things — answers product questions, helps with onboarding, recommends documentation, schedules calls — you're in this bucket. Skip to "What disclosure looks like" below.

When a chatbot becomes high-risk

A chatbot becomes high-risk when its function falls into one of the eight Annex III categories, regardless of the fact that it presents as a chat interface. The chat UI is a wrapper; the underlying use case is what gets classified.

The eight realistic ways a SaaS chatbot ends up high-risk (or, in one case, prohibited outright):

1. It triages job applicants

If your chatbot interviews candidates, ranks them, recommends them to recruiters, or makes any kind of routing decision based on their responses, you're in Annex III §4 (employment). High-risk. The fact that a human still makes the final hire doesn't matter; the chatbot is in the decision chain.

This is the single most common way a chatbot becomes high-risk. "AI screening assistants" marketed at HR teams are high-risk by default.

2. It assesses creditworthiness or insurance risk

Any chatbot that asks questions and outputs an approval, denial, score, or risk tier for credit or other essential financial services is in Annex III §5(b); the insurance equivalent, risk assessment and pricing for life and health insurance, is §5(c). High-risk either way.

A chatbot that answers "how does interest work?" is fine. A chatbot that asks for income and employment data and tells the user whether they qualify for a loan is high-risk.

3. It makes decisions about access to public services

Eligibility chatbots for housing assistance, unemployment benefits, public health programs, or other essential public services fall in Annex III §5(a). High-risk.

4. It evaluates students

Chatbots used to evaluate learning outcomes, allocate students to courses, or detect prohibited behaviour during tests are in Annex III §3. High-risk.

A chatbot that helps a student do their homework is not high-risk, even though it's used in education. The deciding factor is whether the chatbot's output affects the student's record or progression.

5. It identifies people biometrically

Voice-recognition chatbots that identify the speaker by matching a voice against a database of known voices are biometric identification systems under Annex III §1(a). High-risk. Note the carve-out, though: one-to-one verification whose sole purpose is confirming a claimed identity ("this is John Smith, account number verified by voice") is explicitly excluded from §1(a).

A chatbot that uses voice as an input modality but doesn't identify the speaker (it just transcribes for natural-language input) is not in this bucket.

6. It infers emotion in workplace or educational contexts

Article 5(1)(f) actually bans emotion-recognition systems in workplaces and educational institutions outright (with narrow medical/safety exceptions). If your chatbot is built for those contexts and infers emotion, it's not high-risk — it's prohibited. Different problem, worse fine bracket.

7. It's used by law enforcement or migration authorities

Any chatbot deployed by police, border agencies, or asylum authorities falls in Annex III §6 or §7. High-risk and politically charged.

8. It administers justice or supports democratic processes

Chatbots used by courts to assist in interpreting facts, applying law, or resolving disputes fall in Annex III §8(a). Chatbots that influence elections fall in Annex III §8(b). Both high-risk.

The Article 6(3) escape hatch (and why it rarely helps for chatbots)

Article 6(3) provides a narrow derogation: an AI system that falls under Annex III is not high-risk if it only:

  • performs a narrow procedural task, or
  • improves the result of a previously completed human activity, or
  • detects decision-making patterns or deviations from them without replacing or influencing the human assessment, or
  • performs a preparatory task for an assessment relevant to an Annex III use case,

and it does not profile natural persons.

For chatbots, the profiling override is usually fatal. A chatbot that conducts a structured intake conversation with a user is profiling them by definition — building a picture of their characteristics from their answers. The Commission's draft guidance is clear that interactive question-and-answer flows are "profiling" for these purposes.

Realistic outcome: an FAQ-style chatbot that retrieves predefined documentation answers is probably outside the high-risk bucket via the procedural-task derogation. A conversational chatbot that personalises responses is probably not eligible for the derogation, although it may still avoid high-risk classification simply because its function never matched Annex III in the first place.
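To make the logic of the last two sections concrete, here is a rough sketch in TypeScript of the decision tree: Annex III match first, then the Article 6(3) derogation, with the profiling override on top. All type and field names are ours, invented for illustration, not an official taxonomy.

```typescript
// Rough sketch of the classification flow described above. Illustrative only;
// the names below are not drawn from the Act itself.
type AnnexIIICategory =
  | "employment"
  | "credit-or-insurance"
  | "public-services"
  | "education"
  | "biometric-identification"
  | "law-enforcement-or-migration"
  | "justice-or-democracy";

interface ChatbotProfile {
  annexIIICategory: AnnexIIICategory | null; // does the *function* match Annex III?
  infersEmotionAtWorkOrSchool: boolean;      // Article 5(1)(f) territory
  onlyNarrowOrPreparatoryTask: boolean;      // the Article 6(3) conditions, collapsed
  profilesIndividuals: boolean;              // profiling overrides the derogation
}

type Classification = "prohibited" | "high-risk" | "limited-risk (Article 50)";

function classify(bot: ChatbotProfile): Classification {
  // Emotion inference in workplaces and schools is banned outright, not high-risk.
  if (bot.infersEmotionAtWorkOrSchool) return "prohibited";

  // No Annex III match: the default for customer-facing support chatbots.
  if (bot.annexIIICategory === null) return "limited-risk (Article 50)";

  // Article 6(3): a narrow procedural or preparatory task can drop out of
  // high-risk, but only if the system does not profile natural persons.
  if (bot.onlyNarrowOrPreparatoryTask && !bot.profilesIndividuals) {
    return "limited-risk (Article 50)";
  }

  return "high-risk";
}
```

Even the high-risk branch still owes the Article 50 disclosure; the classification decides what comes on top of it.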

What disclosure looks like under Article 50

If you're limited-risk (most chatbots), here's what the obligation actually requires:

  • The user must be informed they're interacting with an AI system unless that's obvious from context (a chat window labelled "AI Assistant" with a robot icon arguably counts as obvious; a chat window that looks identical to a human-staffed live chat does not).
  • The disclosure must happen at the start of the interaction, not after the user has already shared information.
  • The disclosure must be in plain language, accessible to the audience.
  • AI-generated audio, image, video, and text content must be machine-readably labelled if the system creates or substantially modifies that content (Article 50(2)).

Concrete sample wording for a customer-support bot:

👋 Hi! I'm Acme's AI Support Assistant. I can answer most product questions and help with common issues. If you'd like a human agent, type "human" anytime. Your conversation is logged for quality and compliance purposes.

That short notice at the start of every new conversation, plus a persistent label in the chat header, satisfies Article 50(1) for the typical SaaS chatbot.
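If the chatbot lives in a web widget, the whole obligation can be wired in at configuration time: a persistent header label plus a fixed first message shown before the user types anything. A minimal sketch in TypeScript; the config shape is ours, not any particular chat SDK's:

```typescript
// Minimal sketch of an Article 50(1) disclosure for a web chat widget.
// The config shape is illustrative; map it onto whatever chat SDK you use.
interface ChatWidgetConfig {
  headerLabel: string;        // persistent label in the chat header
  firstMessage: string;       // shown before the user has typed anything
  humanHandoffKeyword: string;
}

const config: ChatWidgetConfig = {
  headerLabel: "AI Assistant",
  humanHandoffKeyword: "human", // honour the escape hatch the greeting promises
  firstMessage:
    "👋 Hi! I'm Acme's AI Support Assistant. I can answer most product " +
    "questions and help with common issues. If you'd like a human agent, " +
    "type \"human\" anytime. Your conversation is logged for quality and " +
    "compliance purposes.",
};
```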

For Article 50(2) — AI-generated content labels — the practical implementation depends on the channel. For text generated and shipped via API, you can include a disclosure in the response payload. For images, the C2PA content credentials standard is becoming the de facto machine-readable label. For audio and video, watermarking is more involved; the Commission has signalled it will publish technical guidance later in 2026.
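For the text-over-API case specifically, one workable pattern is to attach provenance metadata to every generated message so downstream systems can detect AI-generated content programmatically. A sketch with field names we have made up for illustration; no single schema for text is mandated yet:

```typescript
// Sketch of a machine-readable "AI-generated" label on text delivered over an
// API (Article 50(2)). The field names are illustrative, not a standard.
interface GeneratedTextResponse {
  text: string;
  provenance: {
    aiGenerated: true;   // machine-readable flag
    system: string;      // which system produced the text
    model: string;       // model identifier, if you expose one
    generatedAt: string; // ISO 8601 timestamp
  };
}

const response: GeneratedTextResponse = {
  text: "You can change your billing plan under Settings > Billing.",
  provenance: {
    aiGenerated: true,
    system: "acme-support-assistant",
    model: "your-model-id",
    generatedAt: new Date().toISOString(),
  },
};
```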

What you do not need to do for a limited-risk chatbot

Just so the contrast is clear:

  • ❌ No risk management system (Article 9)
  • ❌ No data governance documentation (Article 10)
  • ❌ No 30-page technical file (Article 11 + Annex IV)
  • ❌ No automatic per-event logging (Article 12) for AI Act purposes — though GDPR may still apply
  • ❌ No formal human-oversight design (Article 14)
  • ❌ No conformity assessment (Article 43)
  • ❌ No CE marking (Article 47)
  • ❌ No EU database registration (Article 49)
  • ❌ No post-market monitoring plan (Article 72)

You may still owe GDPR obligations on the chatbot's data processing — a DPIA if it processes personal data on a large scale, an Article 13 privacy notice, a legal basis under Article 6 — but those are not AI Act obligations.

The two most common misclassifications

We see two errors over and over:

  1. "Our support bot is high-risk because it's customer-facing." No. Customer-facing is not a high-risk trigger. Annex III is — and customer support is not in Annex III. Limited-risk + Article 50 disclosure is the answer.
  2. "Our HR onboarding bot isn't high-risk because it doesn't make hiring decisions." Probably wrong. If the bot screens, ranks, schedules interviews, or decides who advances, it's in Annex III §4 (employment). The fact that humans rubber-stamp the output doesn't change the classification.

When in doubt, run the free classifier. It takes 3 minutes, asks the right questions, and gives you a written rationale you can share with a lawyer or a customer.

What to do this week if you ship a chatbot

The deadline for transparency obligations is August 2, 2026 — same as the high-risk obligations. Limited-risk is easier to comply with, but it's not optional.

Complair

Automate what this post explains.

Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.
