
Article 26 of the EU AI Act: What Deployers Actually Owe (and Why Most SaaS Teams Are Deployers)

Complair team · 10 min read

The EU AI Act splits responsibility between two roles: the provider (who built or branded the AI system) and the deployer (who uses it in a professional context). The provider obligations get most of the attention because they're heavier — risk management, technical documentation, conformity assessment, CE marking. But for the average SaaS company, the more relevant chapter is the one almost nobody reads first: Article 26.

If your company uses ChatGPT to draft marketing copy, runs a third-party CV-screening tool, or routes support tickets through an AI triage product, you're a deployer. If any of those tools fall into one of the Annex III high-risk categories — and many do — Article 26 applies to you, separately and independently from whatever your vendor is doing.

This post walks through what you owe.

Provider vs deployer: a fast clarification

Two examples to anchor the distinction:

  • You build a SaaS HR product with an embedded CV-screening feature you trained yourself. Your customers pay you to use it. You're the provider of the CV-screening AI system.
  • You use Workable's CV-screening feature inside your hiring funnel. You're the deployer; Workable is the provider.

Most SaaS companies are both — provider for whatever AI feature is in their product, deployer for whatever third-party AI tools they use internally. The two roles carry separate obligations and separate fine ceilings. Article 26 is the deployer one.

When does Article 26 actually apply?

Article 26 applies when all three of the following are true:

  1. You're a deployer (you use the system, you didn't build it).
  2. The AI system is high-risk under Annex III or Annex I.
  3. You're using it in the EU, or its output is used in the EU.

If any of those is false, Article 26 doesn't fire. A deployer of a limited-risk chatbot owes the Article 50 transparency obligations, not Article 26. A deployer of a minimal-risk tool owes nothing under the AI Act (though GDPR may still apply).
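
If it helps to see the three-part gate as code, here's a minimal sketch — the function and argument names are ours, not the regulation's:

```python
# Hypothetical sketch of the three-part applicability test above.
def article_26_applies(is_deployer: bool, is_high_risk: bool, eu_nexus: bool) -> bool:
    # All three conditions must hold; any False takes Article 26 off the table.
    return is_deployer and is_high_risk and eu_nexus

# Example: a third-party CV screener (deployer), used for hiring
# (Annex III high-risk), on EU-based candidates (EU nexus).
assert article_26_applies(True, True, True)
assert not article_26_applies(True, False, True)  # limited/minimal risk: Art. 26 out
```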

The applicable date is August 2, 2026 for new Annex III deployments (Annex I systems embedded in regulated products follow a year later, on August 2, 2027). Systems already in active use before the applicable date get a narrow grace period — but it evaporates the moment the system undergoes a "significant change," which a routine vendor update can easily trigger.

The seven Article 26 obligations, plain English

Article 26 has 12 numbered paragraphs. The seven that matter operationally:

1. Use the system within the provider's instructions (Art. 26(1))

You must use the high-risk AI system in accordance with its instructions for use. The provider must give you those instructions under Article 13 — they include the system's intended purpose, accuracy expectations, foreseeable misuse, and human-oversight requirements.

What to do: read them. Keep a copy. If you use the system for something the instructions don't cover, you may have inadvertently become the provider yourself under Article 25(1)(c) — at which point the heavy provider obligations apply to you instead.

2. Assign trained human oversight (Art. 26(2))

You must "assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support."

What to do: name the people. Document their training. Make sure they have actual authority to override the system — not just a job title that says they could. A note in your wiki saying "X owns AI oversight" plus a signed AI literacy training record is the minimum viable evidence.
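
As a sketch of what that minimum viable evidence could look like as a structured record — the field names are our own convention, nothing here is mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OversightAssignment:
    system_name: str          # the high-risk AI system being overseen
    overseer: str             # a named natural person, not a team alias
    training_completed: date  # date on the signed AI literacy training record
    can_override: bool        # documented authority to stop or override outputs

record = OversightAssignment(
    system_name="Workable CV screening",
    overseer="Jane Doe",                   # illustrative name
    training_completed=date(2026, 3, 15),  # illustrative date
    can_override=True,
)
```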

3. Ensure input data is appropriate (Art. 26(4))

To the extent you control the input data, you must make sure it's "relevant and sufficiently representative in view of the intended purpose."

What to do: if you're feeding customer records into a high-risk system, you're responsible for the data quality. This is the deployer-side mirror of the provider's Article 10 obligations. In practice: keep a record of where the input data came from, how it was sampled, and what its known biases are.
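
One way to make "sufficiently representative" checkable is a periodic distribution comparison against a reference population. A sketch — the 10% tolerance is an illustrative choice of ours, not a regulatory figure:

```python
from collections import Counter

def distribution(rows, key):
    # Share of each category value among the input records.
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def flag_skew(inputs, reference, key, tolerance=0.10):
    # Flag categories whose observed share drifts from the reference
    # population by more than the tolerance.
    observed = distribution(inputs, key)
    return [
        (category, abs(observed.get(category, 0.0) - expected))
        for category, expected in reference.items()
        if abs(observed.get(category, 0.0) - expected) > tolerance
    ]

# Example: candidate records fed into a CV screener vs. the applicant pool.
candidates = [{"gender": "f"}, {"gender": "m"}, {"gender": "m"}, {"gender": "m"}]
print(flag_skew(candidates, {"f": 0.5, "m": 0.5}, "gender"))  # [('f', 0.25), ('m', 0.25)]
```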

4. Monitor the system in operation (Art. 26(5))

You must monitor the system for issues and inform the provider of any incidents or risks observed in normal use.

What to do: spot-check outputs. Review user complaints. Maintain a feedback channel to the provider's support or trust team. When you see something off — a discriminatory output, an unexplained error rate spike, an obvious accuracy regression — write it down and tell them.
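
The "write it down" half can be as simple as an append-only incident log. A minimal sketch, with fields of our own choosing:

```python
import json
from datetime import datetime, timezone

def log_incident(system: str, observation: str, severity: str,
                 path: str = "ai_incidents.jsonl") -> None:
    # Append one JSON line per observation; flip the flag once the
    # provider has been notified through their feedback channel.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "observation": observation,
        "severity": severity,
        "reported_to_provider": False,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_incident("CV screener", "error-rate spike on non-EU degree formats", "high")
```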

5. Retain logs for at least 6 months (Art. 26(6))

The logs the system generates automatically under Article 12 — per-event records of its operation — must be retained by you, the deployer, to the extent they're under your control, for at least six months unless EU or member-state law requires longer.

What to do: this is the obligation everyone underestimates. Six months of per-inference logs for an AI system processing 1M requests a day is non-trivial storage. Make sure your logging configuration captures what Article 12 requires, and your retention policy doesn't auto-delete in 90 days. Ask your provider for the Article 12 logging schema — they should publish it.
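
To size the problem, a back-of-envelope estimate — the 2 KB-per-event figure is an assumption; measure your own schema:

```python
events_per_day = 1_000_000   # the 1M-requests-a-day system above
bytes_per_event = 2_048      # assumed average size of one Article 12 log entry
retention_days = 183         # six months, the Article 26(6) floor

steady_state_gb = events_per_day * bytes_per_event * retention_days / 1e9
print(f"~{steady_state_gb:,.0f} GB of logs on hand at steady state")  # ~375 GB
```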

6. Inform workers and their representatives before deployment (Art. 26(7))

If the high-risk system will be used in the workplace — and "workplace" is interpreted broadly — you must inform affected workers and their representatives before rolling it out. Member states may impose stricter notification rules; check your local labour law.

What to do: write a one-pager describing what the system does, what data it processes, what decisions it influences, and how to escalate concerns. Distribute it. Keep proof of distribution. If you have a works council or employee representatives, they should review before launch, not after.

7. Inform affected individuals (Art. 26(11))

If the high-risk system makes decisions or assists decisions concerning individuals — applicants, customers, students, patients — those individuals must be informed they're subject to a high-risk AI system.

What to do: a clear notice in the user-facing flow at the point the AI runs. "This application is reviewed using an automated screening system" plus a link to a longer page covering what data is processed, what factors influence the decision, and how to request human review. This sits at the intersection of AI Act Article 26(11) and GDPR Articles 13/14 + 22 — a single combined notice covers both.

The two extras for public-sector deployers

If you're deploying a high-risk AI system as a public authority — or as a private body delivering a public service — you also owe:

  • Fundamental Rights Impact Assessment (Article 27) — before first deployment, document the categories of natural persons affected, the harm they could suffer, the human-oversight measures, and the complaint channels. This combines cleanly with a GDPR DPIA into a single document.
  • EU database registration (Article 49(3)) — register the deployment in the public EU database before going live.

Most B2B SaaS companies aren't deploying to the public sector — they're selling to private businesses who deploy. But if your end-customer is a government agency, you'll be asked for a FRIA-shaped document, even if the obligation legally sits with them.

Two flags that turn you from deployer into provider

Two specific actions move you out of the deployer chapter and into the much heavier provider chapter:

  1. You put your name or trademark on the high-risk system (Art. 25(1)(a)). Reselling Workable as "AcmeHire" makes you the provider of AcmeHire, with full Article 9–17 obligations.
  2. You make a substantial modification to the system, or use it for a purpose other than what the original provider intended (Art. 25(1)(c)). Fine-tuning a model for your own use case, or using a fraud detector to make hiring decisions, both flip you into provider status.

If either applies, stop reading this post and read the provider-focused decision tree.

Penalty exposure

Article 26 violations sit in the Tier 2 fine bracket under Article 99: up to €15M or 3% of global annual turnover, whichever is higher. National authorities are required to take SME size into account, but the statutory ceiling is what the headlines will report.
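
The arithmetic of "whichever is higher" is worth internalizing: the percentage only dominates above €500M in turnover.

```python
def tier2_ceiling(turnover_eur: float) -> float:
    # Article 99 Tier 2: EUR 15M or 3% of worldwide annual turnover,
    # whichever is higher.
    return max(15_000_000, 0.03 * turnover_eur)

print(tier2_ceiling(200_000_000))    # 15000000 -> the flat EUR 15M floor applies
print(tier2_ceiling(1_000_000_000))  # 30000000.0 -> 3% dominates above EUR 500M
```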

The realistic near-term risk is rarely the fine itself. It's the lost enterprise deal when a procurement team asks for your deployer documentation in an RFP and you don't have it. As of mid-2026, every German enterprise we've talked to has Article 26 evidence requests in their vendor onboarding template.

A pragmatic 30-day plan for deployers

Before you do anything else:

  • Day 1: list every third-party AI tool your company uses. Include the obvious ones (ChatGPT, Copilot, Notion AI, Gong, Pylon) and the embedded ones (Salesforce Einstein, HubSpot AI features, Stripe Radar). A minimal inventory sketch follows this list.
  • Day 2–3: classify each one. Our free classifier handles this. Most internal tools are limited or minimal risk; flag any that touch hiring, credit, education, or essential services for closer review.
  • Day 4–10: for each high-risk tool, request the provider's Article 13 instructions for use. Most legitimate vendors publish these on a trust page or hand them out under NDA. If a vendor cannot produce instructions, you have a problem — note it in your risk register.
  • Day 11–17: write your own internal documentation for each high-risk deployment: who's the named human overseer, what's the input-data control plan, where are the logs stored, what's the retention period.
  • Day 18–25: send the vendor questionnaire to providers who haven't published trust documentation. Track responses.
  • Day 26–30: distribute worker and affected-individual notices for any high-risk system used internally or customer-facing.
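
The Day 1 inventory can start as a single CSV — one row per tool, with the risk_class column left for Day 2–3. Names and values here are illustrative:

```python
import csv

TOOLS = [
    # (name,          vendor,       used_for,           risk_class)
    ("ChatGPT",       "OpenAI",     "marketing drafts", "unclassified"),
    ("CV screening",  "Workable",   "hiring funnel",    "unclassified"),
    ("Einstein",      "Salesforce", "lead scoring",     "unclassified"),
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "vendor", "used_for", "risk_class"])
    writer.writerows(TOOLS)
```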

Most teams of 10–50 people finish this in about three calendar weeks, working around their day jobs.

Where Complair fits

Complair is built around the deployer workflow specifically because most SaaS teams are deployers first and providers second. The vendor-questionnaire feature is the Article 26 evidence-collection workflow. The audit log is the per-system human-oversight evidence. The DPIA generator handles Article 26's GDPR overlap. The compliance assistant answers "is this an Article 26 obligation or an Article 9 obligation" so you don't have to read the regulation to know.

If you're a deployer of one or more high-risk systems and you're staring at August 2, 2026, the cheapest path through is: classify with the free tool, send vendor questionnaires, document your oversight, retain your logs. That's most of Article 26. The rest is paperwork that shouldn't take more than a week.

