
The EU AI Act Deadline Is August 2, 2026 — Here's What SMB SaaS Teams Should Do in the Next 90 Days

Complair team · 10 min read

The date is August 2, 2026. That's when the bulk of the EU AI Act — Articles 6–27 for high-risk systems listed in Annex III, plus the transparency obligations in Article 50 — becomes applicable. If you fit any of the eight Annex III categories and haven't started, you have ~105 days from today.

This post is for the SMB SaaS team that just realised the Act applies to them and is deciding whether to panic, outsource, or sprint. Short version: sprint, don't panic, probably don't outsource.

What actually changes on August 2, 2026

The AI Act entered into force on August 1, 2024, but it applies in waves. The calendar looks like this:

  • Feb 2, 2025: Article 5 (prohibited practices) + Article 4 (AI literacy). Already in force.
  • Aug 2, 2025: Chapter V (general-purpose AI models) + Article 99 (penalties). Already in force.
  • Aug 2, 2026: Articles 6–27 (high-risk Annex III) + Article 50 (transparency). This post.
  • Aug 2, 2027: Annex I high-risk (AI as a safety component of regulated products). Next year.

On August 2, 2026, new high-risk Annex III systems placed on the EU market must meet every Chapter III obligation from day one. Systems already on the market before that date are grandfathered, but the protection is narrow: it lapses as soon as the system undergoes a significant change in design (Article 111(2)), at which point the full obligations attach. A model update, a new use case, or a major UX change can all end the grace period.

In practice, most SaaS teams will treat August 2, 2026 as the real deadline because their products ship updates every two weeks, and every ship is a "significant change" risk.

What "applies" actually means, operationally

Saying "Articles 6–27 apply" is the legal way of saying: by that date, a high-risk system must have all of the following in place, and a market-surveillance authority that asks for any of them must receive them within the statutory deadline (usually 15 days):

  1. A documented risk-management system (Article 9)
  2. Data-governance procedures for training/validation/test data (Article 10)
  3. Technical documentation matching Annex IV (Article 11) — roughly 30–60 pages
  4. Automatic logging over the system's lifetime (Article 12)
  5. User-facing transparency + instructions for use (Article 13)
  6. Human-oversight features built into the product (Article 14)
  7. Documented accuracy, robustness, and cybersecurity testing (Article 15)
  8. A quality management system (Article 17)
  9. A passed conformity assessment (Article 43)
  10. An EU declaration of conformity + CE marking (Articles 47–48)
  11. Registration in the EU database (Article 49)
  12. A post-market monitoring plan (Article 72)
  13. An incident-reporting process (Article 73)

A deployer using a high-risk system — even one you didn't build — also has obligations under Article 26: instructions compliance, human oversight, log retention (≥6 months), worker notification, information to affected individuals. In some public-sector contexts, a Fundamental Rights Impact Assessment (Article 27) must happen before deployment.
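That six-month log-retention floor is the kind of requirement that quietly breaks at the storage layer, where purge jobs tend to be configured once and forgotten. A minimal sketch of a retention guard, assuming a simple (timestamp, payload) log shape that is our invention, not anything the Act prescribes:

```python
from datetime import datetime, timedelta, timezone

# Article 26 requires deployers to keep automatically generated logs for at
# least six months (longer where other EU or national law says so). This
# helper splits records into "safe to purge" and "must retain" -- the
# six-month floor is a statutory minimum, so round the window up, never down.
RETENTION_FLOOR = timedelta(days=183)  # ~6 months, rounded up

def partition_logs(records, now=None):
    """records: iterable of (tz-aware timestamp, payload) tuples."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION_FLOOR
    purgeable = [r for r in records if r[0] < cutoff]
    retained = [r for r in records if r[0] >= cutoff]
    return purgeable, retained
```

Wire this in front of whatever deletes old events, and the compliance question "could a purge job have destroyed logs early?" has a one-function answer.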

The penalty structure (Article 99)

Three tiers, enforced by national market-surveillance authorities, appealable to national courts:

  • Tier 1: up to €35M or 7% of global annual turnover, whichever is higher. Trigger: violating Article 5 (prohibited practices).
  • Tier 2: up to €15M or 3% of global annual turnover, whichever is higher. Trigger: non-compliance with provider/deployer obligations (Articles 9–27 and related), notified-body failures, transparency failures.
  • Tier 3: up to €7.5M or 1% of global annual turnover, whichever is higher. Trigger: supplying incorrect, incomplete, or misleading information to authorities or notified bodies.

For SMEs the math flips: Article 99(6) caps each fine at whichever of the two figures is lower, so a Series-A SMB SaaS with €5M annual turnover is looking at 3% × €5M = €150k, not the €15M statute cap. On top of that, authorities must weigh SME size and economic viability when setting the amount (Article 99(7)).

The realistic near-term penalty is rarely a fine. It's a sales-block — enterprise buyers will ask for your EU declaration of conformity in the RFP. Saying "we're working on it" will lose deals starting Q2 2026.

The Digital Omnibus rumour

In early 2026 the Commission floated a "Digital Omnibus" package that would — might — delay parts of the AI Act, especially for SMEs. Reuters, Politico EU, and several Brussels-bubble newsletters have carried versions of this over the last few months.

As of April 2026: there is no published Commission proposal that delays August 2, 2026. Council conclusions have been vague. The European Parliament's internal positions are split. National regulators (France's CNIL, Germany's BfDI) have publicly said they will enforce as scheduled. Multiple member-state AI-coordination offices are already advertising compliance hotlines for August 2026.

Do not plan your roadmap around a delay that hasn't happened. Even if a delay lands in June 2026, the obligations will fire eventually, and every month of delay is a month of accrued technical debt that you'll have to unwind under time pressure. Treat August 2, 2026 as real.

A phased 90-day plan for SMB SaaS

Reading Articles 9–27 as a single checklist is overwhelming. Here's the sequence we've seen work at teams of 5–50 engineers.

Days 1–14: Inventory and classify

Before you can document a high-risk AI system, you need to know which of your features is one. Most SaaS teams discover they have more AI systems than they thought — every "smart suggestion", "automated match", or "Powered by GPT" feature is an AI system under Article 3(1).

Concrete tasks:

  • [ ] Build an AI system register. For each feature: name, purpose, inputs, outputs, model provider, human-oversight design, whether it's used in one of the Annex III categories.
  • [ ] Classify each one: unacceptable / high / limited / minimal. Use the free classifier for a first pass, then confirm with a lawyer for anything that comes back "high" or "unclear".
  • [ ] For each high-risk system, identify: are you the provider (you built or branded it) or the deployer (you use it)? Obligations differ significantly — see the checklist post.
  • [ ] Flag anything that hits Article 5 (prohibited practices). If you find one, stop selling it into the EU immediately; that's already illegal since February 2025.

Deliverable: a spreadsheet with one row per AI system and a tier label. Most teams end up with 6–12 rows.

Days 15–45: Document the high-risk systems

This is the slow part. Annex IV lists nine documentation sections for each high-risk system. Without a template, you'll lose three weeks formatting; with a template, it's about one week per system.

Concrete tasks:

  • [ ] Draft technical documentation (Annex IV) for each high-risk system. Sections: general description, detailed design, data requirements, monitoring/accuracy/robustness, risk-management system, change log, harmonised standards, EU declaration, post-market monitoring plan.
  • [ ] Draft the risk-management system (Article 9): one document per system describing how risks are identified, evaluated, mitigated, and reviewed throughout the lifecycle.
  • [ ] Draft the quality management system (Article 17): one document covering your SDLC, testing, change management, incident response, supplier management. If you have an ISO-27001 or SOC 2 report, 70% of this is already done — reuse.
  • [ ] Write instructions for use (Article 13): what the system does, expected accuracy, known limitations, required training for the operator, what it should not be used for.
  • [ ] Decide conformity path (Article 43): for most Annex III categories, self-certification under Annex VI (internal control) is available. For biometric systems, you need a notified body — identify one now because lead times are 4–9 months.

Deliverable: a docs/ai-act/ folder per high-risk system with at least 40–60 pages of documentation.
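If you want that folder structure enforced rather than remembered, a small scaffolding script does it. The nine section names below follow this post's paraphrase of Annex IV; check them against the regulation text before treating the list as authoritative:

```python
from pathlib import Path

# Annex IV documentation sections as paraphrased above -- verify the exact
# headings against the regulation before relying on this list.
ANNEX_IV_SECTIONS = [
    "01-general-description",
    "02-detailed-design",
    "03-data-requirements",
    "04-monitoring-accuracy-robustness",
    "05-risk-management-system",
    "06-change-log",
    "07-harmonised-standards",
    "08-eu-declaration",
    "09-post-market-monitoring-plan",
]

def scaffold(system_name, root="docs/ai-act"):
    """Create one folder per Annex IV section, each with a stub README."""
    base = Path(root) / system_name
    for section in ANNEX_IV_SECTIONS:
        d = base / section
        d.mkdir(parents=True, exist_ok=True)
        readme = d / "README.md"
        if not readme.exists():
            readme.write_text(f"# {section}\n\nTODO\n")
    return base
```

Run it once per high-risk system; the empty TODO stubs double as a progress tracker for the documentation sprint.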

Days 46–90: Instrument oversight and logging

Writing docs is cheap; changing the product is expensive. This is where engineering time goes.

Concrete tasks:

  • [ ] Add Article 12 logging: every inference, every decision, every human override, every model version, every input. Retain ≥6 months. For SaaS, this usually means a dedicated event bus — don't wedge it into your existing product analytics.
  • [ ] Add Article 14 human oversight to the UI: the person operating the system must be able to understand the output, override it, and stop it. For a CV-screening tool this means showing the reasoning, letting a recruiter override the score, and retaining that override in the logs.
  • [ ] Add Article 50 transparency disclosures: chatbot users must be told they're talking to AI, AI-generated content must carry a machine-readable marker (C2PA is the emerging standard).
  • [ ] Stand up a post-market monitoring plan (Article 72): a scheduled review — quarterly is common — where you look at log anomalies, user complaints, incidents, and update the risk register.
  • [ ] Stand up a serious-incident reporting process (Article 73): define the trigger criteria, the response team, and the reporting deadlines — 15 days for a serious incident, 2 days for a widespread infringement, 10 days where a death is involved.
  • [ ] Register each high-risk system in the EU database (Article 49) before go-live. Registration is free but can take a few business days to process.
  • [ ] Sign the EU declaration of conformity (Article 47) and apply CE marking (Article 48) where the product is sold as a physical good. For SaaS, the CE mark appears on the user-facing documentation.

Deliverable: a product that can demonstrate, for any inference: (a) the logs exist, (b) a human could have overseen it, (c) the system announced itself where required.
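What an Article 12 log record has to capture is concrete enough to sketch. A minimal append-only JSONL event writer — the field names are our suggestion, since the Act specifies what must be traceable, not a wire format:

```python
import hashlib
import json
import time

def log_inference(fh, *, system, model_version, inputs, output,
                  operator=None, human_override=None):
    """Append one inference event as a JSON line.

    Article 12 wants each decision traceable over the system's lifetime:
    which model version ran, on what input, producing what output, and
    whether a human intervened. Hashing the raw input keeps PII out of
    the log while still letting you match an event to a stored request.
    """
    event = {
        "ts": time.time(),
        "system": system,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "operator": operator,              # who was in the loop, if anyone
        "human_override": human_override,  # None, or the corrected output
    }
    fh.write(json.dumps(event) + "\n")
    return event
```

Writing overrides through the same function is the point: the log then shows not just that oversight was possible, but that it happened.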

What you can safely defer

A few things people stress about that you genuinely don't need by August 2, 2026:

  • Perfect model explainability. Article 13 requires transparency to the deployer, not a SHAP plot per prediction. Documented expected behaviour + accuracy bounds is enough.
  • Zero false positives. Article 15 requires accuracy "appropriate" for the intended purpose, documented. You don't need 100%, you need a documented threshold and a testing procedure.
  • A separate AI governance team. For teams under 50 engineers, an existing security/compliance lead owning this is fine. The Act doesn't require an AI-specific DPO-equivalent.
  • Translating everything into 24 EU languages. The declaration of conformity must be in a language acceptable to the relevant market-surveillance authority, and English is widely accepted in practice. Your instructions for use can usually stay in English, but check the member states you actually sell into.

When to bring in external help

Three cases:

  1. You're in biometrics (Annex III §1). Notified body engagement is complex, and the timeline is tight. An external consultant saves months.
  2. You're public-sector-deployed in law enforcement, migration, or justice. Article 27 FRIA is detailed; get it reviewed.
  3. You already have a paying EU enterprise customer asking for the documentation by Q3 2026. Their procurement team will ask questions that are easier to answer via an external assessor's letter than a founder's blog post.

For everything else — standard employment, credit, education, essential-services SaaS — most SMBs can do this in-house with a solid template and a Friday-afternoon review cadence. That is the core bet behind Complair: you don't need a consultant, you need a workspace that makes the 90 obligations tractable.

A short closing

The EU AI Act is unusually well-drafted for an EU regulation. The obligations are specific, the penalties are graduated, and the deadlines are stable. You can meet them. You just can't start in July 2026.

If you haven't classified your systems yet, run the free classifier — it takes 3 minutes and produces a written reasoning you can hand to a lawyer or a customer. If you find you're high-risk, the Annex III explainer covers what triggers what, and the checklist covers the decision tree.

The deadline is August 2, 2026. That's a specific Sunday. Don't be reading this on July 30.


Automate what this post explains.

Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.
