1. Overview
Complair uses AI to accelerate compliance work — risk classification, document drafting, suggested questionnaire answers, and an in-product compliance assistant. We use third-party foundation models (primarily Anthropic's Claude) under commercial contracts that prohibit training on our prompts. We do not own, fine-tune, or train any AI model.
This page is our voluntary application of EU AI Act Article 13 (transparency to users of high-risk systems, applied as best practice even though our systems are limited-risk) and Article 50 (transparency obligations for limited-risk AI systems interacting with humans). It is also the public-facing piece of our internal AI register.
2. Risk classification (Article 6)
We have classified each AI feature in Complair against EU AI Act Article 6 and Annex III. None of our AI features are high-risk under Annex III. All current AI features qualify as limited-risk under Article 50, with transparency obligations.
Our reasoning, in summary:
- We do not perform any function listed in Annex III categories 1–8 (biometric ID, critical infrastructure, education evaluation, employment scoring, essential services, law enforcement, migration, justice).
- Our features perform either narrow procedural tasks (Article 6(3)(a) — summarisation, classification by predefined criteria) or preparatory tasks for human review (Article 6(3)(d)).
- We do not perform profiling of natural persons (Article 6(3) second paragraph). Our classifier evaluates AI systems, not natural persons.
Full classification reasoning, including the Annex III walkthrough for each feature, is available on request at privacy@complair.eu.
3. AI features in Complair
Every AI-powered feature, what it does, and the data it sees.
| Feature | What it does | Data sent to model | Plan |
|---|---|---|---|
| AI risk classifier | Classifies an AI system against EU AI Act Article 6 + Annex III; returns tier + reasoning. | System name, purpose, role, deployment context (user-supplied; no PII required). | All tiers |
| Document generator | Drafts compliance artefacts (ROPA entries, transparency notices, technical documentation). | Workspace company profile, AI system metadata, document template variables. | Growth+ |
| Compliance assistant | In-product chat for compliance questions, scoped to your workspace context. | User question, retrieved relevant records from the workspace, conversation history. | Scale+ |
| Suggested questionnaire answers | Suggests an answer for a buyer questionnaire question based on canonical answers + obligation context. | Question text, retrieved canonical answers, mapped obligation. | Scale+ |
All AI-generated content is labelled in the interface as such (Article 50 obligation).
4. Models we use
- Anthropic Claude (Sonnet, Haiku, Opus families). Used for classification, drafting, and assistant. Accessed via Anthropic's commercial API. Anthropic does not train models on our prompts under their commercial terms.
- Admin-selectable fallback (OpenAI GPT, Google Gemini). Workspace administrators on Enterprise plans may select an alternative provider. The same no-training contractual posture applies — we don't enable a model whose contract permits training on our prompts.
All model providers are listed in our sub-processor list.
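To illustrate the contractual gate described above, provider selection can be thought of as an allowlist keyed on a no-training guarantee: a provider is only selectable if its commercial terms prohibit training on prompts. This is a hypothetical sketch, not our production code; the names (`Provider`, `PROVIDERS`, `select_provider`) are invented for this page.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provider:
    name: str
    no_training_contract: bool  # commercial terms prohibit training on our prompts

# Hypothetical allowlist; authoritative provider details live in our sub-processor list.
PROVIDERS = {
    "anthropic": Provider("Anthropic Claude", no_training_contract=True),
    "openai": Provider("OpenAI GPT", no_training_contract=True),
    "gemini": Provider("Google Gemini", no_training_contract=True),
}

def select_provider(admin_choice: str) -> Provider:
    """Return the admin-selected provider, refusing any without a no-training contract."""
    provider = PROVIDERS.get(admin_choice)
    if provider is None or not provider.no_training_contract:
        raise ValueError(f"provider {admin_choice!r} is not enabled")
    return provider
```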
5. Data flow
What we send to AI models, what we don't, and what we do with the response.
- What we send. Only the data needed for the specific feature — usually short prompts assembled from your workspace records (AI system descriptions, questionnaire questions, generated-document templates).
- What we don't send. User passwords, API keys, audit logs, billing details, or any data not relevant to the prompt being processed; we apply data minimisation throughout.
- Logging. We log the prompt, the retrieved context, and the response for 90 days for debugging, abuse detection, and Article 12 logging compliance. Logs are scrubbed for sensitive fields.
- No training. Anthropic, OpenAI, and Google Gemini commercial APIs all contractually prohibit training on customer prompts. We have no models of our own to train.
- Retention. Model providers retain prompts for up to 30 days for abuse detection and then delete. We retain our own logs for 90 days.
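The minimisation and log-scrubbing steps above can be sketched as two small filters: one that keeps only the fields a feature needs before a prompt leaves our systems, and one that redacts sensitive keys before a log entry is written. This is an illustrative sketch under assumed names (`ALLOWED_FIELDS`, `SENSITIVE_KEYS`, `build_prompt`, `scrub_for_log`), not our actual pipeline.

```python
# Hypothetical field sets for the AI risk classifier feature.
ALLOWED_FIELDS = {"system_name", "purpose", "role", "deployment_context"}
SENSITIVE_KEYS = {"password", "api_key", "billing"}

def build_prompt(record: dict) -> dict:
    """Keep only the fields the feature needs; everything else never leaves our systems."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def scrub_for_log(entry: dict) -> dict:
    """Redact sensitive fields before the 90-day debug log entry is written."""
    return {k: ("[redacted]" if k in SENSITIVE_KEYS else v) for k, v in entry.items()}
```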
6. How to opt out
- Workspace-level disable. Admins can disable AI features for the entire workspace in Settings → AI Features. The product remains usable; AI-assisted actions are replaced with manual flows.
- Per-user disable. Individual users can disable AI suggestions in their personal preferences without affecting the workspace.
- Vendor-level disable. On Enterprise plans, admins can choose which model provider is used (Anthropic, OpenAI, Gemini) or disable all third-party model use entirely. With all providers disabled, AI features show a clear "AI disabled" notice.
- End-user notice. Wherever Complair generates content for end-users (e.g. published trust-centre pages), AI-generated portions are clearly labelled.
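The layered opt-outs above resolve in a simple order: the most restrictive setting wins, checked from broadest scope inward. A minimal sketch, with the function name (`ai_status`) and return strings invented for illustration:

```python
def ai_status(providers_enabled: bool, workspace_ai: bool, user_ai: bool) -> str:
    """Most restrictive setting wins; checks run from broadest scope inward."""
    if not providers_enabled:
        return "AI disabled"         # vendor-level disable (Enterprise)
    if not workspace_ai:
        return "manual flow"         # workspace-level disable
    if not user_ai:
        return "suggestions hidden"  # per-user preference
    return "AI active"
```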
7. Our Article 50 obligations to you
- Disclosure of AI interaction. We tell users at the point of interaction whenever they are interacting with AI rather than a human (Article 50(1)).
- Labelling of AI-generated content. AI-generated text in Complair is visually labelled in the UI (Article 50(4)).
- No deepfakes or manipulation. Complair does not generate synthetic media of natural persons or content designed to manipulate behaviour. Article 50(4) deepfake-labelling obligations don't apply because we don't generate the content type.
- Reasonable accommodations. Users with disabilities can request human review of any AI output without penalty. EAA accessibility obligations apply.
8. Change log
Changes to AI features, models, or risk classification, ordered most recent first.
- 2026-05-04 — Initial publication of AI transparency page. All current features classified as limited-risk under Article 50.
Material changes (new feature, new model provider, new use case) are noted here within 5 business days. Customers on Enterprise plans receive 30-day advance notice via email.
9. Contact
Questions about an AI feature, model usage, or to request the full classification reasoning: privacy@complair.eu.
Related: Sub-processor list · Security overview · DPA.