
DPIA Template for AI Systems: A Plain-English Walkthrough (GDPR Article 35 + AI Act Article 27)

Complair team · 10 min read

A Data Protection Impact Assessment (DPIA) is a structured risk evaluation you run before you start a high-risk processing operation. Skipping it is one of the cheapest ways to attract a seven-figure GDPR fine — Italian, Spanish, and German regulators have all published enforcement actions in the last 18 months where the missing DPIA was the headline finding.

For AI systems specifically, you need a DPIA more often than you think, and you can save real time by combining it with the AI Act's Fundamental Rights Impact Assessment (FRIA) under Article 27. This post is the practical template walkthrough.

When you need a DPIA

GDPR Article 35(1) requires a DPIA whenever processing is "likely to result in a high risk to the rights and freedoms of natural persons." Article 35(3) names three clear triggers, but the EDPB and national supervisory authorities have added many more in published guidance.

For AI systems, the triggers that fire most often:

  1. Profiling with legal or significant effects (Art. 35(3)(a)). Any AI scoring, ranking, or eligibility-determination system aimed at individuals. CV screening, credit scoring, insurance pricing, fraud detection.
  2. Large-scale processing of special-category data (Art. 35(3)(b)). Health, biometric, genetic, religion, political opinion, sexual orientation, criminal record. AI systems that touch any of these need a DPIA almost by default.
  3. Innovative use of new technology (EDPB criterion). The EDPB explicitly lists "use of artificial intelligence" as a factor that pushes a processing operation toward high-risk in conjunction with other factors.
  4. Combining or matching datasets from sources collected for different purposes. Common in AI training pipelines.
  5. Data processed on a large scale that could prevent data subjects from exercising their rights or using a service.

A useful heuristic: if your AI system is high-risk under the EU AI Act's Annex III, it almost certainly needs a DPIA under GDPR. The two regimes overlap heavily.

You should also check your national supervisory authority's mandatory DPIA list. Every member-state regulator publishes a list of processing operations that automatically trigger a DPIA in their jurisdiction. Common additions: AI-based scoring for insurance, biometric employee identification, IoT health tracking. The CNIL list (France), the DSK list (Germany), and the Garante list (Italy) are the three most-referenced and the most thorough.

What a DPIA must contain — Article 35(7)

The minimum legal contents are deceptively short:

(a) a systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller;

(b) an assessment of the necessity and proportionality of the processing operations in relation to the purposes;

(c) an assessment of the risks to the rights and freedoms of data subjects referred to in paragraph 1; and

(d) the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data.

That's it. Four sections. But each one expands considerably in practice if you want it to pass review.

The DPIA template structure that works

Here's the section-by-section structure we use in Complair's auto-generated DPIA. It covers Article 35(7), the nine high-risk criteria from the WP248 guidelines (drafted by the Article 29 Working Party and endorsed by the EDPB), and AI Act Article 27 if you're combining the two assessments (more on that below).

Section 1: Document control

Boring but mandatory. Owner, date, version, sign-off, review schedule. A DPIA without a review date is a one-time document; the regulator expects an iterative one. Set the review trigger to every 12 months, or sooner on any significant change to the system.
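
To make the review trigger concrete, here's a minimal sketch of that logic in code. The record fields and the `significant_change` flag are illustrative choices, not anything the Regulation prescribes:

```python
from datetime import date, timedelta

# Illustrative document-control record for a DPIA; field names are hypothetical.
dpia_control = {
    "owner": "Data Protection Officer",
    "version": "1.3",
    "approved_on": date(2025, 1, 15),
    "review_interval_days": 365,  # every 12 months...
    "significant_change": False,  # ...or on any significant change to the system
}

def review_due(doc: dict, today: date) -> bool:
    """A new review entry is due if the interval has elapsed or the
    underlying system changed significantly (cf. Art. 35(11))."""
    elapsed = today - doc["approved_on"] > timedelta(days=doc["review_interval_days"])
    return elapsed or doc["significant_change"]
```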

Section 2: Description of the processing (Art. 35(7)(a))

For an AI system, this section needs:

  • The system's name, purpose, and intended use case
  • The categories of personal data processed (with explicit flagging of special categories)
  • The data sources — your own, public, third-party, scraped, synthetic
  • The categories of data subjects affected
  • The recipients of the data — including any model providers, infrastructure vendors, sub-processors
  • Retention periods for inputs, outputs, training data, logs
  • Any cross-border transfers and the safeguard relied on (SCCs, adequacy decision, EU-US Data Privacy Framework)
  • The legal basis under Article 6 (and Article 9 if special-category data is involved)

The trick: if you already have an AI system register and a vendor questionnaire system, this section writes itself from your inventory data. That's the biggest argument for keeping the inventory current — the DPIA is downstream.
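As an illustration, here's a minimal sketch of what an inventory-backed processing record can capture for this section. All field names and values are hypothetical examples, not a Complair schema:

```python
# Illustrative Art. 35(7)(a) processing description, pre-filled from an inventory.
# Field names and values are hypothetical.
processing_record = {
    "system": "CV screening assistant",
    "purpose": "Generate a ranked shortlist of applicants for recruiter review",
    "data_categories": ["name", "email", "work history", "photo"],
    "special_categories": [],  # flag Art. 9 data explicitly, even when empty
    "data_sources": ["applicant upload (first party)"],
    "data_subjects": ["job applicants"],
    "recipients": ["model provider", "cloud infrastructure vendor"],
    "retention": {"inputs": "6 months", "outputs": "6 months", "logs": "12 months"},
    "transfers": [{"destination": "US", "safeguard": "EU-US Data Privacy Framework"}],
    "legal_basis": {"art_6": "6(1)(b) contract", "art_9": None},
}
```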

Section 3: Necessity and proportionality (Art. 35(7)(b))

This is where most DIY DPIAs get thin. Two specific questions you must answer in writing:

  • Necessity: is the AI system the minimum you need to achieve the legitimate purpose, or could you do it without AI / with less data / with anonymised data?
  • Proportionality: are the impacts on the data subject proportionate to the benefit? Quantify both sides where possible — "we need this to triage 40k applications a week which we couldn't do manually" is a real proportionality argument; "AI is more efficient" is not.

If you used Article 6(1)(f) legitimate interests as your legal basis, this section also has to do double duty as your Legitimate Interests Assessment. Easier to do them together.

Section 4: Risk assessment (Art. 35(7)(c))

Walk through each foreseeable risk and grade it on likelihood × severity. For AI systems specifically, the risks worth listing:

  • Discrimination through biased training data or biased outputs
  • Inaccuracy producing wrong decisions about individuals
  • Loss of human autonomy through over-reliance on automated decisions
  • Inability to exercise GDPR rights (access, erasure, objection) due to model architecture
  • Data breach through model inversion, training-data extraction, or prompt-injection
  • Function creep — the AI being used for purposes it wasn't designed for
  • Unintended re-identification of anonymised inputs
  • Vendor lock-in preventing safe exit and erasure

Each risk gets a score and a written description of the affected rights (privacy, non-discrimination, freedom of expression, fair treatment, right to remedy).
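A minimal sketch of that grading, including the residual score after mitigation that Sections 5 and 7 depend on. The 1-to-5 scales and the level thresholds are illustrative assumptions, not values from the Regulation or EDPB guidance:

```python
# Illustrative likelihood x severity grading on assumed 1-5 scales.
def grade(likelihood: int, severity: int) -> tuple[int, str]:
    score = likelihood * severity
    level = "high" if score >= 15 else "medium" if score >= 8 else "low"
    return score, level

# Inherent risk: discrimination through biased outputs, before mitigation.
print(grade(likelihood=4, severity=4))  # (16, 'high')

# Residual risk after quarterly fairness audits reduce the likelihood.
# If this still came back 'high', Art. 36 prior consultation applies (Section 7).
print(grade(likelihood=2, severity=4))  # (8, 'medium')
```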

Section 5: Mitigation measures (Art. 35(7)(d))

For each risk in Section 4, document what you're doing about it. Be specific:

  • "Bias mitigation: quarterly fairness audit across protected attributes; thresholds documented in §5.2; results reviewed by [named role]"
  • not: "We follow industry best practices on fairness"

The split between technical measures (encryption, access control, model evaluation, output filtering), organisational measures (training, role assignments, escalation procedures, incident response), and contractual measures (vendor DPAs, sub-processor approval workflows) is helpful for review.
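One way to keep that split auditable is to tie every mitigation back to a risk ID and a measure type, then check that nothing from Section 4 is left uncovered. A sketch with hypothetical IDs, measures, and role names:

```python
# Illustrative risk-to-mitigation mapping; IDs, measures, and roles are hypothetical.
mitigations = [
    {"risk": "R1-bias", "type": "organisational",
     "measure": "Quarterly fairness audit across protected attributes",
     "owner": "ML Lead", "evidence": "thresholds documented in section 5.2"},
    {"risk": "R5-breach", "type": "technical",
     "measure": "Access control and output filtering on model endpoints",
     "owner": "Security Engineering", "evidence": "access policy"},
    {"risk": "R8-lock-in", "type": "contractual",
     "measure": "Vendor DPA with exit and erasure clauses",
     "owner": "Legal", "evidence": "signed DPA"},
]

# Every risk listed in Section 4 should appear here at least once.
section_4_risks = {"R1-bias", "R5-breach", "R8-lock-in"}
uncovered = section_4_risks - {m["risk"] for m in mitigations}
assert not uncovered, f"Risks without documented measures: {uncovered}"
```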

Section 6: Human oversight arrangements

If you're also writing an AI Act FRIA, this section satisfies both regimes. Document:

  • The named role responsible for oversight
  • The intervention mechanism (UI control, override workflow, fallback path)
  • The review cadence
  • The training the overseer has received (Article 4 AI literacy plus role-specific)
  • The escalation path

Section 7: Consultation requirements (Art. 36)

If your residual risk after mitigation is still high, you must consult your supervisory authority before starting the processing. This is a hard legal requirement, often missed. The supervisory authority has up to eight weeks to respond, extendable by a further six weeks for complex processing (Art. 36(2)).

In practice, you flag this as a P0 action item with the contact details for your lead supervisory authority (your one-stop-shop in your member state of establishment).

Section 8: Sign-off

Named DPO opinion (mandatory if you have a DPO under Article 37). Named senior business owner. Date. Next review date.

The combined DPIA + FRIA approach

The AI Act's Article 27 introduces a Fundamental Rights Impact Assessment for high-risk AI systems deployed by bodies governed by public law, by private entities providing public services, and by deployers of the credit-scoring and life and health insurance pricing systems listed in Annex III. The FRIA's contents (Article 27(1)) are similar but not identical to the DPIA's: process description, period and frequency of use, categories of natural persons affected, foreseeable harms, human oversight, and a complaint mechanism.

The two assessments overlap by ~70%. Article 27(4) explicitly allows you to combine them into a single document if you're already running a DPIA. Almost everyone should — running two parallel processes for the same system is wasteful.

Complair's combined-DPIA-FRIA template merges the two structures into one document with cross-references to the relevant Articles. The added FRIA-specific sections are: explicit fundamental-rights mapping (non-discrimination, privacy, dignity, freedom of expression, effective remedy, democratic participation), affected-persons categories, and the complaint mechanism for affected individuals.
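For orientation, here's a rough sketch of how the template sections above map onto the two regimes. The mapping follows our template's structure and is illustrative; check the Article texts themselves for the authoritative wording:

```python
# Illustrative cross-reference map for the combined document; not an official table.
section_map = {
    "Section 2: processing description":      ["GDPR Art. 35(7)(a)", "AI Act Art. 27(1)"],
    "Section 3: necessity/proportionality":   ["GDPR Art. 35(7)(b)"],
    "Section 4: risk assessment":             ["GDPR Art. 35(7)(c)", "AI Act Art. 27(1)"],
    "Section 5: mitigation measures":         ["GDPR Art. 35(7)(d)"],
    "Section 6: human oversight":             ["GDPR Art. 35(7)(d)", "AI Act Art. 27(1)"],
    "FRIA add-on: fundamental-rights mapping": ["AI Act Art. 27(1)"],
    "FRIA add-on: complaint mechanism":        ["AI Act Art. 27(1)"],
}
```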

What a regulator looks for in review

Three things, in our experience reviewing DPIAs for compliance teams:

  1. Specificity. "We process personal data" is not specific. "We process applicant CVs containing name, email, work history, and uploaded photos for the purpose of generating a top-50 ranked shortlist for the recruiter" is specific.
  2. Measured risk. The risks must be graded on a defensible scale, and the residual risk after mitigation must be calculated, not asserted. Saying "after our measures the risk is low" without showing the math will not survive review.
  3. A real review cycle. A DPIA from 2023 with no review entries is an indication of process failure. Make sure your DPIA has a review log appended every quarter, even if the review is "nothing changed, no action."

Common DPIA mistakes for AI systems

  • Calling a risk register a DPIA. A risk register protects the business; a DPIA protects the data subject. They overlap but they're not the same.
  • Skipping the necessity test. "We need AI to do this faster" is not a necessity argument; "we couldn't do this at all without AI" is.
  • Treating the DPIA as one-and-done. It must be updated when the processing changes — new training data, new use case, new model, new vendor. Article 35(11).
  • No DPO sign-off when one is required. Under Article 37 you may need a DPO; Article 35(2) requires their input on the DPIA. Skipping this is a process violation in itself.
  • Forgetting Article 36 prior consultation. If residual risk is high, you owe the supervisory authority a heads-up before processing starts. This is the single most-missed step.

How Complair generates this document

When you flag an AI system in Complair as processing personal data and the risk classifier returns "high," the workspace auto-creates a combined DPIA + FRIA document pre-filled from your inventory: system description, data categories, recipients, retention, legal basis, and human-oversight design come straight from the system record. You fill in the necessity/proportionality and mitigation sections — those are the parts that need your judgement — and the document drops out as PDF or Word for sign-off.

If you'd rather draft from scratch, the template above is the structure that satisfies both Article 35(7) GDPR and Article 27 AI Act. Either way, the most important thing is to actually run the assessment before you ship the system. A retrospective DPIA is worth less than the time it took to write — and it doesn't immunise you from the fine for not having had one in the first place.

If you haven't yet inventoried your AI systems, the free classifier is the fastest way to figure out which ones need a DPIA. It takes 3 minutes per system and tells you whether you have an Article 35 trigger on your hands.


Automate what this post explains.

Inventory your AI systems, classify risk, and generate the documents you'd otherwise be writing by hand. 14-day free trial. No credit card.