Complair.
Article 12 · Article 26(6)

Runtime logs for high-risk AI systems — three ways to get it right.

Article 12 and Article 26(6) require automatic logs during the system's lifetime, kept for at least six months. Complair tracks the obligation; the logs themselves live in your own infrastructure. Here's how to set that up without paying for a dedicated SKU.

Art. 12 Art. 26(6) Art. 73 Last updated 17 April 2026
What the law says

The text, in plain English.

Two Articles, two audiences. If you build a high-risk system you owe Article 12; if you deploy one you owe Article 26(6). Either way, logs are the evidence.

Provider

Article 12 — Providers

  • High-risk AI systems must technically allow automatic recording of events ('logs') over the lifetime of the system. (Art. 12(1))
  • Logging capabilities must enable traceability appropriate to the intended purpose — what ran, when, on what input, with what outcome. (Art. 12(2))
  • For the specific high-risk systems in Annex III point 1(a), the logs must at a minimum cover the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identification of the natural persons involved in verifying the results. (Art. 12(3))
  • Logs must be accessible so providers and deployers can monitor the system's operation post-market. (Art. 12(2))
Deployer

Article 26(6) — Deployers

  • Deployers shall keep the logs automatically generated by the high-risk AI system, to the extent such logs are under their control. (Art. 26(6))
  • Logs must be retained for a period appropriate to the intended purpose of the system, and at least six months, unless Union or national law provides otherwise. (Art. 26(6))
  • Logs are the evidentiary basis for the monitoring obligation in Art. 26(5) and for serious-incident reporting in Art. 73 — if a regulator asks what happened, the logs are the answer.
  • If a deployer is a financial-services institution already subject to record-keeping rules under Union financial law, those rules are presumed to satisfy this obligation. (Art. 26(6), 2nd sub-paragraph)
Pattern A · Recommended default

Dump to your own bucket.

Shortest path, cheapest to operate, zero third-party risk. A small wrapper around every LLM call writes a JSON record to S3 (or any S3-compatible store: Cloudflare R2, Backblaze B2, MinIO). Lifecycle rules handle retention.

app/services/ai_logger.rb
# Wrap every LLM call so every inference is auditable.
require "aws-sdk-s3"
require "json"
require "securerandom"
require "time" # for Time#iso8601

class AiLogger
  BUCKET         = ENV.fetch("AI_LOGS_BUCKET")
  RETENTION_DAYS = 200 # six-month floor plus headroom; enforced by the bucket lifecycle rule, not in code

  def self.record(system:, input:, output:, user_id: nil, metadata: {})
    id  = SecureRandom.uuid
    key = "ai-logs/#{system}/#{Time.now.utc.strftime('%Y/%m/%d')}/#{id}.json"

    payload = {
      id:        id,
      system:    system,
      timestamp: Time.now.utc.iso8601,
      user_id:   user_id,
      input:     input,
      output:    output,
      metadata:  metadata
    }

    Aws::S3::Client.new.put_object(
      bucket:       BUCKET,
      key:          key,
      body:         JSON.dump(payload),
      content_type: "application/json"
    )
  end
end
Production checklist
  • Set the bucket's lifecycle policy to 200 days (the 6-month floor plus headroom for end-of-month rollovers).
  • Enable object versioning so an accidental delete doesn't lose evidence mid-retention window.
  • Turn on Object Lock (S3 Compliance mode or R2's equivalent) if you want tamper-evident storage that even your own admins can't overwrite.
  • Enable CloudTrail (AWS) or R2 audit logs for the bucket itself — so you can prove the logs weren't tampered with.
  • S3-compatible providers like Cloudflare R2 and Backblaze B2 work with the exact same SDK and are roughly an order of magnitude cheaper than AWS S3 at this scale.
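The lifecycle rule from the checklist can be applied once, in code, so the retention setting is version-controlled rather than clicked together in a console. A minimal sketch — the rule ID and `ai-logs/` prefix are illustrative and should match your own key layout; the actual API call (commented out) needs the `aws-sdk-s3` gem and `s3:PutLifecycleConfiguration` permission:

```ruby
# Build the lifecycle rule as plain data so it can be inspected before applying.
def ai_log_lifecycle_rules(prefix: "ai-logs/", days: 200)
  [
    {
      id:         "expire-ai-logs",       # illustrative rule name
      status:     "Enabled",
      filter:     { prefix: prefix },     # only touch the AI-log keys
      expiration: { days: days }          # > the Article 26(6) six-month floor
    }
  ]
end

# One-off setup call (requires `require "aws-sdk-s3"` and credentials):
# Aws::S3::Client.new.put_bucket_lifecycle_configuration(
#   bucket: ENV.fetch("AI_LOGS_BUCKET"),
#   lifecycle_configuration: { rules: ai_log_lifecycle_rules }
# )
```

Keeping the rule as data also makes it trivial to assert in a test suite that nobody quietly lowered the expiry below the legal floor.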
Pattern B · Open-source observability

Langfuse.

Langfuse is an open-source LLM observability platform that captures exactly the events Article 12 asks for — prompt, completion, tokens, latency, trace, user, metadata. A Langfuse Cloud EU region exists if you don't want to self-host. Drop-in SDKs for Python, JS, and most frameworks.

Complair has no commercial relationship with Langfuse. We recommend it because it's the shortest path to Article 12 coverage for teams who don't want to roll their own and don't want a proxy in the hot path.
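Langfuse's official SDKs cover Python and JS; from Ruby you would talk to its public REST ingestion endpoint directly. A minimal sketch — the `/api/public/ingestion` path, the `trace-create` event type, and the basic-auth scheme (public key as user, secret key as password) are taken from Langfuse's public API and should be verified against the current docs before relying on them:

```ruby
require "securerandom"
require "json"
require "net/http"
require "time"

# Build one trace-create event for Langfuse's batch ingestion endpoint.
def langfuse_trace_event(name:, input:, output:, user_id: nil)
  {
    id:        SecureRandom.uuid,
    type:      "trace-create",              # event type per Langfuse's public API
    timestamp: Time.now.utc.iso8601,
    body: {
      name:   name,
      input:  input,
      output: output,
      userId: user_id
    }
  }
end

# Ship a batch of events (endpoint path assumed from Langfuse's API docs).
def langfuse_ingest(events, host: ENV.fetch("LANGFUSE_HOST", "https://cloud.langfuse.com"))
  uri = URI("#{host}/api/public/ingestion")
  req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  req.basic_auth(ENV.fetch("LANGFUSE_PUBLIC_KEY"), ENV.fetch("LANGFUSE_SECRET_KEY"))
  req.body = JSON.dump(batch: events)
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end
```

Point `LANGFUSE_HOST` at the EU Cloud region or your self-hosted instance; the event shape is the same either way.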

Pattern C · Hosted proxy

Helicone.

Helicone sits as a proxy between your app and the model provider. Lowest integration effort — change the base URL, get logs. The tradeoff: every prompt and completion transits a third party, so the DPA, sub-processor disclosures, and data-residency questions sit squarely on the critical path.

Complair has no commercial relationship with Helicone. Pick this if integration speed is the dominant constraint; pick A or B if the DPA or data-residency posture is.
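The "change the base URL" integration is genuinely that small. A minimal sketch using only the standard library — the `oai.helicone.ai` proxy host and the `Helicone-Auth` header follow Helicone's OpenAI integration docs and should be confirmed there; the ENV defaults are placeholders so the request can be built without real keys:

```ruby
require "net/http"
require "json"

# Build an OpenAI chat-completion request routed through Helicone's proxy.
# Only the host and one extra header differ from a direct OpenAI call.
def helicone_chat_request(messages, model: "gpt-4o-mini")
  uri = URI("https://oai.helicone.ai/v1/chat/completions")
  req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  req["Authorization"] = "Bearer #{ENV.fetch('OPENAI_API_KEY', 'sk-placeholder')}"
  req["Helicone-Auth"] = "Bearer #{ENV.fetch('HELICONE_API_KEY', 'hl-placeholder')}"
  req.body = JSON.dump(model: model, messages: messages)
  [uri, req]
end

# Sending is a one-liner once the request is built:
# uri, req = helicone_chat_request([{ role: "user", content: "Hello" }])
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

Because every prompt and completion now transits that host, this is exactly the point where the DPA and data-residency review has to happen.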

How Complair fits in

We track the obligation. Your infrastructure holds the logs.

We aren't building observability — there are already two good answers above. Complair's job is to make sure the obligation shows up on the right checklist for the right system, with the right deadline, and that you can prove it's handled.

  • Classification identifies which of your systems fall under high-risk Annex III — so Article 12 and Article 26(6) only apply where they actually have to.
  • The checklist generator creates a 'Secure log retention' item for every high-risk deployer system, with the 6-month Article 26(6) rule baked in.
  • The evidence vault is where you upload your retention policy, sample log snapshots, and — if you use a third-party tool — the vendor DPA, so auditors have one place to look.
FAQ

The questions we get.

Do I need all three patterns?

No. Pick one. They all satisfy the same obligation. Start with Pattern A unless you already have an LLM observability tool — then use that one.

Does Complair store our runtime logs?

No, by design. The obligation is to retain your logs from your AI systems — not to hand them to another vendor. Complair tracks that you're doing it and holds the evidence.

What counts as a 'serious incident' in those logs?

Article 3(49) defines it as an incident leading to death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights obligations under Union law, or serious harm to property or the environment. When one occurs, the logs become the primary record for the Article 73 report.

Is six months always enough?

It's the Article 26(6) floor, not a ceiling. Sectoral rules in finance (MiFID II, DORA), health (MDR), and employment records can require longer — check your sector. When in doubt, keep logs for as long as you keep the decisions they informed.

Ready to ship

Ready to classify your systems
and close the log-retention item?

Start free, classify in minutes, and get a checklist that already knows whether Article 12 applies to each system.