What the EU AI Act actually does (in one minute)
The EU AI Act is the world’s first comprehensive AI law. It applies to public and private actors inside and outside the EU whenever an AI system is placed on the EU market or impacts people in the EU. It aims to protect health, safety and fundamental rights while supporting innovation and a functioning single market.
The law uses a risk-based approach with four tiers: unacceptable risk (banned), high risk (strict obligations), specific transparency risk, and minimal risk (no new rules).
Important Disclaimer: The author of this article is a technology researcher, not a lawyer. The information, checklists, and templates provided are for educational and informational purposes only and do not constitute legal advice. They are intended to serve as a starting point for understanding the technical and organizational requirements of the EU AI Act. You must consult with a qualified legal professional to assess your specific situation and ensure your organization’s full compliance with the law.
Table of Contents
1. Who This Guide Is For
2. Key dates you can’t miss
3. What Is Banned? A Detailed Look at Unacceptable Risk AI
4. High-risk AI: examples and core duties
5. The EU AI Act Compliance Checklist (do these in order)
6. Engineer-ready templates (View on GitHub)
7. Generative AI & content transparency (what your team must implement)
8. How standards will simplify audits
9. Quick FAQ
10. Implementation tips (to ship this quarter)
11. Legal notes & sources
12. Final EU AI Act Compliance Checklist you can pin in your tracker
13. From Compliance Hurdle to Competitive Advantage
14. Frequently Asked Questions
1. Who This Guide Is For
This EU AI Act Compliance Checklist is a practical, hands-on resource for the people who build and operate AI systems. It is specifically designed for:
- AI Providers & Developers: Teams building and placing high-risk AI systems on the EU market.
- AI Deployers: Organizations using high-risk AI systems in their operations (e.g., HR, finance, or public services).
- GPAI Model Developers: Teams creating general-purpose or generative AI models who need to understand their transparency and copyright obligations.
- Compliance & Governance Professionals: Anyone tasked with creating an AI governance framework and conducting an AI audit.
2. Key dates you can’t miss

- In force: 1 August 2024.
- Prohibited practices & AI literacy: apply from 2 February 2025.
- GPAI provider obligations (incl. transparency/copyright; systemic-risk duties for very capable models): apply from 2 August 2025.
- Most other rules (general application): 2 August 2026.
- Extended transition for certain high-risk AI that are safety components in regulated products: until 2 August 2027.
GPAI snapshot: Providers of general-purpose AI (including large generative models) have transparency, copyright and (for systemic risk models) risk-mitigation, testing and incident-reporting duties.
3. What Is Banned? A Detailed Look at Unacceptable Risk AI

The EU AI Act takes a firm stance against applications that are deemed to pose a clear threat to the safety, livelihoods, and fundamental rights of people. These practices are classified as creating an “unacceptable risk” and are therefore banned outright within the European Union. Understanding these prohibitions is a critical first step in any compliance journey, as they apply from 2 February 2025, ahead of most other provisions.
The Act explicitly bans the following eight practices:
- 1. Cognitive Behavioural Manipulation: This prohibits AI systems that use subliminal techniques beyond a person’s consciousness, or that purposefully manipulate or deceive individuals to materially distort their behavior in a way that is likely to cause physical or psychological harm. It is aimed at preventing the use of AI as a tool for digital puppetry that subverts free will.
- 2. Exploitation of Vulnerabilities: This ban targets AI systems that exploit the vulnerabilities of a specific group of persons due to their age, physical or mental disability, or a specific social or economic situation. The goal is to prevent predatory AI applications, such as an AI-powered toy that encourages dangerous behavior in a child or a system designed to exploit the financial desperation of a person in debt.
- 3. General-Purpose Social Scoring: This prohibits the use of AI for social scoring by both public authorities and private companies. It bans evaluating or classifying the trustworthiness of individuals based on their social behavior or predicted personality traits, especially when this score leads to detrimental treatment in contexts unrelated to where the data was originally collected.
- 4. Predictive Policing Based on Profiling: The Act bans the use of AI to assess or predict the risk of an individual committing a criminal offense based solely on profiling or on an assessment of their personality traits and characteristics. This is designed to prevent “pre-crime” style judgments that could lead to discrimination and violate the presumption of innocence.
- 5. Untargeted Scraping of Facial Images: This prohibits creating or expanding facial recognition databases by indiscriminately scraping facial images from the internet (e.g., social media platforms) or from CCTV footage. This practice is seen as a foundational tool for mass surveillance and a severe violation of privacy rights.
- 6. Emotion Recognition in the Workplace and Education: Given the significant power imbalance in these environments, the Act bans the use of AI to infer the emotions of individuals in workplaces and educational institutions. This prevents scenarios like an employer using AI to monitor employee engagement or a school using it to assess a student’s attention level, which are considered highly intrusive and scientifically unreliable. Narrow exceptions exist for medical or safety reasons, such as monitoring a pilot’s fatigue.
- 7. Biometric Categorisation Based on Sensitive Attributes: This bans AI systems that use biometric data (like a face scan) to deduce or infer an individual’s sensitive personal information, such as their race, political opinions, trade union membership, religious beliefs, or sexual orientation, directly combating the risk of automated discrimination.
- 8. Real-time Remote Biometric Identification by Law Enforcement: This is a general ban on the use of “live” facial recognition in publicly accessible spaces for law enforcement purposes. It comes with a very narrow and strictly regulated set of exceptions for high-stakes situations, such as searching for a missing child, preventing an imminent terrorist threat, or identifying a suspect of a serious crime. Any such use requires prior authorisation by a judicial or independent administrative authority.
4. High-risk AI: examples and core duties
Examples include AI safety components in critical infrastructure, education, employment, access to essential services (e.g., credit scoring), certain biometric uses, law enforcement, migration/border control, justice and democratic processes.
Before placing a system on the market, providers must perform a conformity assessment and prove compliance with a core set of requirements. This includes:
- Risk & Quality Management Systems
- Data Governance & Quality Controls
- Technical Documentation & Logging
- Transparency & Human Oversight
- Accuracy, Robustness & Cybersecurity
This assessment must be updated if the system is substantially modified.
Standards help: Harmonised standards being developed by CEN/CENELEC (deadline April 2025) will, once cited in the Official Journal, confer a presumption of conformity if followed.
5. The EU AI Act Compliance Checklist (do these in order)
1) Determine your risk category
Map every AI system’s intended purpose to the Act’s risk tiers and Annex III uses. This classification drives all downstream obligations.
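To make this first step concrete, here is a minimal, illustrative triage helper in Python. The keyword lists and the `triage` function are assumptions for illustration only; the actual classification must be made against Annex III and Article 6 with legal review.

```python
# risk_triage.py - illustrative first-pass triage, NOT a legal determination.
# Category keywords below are simplified placeholders.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # Article 5 prohibited practices
    HIGH = "high"                   # Annex III uses / regulated safety components
    TRANSPARENCY = "transparency"   # Article 50 disclosure duties
    MINIMAL = "minimal"             # no new obligations

PROHIBITED_PRACTICES = {"social scoring", "emotion recognition at work", "untargeted facial scraping"}
ANNEX_III_AREAS = {"credit scoring", "recruitment", "exam scoring",
                   "biometric identification", "border control", "law enforcement"}

def triage(intended_purpose: str) -> RiskTier:
    """Map an intended-purpose statement to a provisional risk tier."""
    p = intended_purpose.lower()
    if any(term in p for term in PROHIBITED_PRACTICES):
        return RiskTier.UNACCEPTABLE
    if any(term in p for term in ANNEX_III_AREAS):
        return RiskTier.HIGH
    if "chatbot" in p or "generates synthetic" in p:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(triage("credit scoring for consumer loan applicants"))  # RiskTier.HIGH
```

Treat the output as a prompt for legal review, not as the classification itself.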
2) Build the Section-2 technical baseline (high-risk)
For high-risk systems, implement the Articles 9–15 controls:
- Risk management system: establish, implement, document and maintain a continuous risk process.
- Data governance & quality: training, validation and test data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete, with appropriate data governance practices.
- Technical documentation: compile/maintain documentation before market placement so authorities can assess compliance.
- Logging/record-keeping: design with capabilities enabling logging and technical means to ensure it.
- Transparency & instructions for use: provide clear information for deployers on system characteristics, performance, and use.
- Human oversight: design to enable effective oversight by natural persons, including override/stop mechanisms (see the sketch after this list).
- Accuracy, robustness, cybersecurity: achieve levels appropriate for intended purpose and manage errors/attack resilience.
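As one way to picture the human-oversight control, here is a minimal sketch of an override/stop gate, assuming a hypothetical review queue; the `Decision` type, threshold and routing rule are illustrative, not prescribed by the Act.

```python
# human_oversight_gate.py - minimal sketch: route risky outcomes to a human,
# and let a human override always win. Names/thresholds are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "approve" / "reject"
    confidence: float     # 0.0 - 1.0

REVIEW_THRESHOLD = 0.80   # below this, a human must confirm the outcome

def requires_human_review(decision: Decision) -> bool:
    """Low-confidence or adverse outcomes go to the review queue."""
    return decision.confidence < REVIEW_THRESHOLD or decision.outcome == "reject"

def apply_decision(decision: Decision, human_override: Optional[str] = None) -> str:
    """Final outcome: a human override takes precedence; pending reviews block."""
    if human_override is not None:
        return human_override
    if requires_human_review(decision):
        raise RuntimeError("Blocked: decision is pending human review")
    return decision.outcome

# Example: an automated rejection cannot take effect without a reviewer.
d = Decision(subject_id="app-123", outcome="reject", confidence=0.91)
print(requires_human_review(d))  # True -> goes to the review queue
```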
3) Provider obligations (organizational)
Providers must operate a quality/risk management system, complete conformity assessment(s), and (for most Annex III systems) register in the EU database before market placement; public deployers must also register their use.
4) Deployer obligations (using high-risk AI)
Deployers have their own duties, including ensuring competent use and (in certain cases) performing a Fundamental Rights Impact Assessment (FRIA).
- Who must do a FRIA?
Deployers that are public authorities or Union bodies (or act on their behalf), deployers providing public or essential services (e.g., utilities, healthcare access), and deployers using high-risk AI for creditworthiness assessment or life/health insurance risk pricing must conduct a FRIA before first use.
5) Transparency for chatbots, deepfakes and generative AI
- People should be told when they are interacting with an AI system (unless obvious).
- Providers of systems (including GPAI) that generate synthetic audio/image/video/text must mark outputs in a machine-readable, detectable way (e.g., watermark/metadata/cryptographic provenance), subject to limited exceptions.
- Deployers must visibly disclose deepfakes and (if publishing newsy text) disclose AI-generated/manipulated text unless it underwent human editorial control.
6. Engineer-ready templates (View on GitHub)
These templates operationalize the core controls from Articles 9–15. To make them easy to use, you can view, copy, or download all the templates from our official EU AI Act Compliance Templates repository on GitHub.
Adapt the following examples to your specific stack and security posture.
A) Data Governance Policy (YAML): aligns with Article 10
```yaml
# data_governance_policy.yaml
version: 1.1
owner: ai-governance@company.com
scope:
  ai_system: "credit-scoring-hris-v2"
  intended_purpose: "Assess creditworthiness for consumer loans in the EU"
datasets:
  - name: training_main
    purpose: "Model training"
    pii: true
    lawful_basis: "Contract + legitimate interests"
    provenance:
      - "internal_core_db"
      - "bureau_scores_v2024_12"
    representativeness_check:
      methods: ["KS-test", "demographic parity by protected groups"]
      schedule: "quarterly"
    quality_controls:
      - "null/duplicate/outlier rules"
      - "label leakage scan"
      - "bias screen across protected attributes"
    retention:
      policy: "5y or shorter if purpose achieved"
      deletion_procedure: "data-lifecycle-runbook#L42"
  - name: validation_holdout
    purpose: "Model validation"
    pii: true
    stratified_sampling: true
  - name: test_online
    purpose: "Post-market performance verification"
    pii: pseudonymised
governance:
  roles:
    dpo: "dpo@company.com"
    data_steward: "ml-data@company.com"
    model_owner: "risk-ml@company.com"
  approvals_required:
    - "GDPR DPIA link (if applicable)"
    - "FRIA sign-off (if applicable)"
    - "Security review"
documentation:
  model_card: "s3://ai-docs/model-cards/credit-scoring-hris-v2/1.3/README.md"
  datasheets: "s3://ai-docs/datasheets/*"
logging_traceability:
  enabled: true
  schema_ref: "schemas/ai_decision_event.schema.json"
monitoring:
  bias:
    metrics: ["SPD", "EOD", "AUC_by_group"]
    alerting: "pagerduty:ai-risk"
  drift:
    metrics: ["PSI", "KL"]
    threshold_policy: "risk-policies/drift-thresholds.md"
  robustness:
    tests: ["adversarial/perturbation suite"]
security:
  supply_chain: ["SBOM required", "sign artifacts", "reproducible builds"]
  access_controls: ["least-privilege", "break-glass procedure"]
```
(Checks map to Article 10 requirements on data relevance, representativeness, quality and governance.)
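To show how the policy’s “bias screen” and its SPD monitoring metric could be computed in practice, here is a minimal sketch; the `approved` outcome column and `age_band` attribute are assumptions for illustration, and your protected attributes, thresholds and alerting will differ.

```python
# bias_screen.py - minimal sketch of a statistical parity difference (SPD) check,
# matching the "SPD" metric named in the monitoring section above.

import pandas as pd

def statistical_parity_difference(df: pd.DataFrame, outcome_col: str,
                                  group_col: str, privileged: str) -> dict:
    """SPD per group = P(positive | group) - P(positive | privileged group)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    baseline = rates[privileged]
    return {group: float(rate - baseline) for group, rate in rates.items()}

# Toy data with an assumed binary 'approved' outcome and 'age_band' attribute.
data = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "age_band": ["18-30", "18-30", "18-30", "31-50", "31-50", "31-50", "51+", "51+"],
})
spd = statistical_parity_difference(data, "approved", "age_band", privileged="31-50")
print(spd)  # alert (e.g. to pagerduty:ai-risk) if |SPD| exceeds your policy threshold
```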
B) Model Card Registry (Terraform on AWS): supports Articles 11–12
```hcl
# main.tf
provider "aws" {
  region = var.region
}

# Immutable, versioned storage for model cards & technical docs
resource "aws_s3_bucket" "model_cards" {
  bucket        = "ai-model-cards-prod"
  force_destroy = false
}

resource "aws_s3_bucket_versioning" "model_cards_ver" {
  bucket = aws_s3_bucket.model_cards.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Index of models and versions (for audits)
resource "aws_dynamodb_table" "model_index" {
  name         = "ai-model-index"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "model_id"
  range_key    = "version"

  attribute {
    name = "model_id"
    type = "S"
  }

  attribute {
    name = "version"
    type = "S"
  }
}

# CI output example: path to the current model card
output "model_card_path" {
  value = "s3://${aws_s3_bucket.model_cards.bucket}/credit-scoring-hris-v2/1.3/model_card.md"
}
```
C) Audit-ready Logging Schema (JSON): implements Article 12
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "ai_decision_event",
  "type": "object",
  "required": [
    "event_id", "timestamp", "ai_system", "model_version",
    "input_fingerprint", "decision", "confidence"
  ],
  "properties": {
    "event_id": {"type": "string", "description": "UUIDv4"},
    "timestamp": {"type": "string", "format": "date-time"},
    "actor": {"type": "string", "description": "system|human|api"},
    "ai_system": {"type": "string", "description": "system slug"},
    "model_version": {"type": "string"},
    "dataset_version": {"type": "string"},
    "input_fingerprint": {"type": "string", "description": "sha256 of normalized input"},
    "decision": {"type": "string"},
    "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    "features_used": {"type": "array", "items": {"type": "string"}},
    "overrides": {"type": "object", "description": "human oversight adjustments"},
    "latency_ms": {"type": "integer"},
    "errors": {"type": "array", "items": {"type": "string"}},
    "appeal_id": {"type": "string", "description": "link to user appeal/workflow"},
    "notes": {"type": "string", "maxLength": 2000},
    "pseudonymous_user_id": {"type": "string"}
  }
}
```
(Enables the “automatic recording of events (logging)” and traceability foreseen by Article 12.)
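Before an event is written to the audit log, it can be validated against this schema. Below is a minimal sketch, assuming the schema file lives at the `schema_ref` path from the data-governance policy and using the `jsonschema` package.

```python
# validate_event.py - minimal sketch: reject any decision event that does not
# conform to the ai_decision_event schema before it reaches the audit log.

import hashlib
import json
import uuid
from datetime import datetime, timezone
from jsonschema import validate, ValidationError

with open("schemas/ai_decision_event.schema.json") as f:
    SCHEMA = json.load(f)

event = {
    "event_id": str(uuid.uuid4()),
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "ai_system": "credit-scoring-hris-v2",
    "model_version": "1.3",
    "input_fingerprint": hashlib.sha256(b"normalized-input").hexdigest(),
    "decision": "reject",
    "confidence": 0.91,
}

try:
    validate(instance=event, schema=SCHEMA)
except ValidationError as err:
    raise SystemExit(f"Event rejected by audit-log schema: {err.message}")
print("event conforms to ai_decision_event schema")
```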
D) FRIA (Fundamental Rights Impact Assessment) — who must do it & what to capture
Who must complete this before using a high-risk AI system?
• Public authorities/Union bodies, and entities acting on their behalf.
• Private deployers providing public or essential services, and deployers using high-risk AI for creditworthiness or life/health insurance.
```text
FRIA ID:
Deployer (org/team):
High-risk AI system (Annex III use):
Intended purpose & context:
1) Rights potentially affected (with evidence):
2) Individuals impacted (incl. vulnerable groups):
3) Legal bases & applicable Union/national laws:
4) Data protection interplay (DPIA links; lawful bases):
5) Risk sources (model, data, deployment, human factors):
6) Mitigations (technical, organisational, human oversight):
7) Residual risks & proportionality/necessity assessment:
8) Stakeholder consultation (if applicable):
9) Monitoring plan (metrics, triggers, audit cadence):
Sign-off (legal, DPO, accountability owner; date):
```
7. Generative AI & content transparency (what your team must implement)

Provider side (incl. GPAI): Mark synthetic content (audio/image/video/text) in a machine-readable, detectable way (e.g., robust watermark, metadata, cryptographic provenance), chosen to be effective, interoperable, robust and reliable as far as technically feasible. Article 50; applies from 2 Aug 2026.
Deployer side: Label deepfakes visibly; disclose AI-generated/manipulated text when informing the public (unless subject to human editorial control). Article 50; applies from 2 Aug 2026.
Commission Q&A confirms these transparency duties for generative AI outputs.
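As a rough illustration of machine-readable marking (not a full provenance implementation such as C2PA, and not robust against re-encoding), the sketch below writes and reads an `ai_generated` flag in PNG metadata using Pillow; the key names and file paths are assumptions.

```python
# mark_output.py - minimal sketch: embed and read a machine-readable
# "AI-generated" marker via PNG text metadata (Pillow tEXt chunks).

from PIL import Image, PngImagePlugin

def mark_png_as_ai_generated(src: str, dst: str, system: str, version: str) -> None:
    """Copy the image, adding provenance metadata indicating synthetic origin."""
    img = Image.open(src)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", f"{system}/{version}")
    img.save(dst, pnginfo=meta)

def is_marked(path: str) -> bool:
    """Detect the marker from the PNG's text metadata."""
    return Image.open(path).text.get("ai_generated") == "true"

mark_png_as_ai_generated("render.png", "render_marked.png", "image-gen", "2.1")
print(is_marked("render_marked.png"))  # True
```

Production systems should pair metadata like this with a watermarking or cryptographic provenance scheme that survives format conversion.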
8. How standards will simplify audits
If you implement harmonised standards cited in the Official Journal, your high-risk system is presumed to conform to covered requirements. If standards lag, the Commission can adopt common specifications that also grant presumption of conformity for the covered parts.
The Commission has requested the European standardisation organisations (CEN/CENELEC) to deliver the initial standards programme, with an end-April 2025 deadline for the publication and endorsement pathway.
9. Quick FAQ
What systems are high-risk?
Annex III covers sensitive areas such as education, employment, essential services (e.g., credit), certain biometric uses, law enforcement, migration/border, and justice/democratic processes.
What does a provider need before launch?
Conformity assessment proving compliance with risk management, data governance, technical documentation, logging, transparency/instructions, human oversight, and accuracy/robustness/cybersecurity, and, for most Annex III systems, EU database registration.
Do deployers have to do a FRIA?
Only certain deployers. Public authorities and Union bodies (or those acting on their behalf), deployers providing public or essential services, and deployers using high-risk AI for creditworthiness assessment or life/health insurance risk pricing must conduct a FRIA before first use of the system.
When do deepfake labels apply?
Deployers must visibly disclose deepfakes; providers must mark AI-generated content in a machine-readable way; special text-disclosure rules apply for content informing the public.
10. Implementation tips (to ship this quarter)
- Tie your model card and datasheet locations into CI (see Terraform output) so every release drops new docs automatically (Articles 11–12).
- Make your FRIA a gating check in the deployer change-management workflow for high-risk uses (a minimal gate sketch follows this list).
- Adopt watermark/metadata once at the model or system layer — the Act allows either, as long as outputs are detectable.
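Here is a minimal sketch of such a FRIA gate, assuming a hypothetical YAML record of sign-offs kept alongside the deployment config; the file layout and field names are illustrative.

```python
# fria_gate.py - minimal CI gate sketch: block deployment of a high-risk system
# unless a signed-off FRIA record is present (file layout is an assumption).

import sys
import yaml  # pip install pyyaml

REQUIRED_SIGNOFFS = {"legal", "dpo", "accountability_owner"}

def fria_is_complete(path: str) -> bool:
    """True only if every required role has a dated sign-off in the record."""
    with open(path) as f:
        record = yaml.safe_load(f)
    signoffs = {s.get("role") for s in record.get("signoffs", []) if s.get("date")}
    return REQUIRED_SIGNOFFS.issubset(signoffs)

if __name__ == "__main__":
    fria_path = sys.argv[1] if len(sys.argv) > 1 else "compliance/fria/credit-scoring-hris-v2.yaml"
    if not fria_is_complete(fria_path):
        sys.exit("Deployment blocked: FRIA missing required sign-offs")
    print("FRIA gate passed")
```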
11. Legal notes & sources
- This guide summarizes obligations from Regulation (EU) 2024/1689 and the Commission’s official Q&A/briefings. Always read the articles that apply to your specific system and sector.
- The Commission and EU AI Office have also published Q&A and explainer materials clarifying timelines, prohibited practices, GPAI expectations, and standardisation.
12. Final EU AI Act Compliance Checklist you can pin in your tracker
- Map intended purpose → risk tier; check Annex III.
- For high-risk: Articles 9–15 controls implemented (risk, data, docs, logs, transparency, oversight, accuracy/robustness/cybersecurity).
- Provider: QMS + conformity assessment completed; apply harmonised standards/common specs for presumption of conformity; complete required database steps.
- Deployer: responsibilities assigned; FRIA performed where required.
- GPAI: transparency & copyright policies; systemic-risk models meet added testing/incident-reporting/cybersecurity duties.
- Content transparency: Article 50 marking/labelling/disclosure implemented (from 2 Aug 2026).
- Post-market monitoring, incident reporting, and change control ready.
13. From Compliance Hurdle to Competitive Advantage
The EU AI Act is more than a set of rules; it’s a framework for building the next generation of trustworthy AI. By embedding these principles of risk management, transparency, and robust governance into your development lifecycle, you are not just meeting a legal requirement, you are building safer, more reliable products that will earn the trust of the market. Use this checklist as your guide to turn a compliance obligation into a competitive advantage.
14. Frequently Asked Questions
1. Does the EU AI Act apply to my company if we are not based in the EU?
Yes, absolutely. The Act has an extraterritorial scope. It applies to any provider who places an AI system on the EU market or puts it into service in the Union, regardless of where the provider is established. It also applies if the output produced by your AI system is used within the EU.
2. What is the difference between an AI “Provider” and a “Deployer”?
A Provider is the entity that develops an AI system (or a general-purpose AI model) and places it on the market under its own name or trademark. A Deployer is a person or organization (like a company using it for HR) that uses a high-risk AI system under its own authority in a professional context. Both have distinct obligations under the Act.
3. Do I need a third-party audit for my high-risk AI system?
It depends. For most high-risk AI systems listed in Annex III (like those in employment or education), the provider can perform a “conformity assessment” based on their own internal controls and technical documentation. However, for AI systems used for remote biometric identification, a third-party conformity assessment by a Notified Body is mandatory.
4. What is the deadline for complying with the AI Act?
The Act has a staggered timeline. The rules on banned “unacceptable risk” AI practices apply from February 2, 2025. Obligations for General-Purpose AI (GPAI) models apply from August 2, 2025. The majority of rules for high-risk AI systems will apply from August 2, 2026.
5. What exactly is a Fundamental Rights Impact Assessment (FRIA)?
A FRIA is a mandatory assessment that certain deployers must conduct before putting a high-risk AI system into use. It requires them to describe how the system will be used, identify the categories of people affected, assess the specific risks to fundamental rights (like non-discrimination or privacy), and detail the measures they will take to mitigate those risks.
6. Are open-source AI models exempt from the AI Act?
Partially, but not entirely. Open-source models are generally exempt from many of the obligations (like extensive documentation requirements), unless they are classified as a General-Purpose AI (GPAI) model with systemic risk or are part of a high-risk AI system. In those cases, they must comply with the relevant rules.
7. How does the AI Act relate to GDPR?
The AI Act and GDPR are designed to work together. The AI Act sets rules for the safety and design of AI systems, while GDPR governs the lawful processing of personal data. If your high-risk AI system processes personal data, you must comply with both regulations. The AI Act reinforces GDPR principles by requiring things like data governance and human oversight.
8. What are the penalties for non-compliance?
The penalties are severe and are based on a percentage of the company’s total worldwide annual turnover from the preceding financial year. Fines can go up to €35 million or 7% of turnover for using banned AI, and up to €15 million or 3% for non-compliance with other high-risk obligations.
9. What is a “General-Purpose AI (GPAI) model with systemic risk”?
This is a special category for the most powerful and widely used GPAI models. A model is presumed to have systemic risk if the cumulative amount of computation used for its training was greater than 10^25 FLOPs. These models have additional obligations, including conducting adversarial testing, ensuring cybersecurity, and reporting serious incidents to the AI Office.
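For a rough sense of scale only (this uses the common 6 × parameters × training tokens engineering estimate, not the Act’s methodology), a back-of-the-envelope check against the 10^25 FLOPs presumption looks like this:

```python
# flops_estimate.py - rough sketch: estimate training compute with the common
# 6 * parameters * tokens approximation and compare to the Act's presumption.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs threshold that triggers the presumption

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"{flops:.2e} FLOPs")              # ~6.30e+24
print(flops >= SYSTEMIC_RISK_THRESHOLD)  # False: below the presumption threshold
```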
10. What does it mean to “mark” AI-generated content?
Providers of generative AI systems must ensure that synthetic audio, video, image, and text outputs are marked in a machine-readable format to indicate they are artificially generated (e.g., using a watermark or metadata). Deployers then have a transparency obligation to visibly disclose to users that content like deepfakes is artificial. This is designed to combat misinformation and ensure transparency.