With €7.1 billion in cumulative GDPR fines, a landmark CJEU ruling confirming the right to explanation, and the EU AI Act’s high-risk deadlines in August 2026, enterprises deploying AI face an expanding compliance landscape. Here’s what you need to know.
When the GDPR took effect in 2018, most enterprises were still running rule-based automation. Eight years later, AI agents are autonomously processing healthcare claims, screening job applicants, and scoring credit risk — and regulators have spent the intervening years building the enforcement apparatus to hold them accountable. As of January 2026, cumulative GDPR fines have reached €7.1 billion, with approximately €1.2 billion issued in 2025 alone. Data breach notifications surged 22% last year, averaging 443 per day for the first time.
For organizations deploying AI in regulated industries, GDPR compliance isn’t a checkbox exercise. It’s an ongoing architectural requirement that shapes how you collect data, train models, make decisions, and explain them. Here’s what you need to know in 2026.
The GDPR Articles That Matter Most for AI
While the entire regulation applies, seven provisions have outsized relevance for AI systems:
- Article 5 (Core Principles) — Data minimization, purpose limitation, and accuracy requirements constrain what data you can collect and how you can use it for AI training
- Article 6 (Lawful Basis) — Every piece of personal data processed by your AI needs a legal justification: consent, contract, legitimate interest, or another lawful basis
- Article 9 (Special Categories) — Health, biometric, genetic, and other sensitive data require explicit consent or specific legal authorization before AI processing
- Articles 13–15 (Transparency & Access) — Individuals have the right to “meaningful information about the logic involved” in automated decisions affecting them
- Article 22 (Automated Decision-Making) — Individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects
- Article 25 (Data Protection by Design) — Privacy protections must be built into AI systems from the outset, not bolted on after deployment
- Article 35 (Data Protection Impact Assessments) — Most AI systems processing personal data require a DPIA before deployment
Article 22: The Automated Decision-Making Rule
Article 22 is the provision most directly relevant to AI agents. It gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects (like denying a loan) or similarly significant effects (like rejecting a job application).
Automated decision-making is permitted only when:
- It’s necessary for entering into or performing a contract
- It’s authorized by EU or member state law with appropriate safeguards
- The individual has given explicit consent
Even when one of these exceptions applies, organizations must implement “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention.”
The landmark CJEU SCHUFA ruling (Case C-634/21) expanded Article 22’s reach significantly. The court held that a credit reference agency creating credit repayment probability scores through automated processing constitutes automated individual decision-making under Article 22 — even when the score is passed to a separate lender who makes the final decision. This means the Article 22 obligations fall on the entity running the algorithm, not just the entity acting on its output.
The Right to Explanation Is Now Real
For years, legal scholars debated whether GDPR truly provides a “right to explanation” for automated decisions. That debate is now settled.
In February 2025, the CJEU issued a landmark ruling in C-203/22 (Dun & Bradstreet Austria) that confirmed organizations must provide data subjects with sufficient information about “the procedure and principles actually applied” in automated decision-making. Crucially, the court ruled that organizations cannot simply invoke trade secrets to deny individuals access to this information. Where disclosure might compromise trade secrets, the matter must be submitted to a supervisory authority or court for balancing — but the right of access cannot be excluded as a rule.
In practical terms, this means your AI system must be able to explain:
- Which personal data were used in the decision
- How the decision was reached (the logic, not the source code)
- Enough detail for the individual to meaningfully challenge the outcome under Article 22(3)
If your AI operates as a black box that can’t articulate its reasoning, you have a compliance problem.
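What an "explanation" has to contain can be made concrete as a data structure. The sketch below is a hypothetical schema (the class and field names are ours, not from any regulation or standard) showing the minimum a decision record needs to satisfy the three points above:

```python
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    """Record of one automated decision (hypothetical schema)."""
    decision_id: str
    outcome: str                   # e.g. "loan_denied"
    data_used: dict[str, object]   # which personal data fed the decision
    logic_summary: str             # the procedure and principles applied
    key_factors: list[str]         # factors the individual can contest
    human_contact: str             # route for Article 22(3) intervention

def render_explanation(e: DecisionExplanation) -> str:
    """Produce a plain-language explanation a data subject can act on."""
    lines = [
        f"Decision {e.decision_id}: {e.outcome}",
        f"How it was reached: {e.logic_summary}",
        "Personal data used: " + ", ".join(e.data_used),
        "Main factors: " + "; ".join(e.key_factors),
        f"To request human review or contest this decision, contact: {e.human_contact}",
    ]
    return "\n".join(lines)
```

The point of the design is that the explanation is generated from structured fields captured at decision time, not reconstructed after the fact; if your system cannot populate these fields, it cannot explain its decisions.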
The EU AI Act Changes the Game in August 2026
GDPR no longer stands alone. The EU AI Act introduces a parallel regulatory framework that significantly expands obligations for AI systems. Here’s how the two interact:
Prohibited AI practices already took effect on February 2, 2025, banning manipulative AI, social scoring by public authorities, emotion recognition in workplaces, and real-time biometric identification in public spaces.
High-risk AI system obligations take effect on August 2, 2026. AI used in HR recruitment, credit scoring, healthcare, insurance, and law enforcement must comply with conformity assessments, bias testing, record-keeping, transparency, human oversight, and staff training requirements. Fines reach up to 7% of global revenue — potentially exceeding GDPR penalties.
Most significantly, Article 86 of the AI Act creates a new right to explanation that goes beyond GDPR Article 22. While Article 22 only covers decisions made “solely” by automated means, AI Act Article 86 covers any decision where a high-risk AI system’s output produces legal effects — even when a human is in the loop. This closes the “human-in-the-loop loophole” that many organizations have relied on to avoid Article 22 obligations.
The European Commission’s Digital Omnibus Proposal (published November 2025) aims to streamline GDPR and the AI Act, including a proposed new GDPR article explicitly confirming that legitimate interest is a valid legal basis for AI development — subject to documented balancing tests and opt-out rights. Adoption is expected by mid-2026.
What Regulators Have Said About AI Training Data
One of the thorniest GDPR questions for AI is whether personal data can lawfully be used to train models. Two major regulatory statements in late 2024 and 2025 provide the clearest guidance yet.
The EDPB’s Opinion 28/2024 (December 2024) addressed three critical questions:
- Can AI models trained on personal data be considered anonymous? Not automatically. Anonymization requires a case-by-case assessment evaluating whether individuals can be identified from the model or whether personal data can be extracted through queries.
- Can legitimate interest justify AI training? Yes, subject to three cumulative conditions: identifying a concrete legitimate interest, demonstrating that processing is truly necessary, and ensuring individual rights are not overridden.
- What if training data was collected unlawfully? Unlawful training data could taint the lawfulness of the model’s deployment — unless the model has been duly anonymized.
The French CNIL published detailed guidance in June 2025 confirming that legitimate interest can justify web scraping of publicly accessible data for AI training, subject to strict conditions: defining precise collection criteria in advance, excluding unnecessary data categories, respecting data subjects’ expectations, and conducting a documented balancing test.
LLM-Specific Privacy Risks: What the EDPB Found
In April 2025, the EDPB published a comprehensive report on LLM privacy risks that every organization deploying language models should read. It identifies four categories of risk specific to LLMs:
- Data leakage — Personal data inadvertently included in model outputs
- Memorization — Models retaining verbatim training data that can be extracted through targeted prompts
- Unintentional profiling — Inferring sensitive attributes (health conditions, political opinions) from seemingly innocuous data
- Re-identification — Combining model outputs to identify individuals whose data was used in training
Recommended mitigations include differential privacy, federated learning, synthetic data generation, and systematic testing for memorization and leakage before deployment.
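A minimal memorization probe can be run before deployment. The sketch below assumes only that you can wrap your model behind a `generate` callable (prompt in, text out) and that you hold a set of "canary" strings known to be in the training data; everything else is illustrative:

```python
def check_memorization(generate, canaries, prefix_len=20):
    """Probe a text model for verbatim recall of known training strings.

    `generate` is any callable prompt -> completion (an assumption: your
    model API wrapped to return plain text). `canaries` are strings you
    know were in the training data, or planted there deliberately.
    """
    leaked = []
    for secret in canaries:
        prompt, expected = secret[:prefix_len], secret[prefix_len:]
        completion = generate(prompt)
        # Flag the canary if the model reproduces the withheld suffix verbatim
        if expected and expected in completion:
            leaked.append(secret)
    return leaked
```

A real test suite would also vary prompt phrasing and sampling temperature, since memorized content often surfaces only under specific decoding settings.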
Enforcement Actions That Show Regulators Mean Business
Recent enforcement demonstrates that regulators are actively targeting AI-related GDPR violations:
- OpenAI fined €15 million by Italy’s Garante (December 2024) for lacking a legal basis for training ChatGPT on personal data, failing transparency obligations, and inadequate age verification. OpenAI was also ordered to conduct a six-month public awareness campaign.
- TikTok fined €530 million by the Irish DPC (May 2025) — the first major GDPR enforcement for data transfers to China. TikTok’s Standard Contractual Clauses were found insufficient to protect EEA user data.
- Clearview AI accumulated over €100 million in fines from Dutch, Greek, French, and Italian DPAs for scraping billions of facial images. The UK Upper Tribunal upheld jurisdiction over Clearview in October 2025, rejecting the argument that serving only foreign law enforcement exempted it from data protection law.
- LinkedIn fined €310 million (October 2024) for behavioral analysis violations, then forced to modify its AI training plans after DPC intervention — limiting data volume, improving transparency, and excluding children’s data.
- Meta received DPC approval to train generative AI on EU user data (May 2025), but only after implementing updated transparency notices, extended notice periods, in-app objection forms, data de-identification, and updated risk assessments.
The pattern is clear: regulators are not waiting for the EU AI Act. They’re using GDPR’s existing toolkit aggressively against AI companies.
Data Protection Impact Assessments: Required for Most AI Systems
Article 35 requires a DPIA before processing that is “likely to result in a high risk to the rights and freedoms of natural persons.” AI systems trigger this requirement in virtually every deployment because they typically involve systematic evaluation of personal aspects, processing at scale, and innovative use of new technologies.
Both the UK ICO and the French CNIL have confirmed that AI-powered systems, especially those involving automated decision-making, require a DPIA before deployment. The ICO’s audit of AI recruitment tools (published November 2024) found widespread compliance gaps, including tools that collected more data than necessary and scraped candidates’ social media profiles. The ICO issued nearly 300 recommendations, all accepted by the audited providers.
The EU AI Act’s Fundamental Rights Impact Assessments (FRIAs) align closely with GDPR DPIAs, so organizations can build on existing DPIA infrastructure rather than creating parallel processes.
A Practical Compliance Framework for AI Under GDPR
Based on the latest regulatory guidance, enforcement trends, and the approaching EU AI Act deadline, here’s a framework for deploying GDPR-compliant AI:
1. Establish Your Legal Basis Before You Build
Determine whether you’re relying on consent, contractual necessity, or legitimate interest for each processing activity. If using legitimate interest for AI training, document the three-part balancing test required by the EDPB: identify the specific interest, demonstrate necessity, and show that individual rights are not overridden. The upcoming Digital Omnibus may codify this, but don’t wait.
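The three-part test is easiest to enforce if it is captured as a structured record rather than prose in a wiki. The following is an illustrative schema of our own devising (field names are assumptions, not EDPB terminology):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LegitimateInterestAssessment:
    """Documented three-part balancing test (illustrative schema)."""
    processing_activity: str
    interest: str           # 1. the concrete, lawful interest pursued
    necessity: str          # 2. why less intrusive means will not work
    balancing: str          # 3. why individual rights are not overridden
    safeguards: list[str]   # mitigations that tip the balance
    assessed_on: date

    def is_complete(self) -> bool:
        # All three limbs must be substantively documented
        return all([self.interest, self.necessity, self.balancing])
```

Gating deployment pipelines on `is_complete()` for each processing activity makes "document the balancing test" an enforced precondition rather than a policy aspiration.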
2. Build Explainability Into Your Architecture
After the CJEU’s Dun & Bradstreet ruling, the “right to explanation” is real and enforceable. Your AI must be able to articulate which personal data it used and how it reached its decision, in terms a non-technical individual can understand. This isn’t about disclosing algorithms — it’s about enabling individuals to exercise their rights.
3. Implement Human-in-the-Loop for High-Stakes Decisions
Article 22 requires human intervention rights for solely automated decisions with legal effects. But don’t assume a rubber-stamp human review satisfies this. AI Act Article 86 extends explanation rights to decisions where AI contributes to the outcome, even with a human in the loop. The human review must be meaningful and substantive.
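In code, this becomes a routing policy: high-stakes decision types never auto-complete, and low-stakes ones still escalate when the model is uncertain. A minimal sketch, with decision types and the confidence threshold as assumptions you would tune per deployment:

```python
# Decision types with legal or similarly significant effects (illustrative)
HIGH_STAKES = {"credit_denial", "job_rejection", "claim_denial"}

def route_decision(decision_type, model_confidence, threshold=0.8):
    """Return 'human_review' or 'auto' for a proposed AI decision.

    High-stakes types always go to a human, regardless of confidence,
    so the review is substantive rather than a rubber stamp on the
    model's output.
    """
    if decision_type in HIGH_STAKES:
        return "human_review"
    if model_confidence < threshold:
        return "human_review"
    return "auto"
```

A production version would also record the reviewer’s identity and rationale, since a review that leaves no trace is hard to defend as "meaningful."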
4. Conduct DPIAs Before Deployment
Don’t deploy first and assess later. Map the personal data flows, identify risks (including the LLM-specific risks the EDPB identified: leakage, memorization, profiling, re-identification), and document mitigations. Build your DPIA to serve double duty as a Fundamental Rights Impact Assessment for EU AI Act compliance.
5. Create Comprehensive Audit Trails
Every automated decision touching personal data should be logged with: the identity of the system, the data inputs, the reasoning process, the output, a timestamp, and the applicable policy. This serves GDPR’s transparency requirements, satisfies Article 22’s explanation obligations, and prepares you for the EU AI Act’s record-keeping requirements.
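One common way to make such a log tamper-evident is hash chaining: each entry commits to the hash of its predecessor, so any retroactive edit breaks verification. A self-contained sketch (not a substitute for a hardened logging service):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained log of automated decisions (sketch)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def record(self, system, inputs, reasoning, output, policy):
        entry = {
            "system": system,
            "inputs": inputs,
            "reasoning": reasoning,
            "output": output,
            "policy": policy,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,  # commits to the predecessor
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute every hash; any edited entry breaks the chain."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = digest
        return True
```

Note that the entry fields mirror the list above: system identity, inputs, reasoning, output, timestamp, and policy are all captured at decision time.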
6. Design for Data Subject Rights
Individuals have the right to access, rectify, erase, and object to processing of their personal data — including data used by AI systems. Build mechanisms to: respond to subject access requests about AI decisions, correct inaccurate data that fed into decisions, delete personal data from training pipelines when requested, and provide opt-out mechanisms for automated decision-making.
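A toy dispatcher shows the shape of these mechanisms. The in-memory `store` (a dict of subject ID to records) is a stand-in for whatever your real data layer is; in production, erasure must also propagate to training pipelines and downstream copies:

```python
def handle_subject_request(kind, subject_id, store):
    """Dispatch a data subject request against a simple record store.

    `store` maps subject_id -> list of record dicts (illustrative only).
    Supported kinds: 'access', 'erasure', 'objection'.
    """
    if kind == "access":
        # Article 15: return everything held about the subject
        return store.get(subject_id, [])
    if kind == "erasure":
        # Article 17: remove the records; real pipelines must also
        # propagate deletion to training data and derived copies
        return len(store.pop(subject_id, []))
    if kind == "objection":
        # Article 21/22: flag the subject out of automated decisions
        for record in store.get(subject_id, []):
            record["automated_decisions"] = False
        return True
    raise ValueError(f"unsupported request type: {kind}")
```

The design point is that each right maps to a concrete, testable operation against your data layer, which is exactly what a supervisory authority will ask you to demonstrate.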
Industry-Specific Considerations
- Healthcare: Health data under Article 9 requires explicit consent or specific legal authorization. AI processing health records for diagnosis or treatment recommendations triggers both GDPR and EU AI Act high-risk requirements.
- Financial Services: Credit scoring AI falls squarely under Article 22 after the SCHUFA ruling. Both the entity running the scoring model and the entity acting on scores have compliance obligations.
- HR & Recruitment: AI recruitment tools are classified as high-risk under the EU AI Act. The ICO’s audit found systemic issues with data minimization and transparency in this sector.
- Insurance: Automated underwriting and claims processing involve profiling under GDPR and fall under EU AI Act high-risk classification for decisions affecting access to essential services.
Building GDPR-Compliant AI Infrastructure
GDPR compliance for AI isn’t a one-time exercise — it’s an ongoing architectural requirement. Every automated decision needs a legal basis, an explanation capability, an audit trail, and a human escalation path.
At Aiqarus, our platform is built with these requirements at the foundation. Cryptographic audit trails log every AI decision with full context — inputs, reasoning, outputs, timestamps — creating the tamper-evident records that regulators require. Bounded autonomy defines exactly what agents can and cannot do, with mandatory human-in-the-loop controls for high-stakes decisions. And transparent reasoning means every decision can be explained in human-understandable terms, satisfying the right to explanation that the CJEU has now confirmed.
With the EU AI Act’s high-risk obligations taking effect in August 2026, the window for building compliant infrastructure is closing. Organizations that start now will be ready. Those that wait will be scrambling.
Aiqarus Team
Building enterprise-grade AI agents for regulated industries.
Ready to Deploy Trustworthy AI?
Deploy AI agents with transparent reasoning and complete audit trails.