DOCUMENTATION · COMPLIANCE

GDPR and AI in 2026:
why on-premise is the safest path

A practical guide for lawyers, Data Protection Officers (DPOs) and compliance officers. GDPR Articles 5/6/9/22/32/35, Schrems II, the EU-US Data Privacy Framework, AI Act 2024/1689, DPIA for on-premise AI, the CNIL approach and the Polish DPA trend in 2024-2026 - all in one place, with verbatim quotes from the legal acts and links to official sources. RODO is the Polish equivalent of GDPR; throughout this article we use the international term GDPR.

Reading time: 22-26 min · ~5,200 words
SECTION 1 / 9

GDPR basics for AI: six articles that define the framework

The General Data Protection Regulation (GDPR - known in Poland as RODO; Regulation (EU) 2016/679) entered into force on 25 May 2018 and remains the foundational act for every AI deployment in the European Union. The AI Act of 2024 does not replace the GDPR - it operates alongside it, and where personal data is concerned the GDPR takes precedence. For a compliance officer this means that on-premise AI is not "GDPR-safe by design" simply because the slogan "server in Poland" appears on a slide. It is a defensible GDPR path when the architecture, contracts and controls minimise transfer, telemetry and audit-trail risks. Below are six GDPR articles worth knowing verbatim and by name.

Article 5 - principles of processing

Article 5 GDPR defines seven processing principles: lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability. These are the principles almost every Polish DPA decision relies on. An on-premise architecture supports minimisation (data physically does not leave the organisation), it caps retention (the controller has physical control over the lifecycle of the logs) and it strengthens accountability (the entire audit trail is local, exportable, and signed with a hash).
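The "audit trail signed with a hash" mentioned above can be made concrete. Below is a minimal Python sketch of a hash-chained audit log, assuming a simple append-only list of entries; the field names and the SHA-256 chaining scheme are illustrative, not a description of any specific product:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log: list, user: str, prompt: str) -> dict:
    """Append a hash-chained audit entry. Each entry's hash covers the
    previous entry's hash, so later tampering breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        # Store only a digest of the prompt, not the personal data itself
        # (data minimisation, Article 5(1)(c)).
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every link; returns False if any entry was altered."""
    prev = "0" * 64
    for e in log:
        if e["prev_hash"] != prev:
            return False
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Exporting such a log for a supervisory authority then amounts to serialising the list plus the final hash; the authority can re-run the verification independently.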

Article 6 - six legal bases

Article 6 GDPR lists six legal bases for processing of ordinary personal data: consent (point a), contract (point b), legal obligation (point c), protection of vital interests (point d), task carried out in the public interest (point e), and legitimate interest (point f). For private-sector AI the most common bases are contract (e.g. serving an accounting firm's client) or legitimate interest (e.g. an internal compliance assistant), provided that a balancing test (LIA - Legitimate Interest Assessment) has been performed and documented.

Article 9 - special categories of data

Article 9 GDPR prohibits the processing of data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, data concerning health and data concerning a natural person's sex life or sexual orientation, unless one of the ten exceptions in paragraph 2 applies (including explicit consent, employment law, vital interests, political or religious activity, data manifestly made public by the data subject, legal claims, important public interest, preventive medicine, archiving, and scientific research). For law firms handling medical malpractice cases or for accounting firms running HR services in the medical sector this means that any prompt to an external LLM may contain special categories of data, and the processing itself requires both an Article 6 basis and an Article 9(2) exception. That bar is very high for SaaS AI; it is considerably lower for on-premise.

Article 22 - the right to human intervention

Article 22 GDPR sits at the heart of every discussion about AI and automated decision-making. It establishes the right of the data subject not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects or similarly significantly affects them. The full text (verbatim quote):

"1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or (c) is based on the data subject's explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject's rights and freedoms and legitimate interests are in place."

Source: Regulation (EU) 2016/679, Article 22(1)-(4). Consolidated text on EUR-Lex.

The practical implications of Article 22 for on-premise AI are very concrete. First, if an AI system recommends a decision (e.g. "approve this invoice because it matches the white list") and the human merely clicks "OK" without a real assessment, a supervisory authority may treat that as an automated decision within the meaning of Article 22. Second, paragraph 4 expressly forbids basing automated decisions on special categories of data (Article 9), unless very narrow conditions are met. Third, on-premise makes the duty of "human intervention" easier to satisfy: the model, the context and the reasoning chain all stay local and interpretable - in contrast to a "black box" sitting in someone else's cloud.

Article 32 - security of processing

Article 32 GDPR requires the controller and the processor to implement "appropriate technical and organisational measures" proportionate to the risk. The Polish DPA practice in 2024-2026 makes it clear that the authority looks at risk analysis as the foundation - the PLN 40,000 fine for SPZOZ Pajęczno (Polish DPA decision, 2024) was imposed precisely for the absence of such an analysis prior to a ransomware attack. By default, on-premise AI shrinks the attack surface: fewer transfers outside the customer environment, no product call-home by default and less dependence on external sub-processors in the chain.

Article 35 - Data Protection Impact Assessment (DPIA)

Article 35 GDPR requires a DPIA when processing - in particular using new technologies - is likely to result in "high risk" to the rights and freedoms of natural persons. Paragraph 3(b) explicitly flags large-scale processing of special categories of data under Article 9 or data relating to criminal convictions under Article 10 as cases that require a DPIA. Deployments of AI in a medical law firm, a labour law firm processing biometric employee data, or an accounting firm serving medical clients almost always require a DPIA. Details follow in section 5.

Mapping on-premise AI onto the six GDPR articles: Article 5 (minimisation - data physically does not leave the premises), Articles 6 and 9 (an internal audit trail makes documenting the legal basis easier), Article 22 (interpretability with citations rather than a black box), Article 32 (control over the security measures themselves), Article 35 (a DPIA that is shorter because it does not need to cover US transfers, Schrems II, the Cloud Act, FISA 702, or sub-processor chains). For more context on the very concept of "private AI" - and how it differs from "local" or "on-premise" - see /en/docs/private-ai-fundamentals.

SECTION 2 / 9

Schrems II and US data transfer: why SCC alone are not enough

On 16 July 2020 the Court of Justice of the European Union (CJEU) handed down its judgment in case C-311/18 (Data Protection Commissioner v Facebook Ireland Ltd, Maximillian Schrems) - a ruling that reshaped the architecture of global compliance. The CJEU struck down the Privacy Shield adequacy decision (the previous EU-US transfer framework) and upheld the validity of the Standard Contractual Clauses (SCC) as a transfer basis under Article 46 GDPR, but with one fundamental caveat: SCC are valid only if the destination country provides protection "essentially equivalent" to that in the EU and the controller has performed a Transfer Impact Assessment (TIA) and put additional safeguards ("additional measures") in place where the assessment shows gaps.

The most consequential passage of the judgment is paragraph 184. The CJEU holds that the protection afforded by the transfer mechanism must be "actionable in practice" - not just a formal contractual statement, but a real, enforceable path of redress when third-country authorities reach for the data of EU subjects. Verbatim quote:

"The protection afforded by that mechanism must, in practice, be actionable."

Source: CJEU judgment of 16 July 2020, case C-311/18, paragraph 184. CURIA / Court of Justice of the EU.

A short sentence with far-reaching consequences. Every data controller in the EU - be it a Polish law firm, an accounting firm, a hospital, or a municipality - must actually verify whether a US-based SaaS provider is in a position to ensure that US authorities will not reach for client data. A piece of paper is not enough. A Polish enterprise user who pastes a client contract into ChatGPT is, in legal terms, performing a transfer to a third country, and the absence of an adequate "additional measure" exposes the controller to a fine under Article 83 GDPR.

Implications for US-based AI providers

  • OpenAI (San Francisco, USA) - every API request lands on the infrastructure of a US company; even an enterprise contract with EU data residency does not exempt OpenAI from the Cloud Act and FISA 702 regimes.
  • Anthropic (San Francisco, USA) - analogous: a US company under US jurisdiction; access to data for US authorities on the basis of domestic warrants.
  • Microsoft Azure / Google Cloud - partly certified under the EU-US Data Privacy Framework (section 3 below), but even full DPF certification does not switch off US law as it applies to a US provider. The Cloud Act and FISA 702 continue to apply.

From a DPIA perspective the practice is as follows. If a Polish medical law firm uses Microsoft Copilot in Microsoft 365 with data residency in Frankfurt, the Schrems II assessment must answer two questions: (1) can Microsoft Corporation, as a US entity, be the addressee of a Cloud Act warrant for data stored in Germany? (2) are there "additional measures" - including client-side encryption with a key Microsoft does not hold - that practically rule out access by US authorities? Without a positive answer at both levels the transfer remains risky. Many law firms in 2024-2026 chose the simpler path: blocking cloud AI for medical and legal client data, and instead deploying an on-premise model (e.g. BezChmury 11B v3 or PLLuM) with local RAG.

We continue this logic in section 3 - the 2023 DPF partially lowered transfer risk, but only for certified entities and only within the scope expressly covered by the framework. The Cloud Act and FISA 702 remain in the background.

SECTION 3 / 9

DPF 2023, Cloud Act and FISA 702: three pillars of transfer risk

On 10 July 2023 the European Commission adopted Implementing Decision (EU) 2023/1795 finding that the United States ensures an adequate level of protection for personal data transferred to organisations participating in the EU-US Data Privacy Framework (DPF). This decision is the successor to Privacy Shield and - in the Commission's view - addresses the concerns raised in the Schrems II judgment. For a compliance officer this means that a transfer to a certified DPF participant is, prima facie, permissible under Article 45 GDPR without a separate TIA. But - and this is a critical "but" - only for entities expressly listed on the official DPF list, and only within the scope of their certification.

DPF certified providers (status as of 1 May 2026)

The official list of DPF participants is published and updated by the U.S. Department of Commerce at https://www.dataprivacyframework.gov/list. For the most commonly used AI providers, the status as of 1 May 2026 looks as follows (source: direct verification of the DPF participant pages):

  • Google LLC - confirmed active DPF certification (dataprivacyframework.gov/participant/5780).
  • Microsoft Corporation - confirmed active DPF certification (dataprivacyframework.gov/participant/6474).
  • Amazon.com, Inc. (covering AWS) - confirmed active DPF certification (dataprivacyframework.gov/participant/5776).
  • OpenAI - DPF certification NOT CONFIRMED as of 1 May 2026 (the official participant entry could not be confirmed). This is not proof of any absence of a transfer basis - OpenAI uses SCC and may have its own Article 46 GDPR mechanisms - but for publication purposes the status "DPF-certified" must not be assumed automatically.
  • Anthropic - DPF certification NOT CONFIRMED as of 1 May 2026 (analogous to OpenAI; status requires a separate verification per specific organisation and per specific service scope).

A compliance officer contracting an AI service from a US provider should always check the current status on the DPF list - and write the specific entity into the contract (e.g. "Microsoft Corporation, certificate #6474, scope: Cloud Service Provider") together with the scope of services covered by certification. Brand alone ("Microsoft", "Google", "AWS") is not sufficient: a corporate group may include several entities with different statuses.

Cloud Act 2018 and FISA 702 - what DPF does not erase

The Clarifying Lawful Overseas Use of Data Act (Cloud Act) of 2018 authorises US law enforcement to demand access to data held by providers subject to US jurisdiction - even if the server is physically located in the EU. The mechanism is domestic: it relies on warrants issued by US federal courts. A US company, regardless of the data centre region, is the addressee of such a warrant.

The Foreign Intelligence Surveillance Act, section 702 (FISA 702) goes further: it authorises US intelligence agencies to conduct programmatic surveillance of communications of non-US persons located outside the US, where those communications transit the infrastructure of US "electronic communication service providers". FISA 702 was precisely the centrepiece of the CJEU's argument in Schrems II - the Court found that this surveillance does not meet the test of "proportionality" required by the EU Charter of Fundamental Rights.

The practical consequence for AI workloads is as follows:

  1. Microsoft Azure with DPF certification + EU data residency = lowered risk for routine workloads (CRM, ERP, e-mail). DPF removes the need for a separate TIA for administrative transfers.
  2. Microsoft Azure with DPF + EU data residency for AI workloads (e.g. ChatGPT-like inference) = Cloud Act and FISA 702 risk still applies. DPF does not switch off US law as it applies to Microsoft Corporation as a US entity.
  3. On-premise (e.g. BezChmury 11B running locally) = materially reduced transfer surface, because processing can remain in the customer's environment. The Polish law firm still needs a deployment-specific assessment under the GDPR.

The diagram below illustrates how transfer risk shifts after the introduction of DPF (2023) and after the deployment of on-premise (on the client side).

[Diagram: Schrems II flow — three regimes: BEFORE DPF (07.2020-07.2023: PL controller → SCC + TIA → US SaaS processor → Cloud Act / FISA 702 → US authorities, risk HIGH); AFTER DPF (from 07.2023, certified entities: PL controller → Art. 45 (DPF) → US SaaS (e.g. Microsoft, Google) → Cloud Act remains → US authorities, risk MEDIUM); ON-PREMISE (local BezChmury 11B: PL controller → no transfer → local model, no default path to US authorities, risk LOWER).]
Diagram 1. Schrems II flow: across three regimes you can see how DPF lowers transfer risk (from red to amber), but only on-premise eliminates it altogether. The Cloud Act and FISA 702 continue to apply to US companies regardless of the data centre's location.

The practical conclusion of section 3: DPF + EU data residency lowers risk but does not eliminate it. On-premise delivers a hard compliance posture without the need to actively manage "additional measures" and without the constant monitoring of the DPF list (which can evolve - for instance if NOYB successfully challenges DPF before the CJEU in a "Schrems III" case, on which we comment cautiously because the timeline cannot be predicted reliably).

SECTION 4 / 9

AI Act 2024/1689 - timeline and what it means for on-premise AI

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (the "AI Act") was published in the Official Journal of the EU, L series, on 12 July 2024 and entered into force on 1 August 2024. The AI Act is the world's first horizontal legal act regulating artificial intelligence - in that sense it has global significance. The purpose of the regulation is set out in Recital 1:

"The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union… while ensuring a high level of protection of health, safety, fundamental rights… and to support innovation."

Source: Regulation (EU) 2024/1689, Recital 1. EUR-Lex, accessed: 1 May 2026.

For a compliance officer, Recital 1 is a clear signal: the AI Act is not an act prohibiting AI. It is an act that organises the market, protects fundamental rights and supports innovation at the same time. The same recital can be cited by the deployer as a justification for the project ("the AI Act allows our model if we meet the requirements") and by the client's lawyer ("the AI Act requires you to deploy this safely").

Definition of an "AI system" - Article 3(1)

"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Source: Regulation (EU) 2024/1689, Article 3(1). EUR-Lex, accessed: 1 May 2026.

The definition is deliberately broad. It covers both classical ML (a decision-tree classifier) and LLMs (Bielik, GPT, Claude). Three elements are key: (1) machine-based, (2) varying levels of autonomy, (3) inferring from input - generating output. A KSeF assistant that reads an accountant's question and returns a recommendation with a citation to a statute is an "AI system" within the meaning of the AI Act.

Implementation timeline - staged entry into application

The AI Act is designed as a staged act. The schedule:

  • 1 August 2024 - entry into force of the regulation (20 days after publication in the OJ EU).
  • 2 February 2025 - application of the prohibitions in Article 5 (including the ban on manipulation, social scoring and real-time biometric recognition in public space, with specified exceptions).
  • 2 August 2025 - application of the requirements for General Purpose AI (GPAI) models, including the transparency obligation: technical documentation, training data summary, and copyright policies.
  • 2 August 2026 - application of the requirements for most high-risk systems under Annex III (employment, education, access to public services, law enforcement, migration and asylum, the administration of justice).
  • 2 August 2027 - full application of the remaining requirements, including for high-risk systems under Annex I (products already regulated by sectoral law, e.g. medical devices).

For a compliance officer in a Polish law firm or accounting firm, the critical year is 2026: 2 August 2026 is the date from which most high-risk systems under Annex III must be fully compliant. Good news: a typical on-premise KSeF assistant or compliance assistant for an accounting firm is not high-risk. The European Commission's official AI Act Service Desk FAQ makes it clear that classification follows from a specific use case in Annex III, not from the mere fact that AI is being used in a given sector. An "internal document assistant" or "Q&A tool" used by an accountant or lawyer does not become high-risk automatically if it does not fit any Annex III category.

What does the AI Act require of BezChmury as an AI system? First, technical documentation - a description of the architecture (BezChmury 11B v3, RAG, local SSoT base of 630 facts), a description of the base model's training data, the intended purpose ("compliance assistant for Polish accounting firms and law firms"), and the limitations (defined scope: KSeF, GDPR, ZUS; out-of-scope domains excluded). Second, transparency to the end user - a notice that the response comes from AI, that it may contain errors, and that every fact carries a source citation. Third, GPAI obligations for the BezChmury 11B v3 model itself (on the SpeakLeash side as the developer). Help for compliance officers is provided by the European Commission's AI Act Service Desk (ai-act-service-desk.ec.europa.eu), which hosts the official FAQ, the AI Act Explorer and contextual help for specific use cases.

Detailed guidance for deploying BezChmury in a client's infrastructure - including audit-ready technical documentation - is available at /en/docs/ksef-bezchmury.

SECTION 5 / 9

DPIA template for on-premise AI: nine sections

A Data Protection Impact Assessment (DPIA) is required under Article 35 GDPR for any processing operation that presents a high risk. The European Data Protection Board (EDPB) in Opinion 28/2024 of December 2024 explicitly states that for AI one must take into account, among other things, the processing of special categories of data, automated decision-making or profiling, data protection by design (privacy by design) and the DPIA itself. Below is a nine-element DPIA template for an on-premise AI deployment, grounded in the structure of Article 35(7) GDPR and EDPB / Polish DPA guidance.

  1. Processing purposes. Business description: e.g. "local KSeF assistant for an accounting firm - answers to accountants' and clients' questions about KSeF procedures, error codes and the FA(3) validator, with no client data leaving the premises". The purpose should be specific, measurable, and must not be extended without updating the DPIA (the purpose limitation principle of Article 5).
  2. Legal basis. Article 6(1)(b) (contract with the firm's client) or 6(1)(f) (legitimate interest - improving service delivery). If special categories of data are processed (e.g. a medical malpractice firm), additionally Article 9(2) - most often (f) (legal claims) or (h) (preventive medicine purposes).
  3. Categories of personal data. Specific fields: accounting data (NIP, address, REGON, contact data), legal data (case content, PESEL, financial data), medical data (clinical documentation, ICD-10 codes, biometric data). For on-premise AI: a statement that data does not leave the client's local infrastructure.
  4. Categories of data subjects. Clients of the accounting firm (natural persons and companies), clients of the law firm (principals, witnesses, parties to proceedings), employees of the controller, cooperating entities (counterparties, experts).
  5. Retention. Storage period - e.g. 5 years for accounting documents (Polish Accounting Act and Tax Ordinance), 50 years for medical records (Polish Patients' Rights Act), 10 years for case files in a law firm. For AI logs: typically 12-24 months, with automatic deletion after expiry.
  6. Technical and organisational measures (Article 32). For on-premise: full encryption at rest (LUKS / FileVault / BitLocker), access control (RBAC, MFA), an audit log (every AI query with content hash, user, timestamp), no external telemetry, no call-home, an isolated network (a VLAN for the AI workload), regular backups with key rotation.
  7. Risk assessment. Likelihood × impact matrix (low/medium/high) for the following scenarios: (a) data leak from the local machine, (b) unauthorised access to AI logs, (c) model error leading to an incorrect recommendation, (d) ransomware attack on the workstation hosting the model, (e) use of the model outside its scope. For on-premise, the transfer- and provider-side scenarios that dominate a cloud-SaaS DPIA (sub-processor access, third-country disclosure) are significantly lower or absent; the local scenarios (a)-(e) above still need to be scored.
  8. Mitigation measures. Pseudonymisation at the model input (e.g. replacing PESEL with a hash before retrieval), context minimisation (RAG returning only top-K documents), regular audits (quarterly review of AI logs), staff training (how to ask questions without revealing unnecessary data), hallucination tests (a probe set of 100-200 control questions every quarter).
  9. Consultations with the Polish DPA. Required under Article 36 GDPR when the DPIA reveals high risk that cannot be reduced by mitigation. For on-premise AI in a typical deployment (compliance assistant for an accounting firm) consultations are rare, because mitigation measures (locality, no transfer, audit log) effectively bring the risk down.
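The pseudonymisation described in step 8 can be sketched in a few lines of Python. This is a hedged illustration, not a complete PII detector: the 11-digit regex, the token format and the salt handling are all assumptions, and a production deployment would also validate the PESEL checksum and pull the salt from a local secret store:

```python
import hashlib
import re

# Illustrative pattern: any standalone 11-digit sequence is treated as a
# candidate PESEL. A real filter would verify the PESEL check digit too.
PESEL_RE = re.compile(r"\b\d{11}\b")

def pseudonymise(text: str, salt: bytes) -> str:
    """Replace every PESEL-like sequence with a salted, truncated hash
    token before the text reaches the retrieval/model layer. The same
    PESEL always maps to the same token, so references stay consistent
    within a session without exposing the raw identifier."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256(salt + match.group().encode()).hexdigest()
        return f"[PESEL:{digest[:12]}]"
    return PESEL_RE.sub(_token, text)
```

A usage example: `pseudonymise("Case of client 44051401359", salt)` yields the text with the number replaced by a `[PESEL:…]` token; the mapping can be reversed only by whoever holds the salt and a lookup table kept outside the model context.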

Comparative DPIA: cloud AI vs on-premise AI

A DPIA for cloud AI must cover additional chapters: an adequacy assessment of the transfer (Schrems II, DPF, SCC, TIA), an inventory of sub-processors (each in a separate table with location and DPF status), Cloud Act / FISA 702 risk, redress mechanisms for data subjects, "additional measures" (including client-side encryption with a customer-held key - Customer Key, BYOK). A DPIA for on-premise AI is shorter - focus on local security, retention, model risk, but without having to weave together a chain of nineteen sub-processors across three jurisdictions.

A sample DPIA for BezChmury KSeF Private - an example excerpt for an accounting firm serving 80 clients - contains the following entries: purpose = "local KSeF and compliance assistant", legal basis = Article 6(1)(b) (contract) plus Article 6(1)(f) (legitimate internal interest), categories of data = accounting data (NIP, REGON, address, contact data, invoice amounts), AI log retention = 24 months, measures = AES-256 disk encryption, RBAC, audit log with a hash of every query, no transfer to the US, no external telemetry. Residual risk after measures: low. Consultation with the Polish DPA: not required. This outline is a starting point, not a finished document - every organisation has its own specifics and its own DPO.

[Diagram: DPIA decision tree — Q1: special categories of data (Art. 9) on a large scale? YES → DPIA required, Art. 35(3)(b). Q2: automated decision (Art. 22) with legal effect? YES → DPIA required, Art. 35(3)(a). Q3: other high risk / new technology? YES → DPIA required/recommended (screening + documentation). All NO → DPIA not required; keep a register.]
Diagram 2. DPIA decision tree for an AI deployment. Three key questions: (Q1) is special category data being processed on a large scale? (Q2) does the system make automated decisions with legal or similarly significant effects? (Q3) are there other high-risk factors, e.g. new technology, public monitoring, or children's data? A "yes" to any question = DPIA required or strongly recommended.
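The same screening logic can be expressed as a small helper — a sketch mirroring Diagram 2's three questions (the parameter names are illustrative, and the function encodes only the screening shortcut, not a full Article 35 analysis):

```python
def dpia_needed(special_categories_large_scale: bool,
                automated_decision_legal_effect: bool,
                other_high_risk: bool) -> str:
    """Three-question DPIA screening from Diagram 2. Returns the outcome
    together with the GDPR anchor where one applies. Questions are
    evaluated in order; the first 'yes' decides."""
    if special_categories_large_scale:
        return "DPIA required (Art. 35(3)(b))"
    if automated_decision_legal_effect:
        return "DPIA required (Art. 35(3)(a))"
    if other_high_risk:
        return "DPIA required/recommended (screening + documentation)"
    return "DPIA not required (keep a register)"
```

For the typical on-premise accounting-firm assistant described in this guide, the honest answers are often "no / no / maybe" — which is exactly why the screening itself, not just its outcome, should be documented in the register.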
SECTION 6 / 9

CNIL: a French approach to generative AI as a benchmark

The Commission nationale de l'informatique et des libertés (CNIL) is the French data protection regulator - the counterpart of the Polish DPA (UODO). Since 2024 CNIL has been running an action plan for generative AI that has become one of the most important soft interpretive sources for European compliance. CNIL does not ban AI; it requires "responsible deployment" - a deployment grounded in a documented risk assessment.

Five CNIL priorities in the AI area (based on the official action plan and guidance):

  • Data protection - minimisation, legal basis, and transparency to individuals whose data is used for training or inference.
  • Transparency - disclosure that the response comes from AI, a description of the intended purpose and limitations.
  • Fair access - access for everyone, without excessive concentration with a single provider.
  • Human oversight - for material decisions a human must have a real ability to intervene (in the spirit of Article 22 GDPR).
  • Privacy by design - data protection built into the architecture, not bolted on afterwards.

"This guide will therefore be relevant for some of the data processing necessary for the design of AI systems, including generative AIs."

Source: CNIL, Artificial Intelligence Action Plan. cnil.fr/en/artificial-intelligence-action-plan-cnil, accessed: 1 May 2026.

CNIL has also published five Q&As on generative AI that are the most practical compilation of the EU interpretive direction. The most important answers: (1) yes, generative AI may process personal data, but it requires a DPIA and transparency; (2) consent is not always required - legitimate interest (Article 6(1)(f)) is often sufficient if it has passed a balancing test; (3) training data may be personal data and requires assessment - in particular the scraping of public data does not exempt anyone from GDPR; (4) output data may contain personal data and must be treated analogously to input data; (5) on-premise and local models reduce the risk surface but do not exempt anyone from a DPIA.

The practical lesson for Polish compliance: the Polish DPA is likely to follow a similar path. Polish decisions in 2024-2026 (section 7) show that the authority focuses on risk analysis, mitigation measures and documentation, rather than on banning specific technologies. A compliance officer who keeps up with the current CNIL writing has a sound basis for designing a DPIA - including for on-premise deployments.

SECTION 7 / 9

Polish DPA trend 2024-2026: fines and the focus on risk analysis

The Polish DPA (UODO - Urząd Ochrony Danych Osobowych) shifted the emphasis of its decisions in 2024-2026 from formal information-duty defects towards risk analysis, appropriate technical and organisational measures (Article 32), and breach notification deadlines (Articles 33 and 34). Four concrete cases worth knowing:

PLN 1.5M
Medical entity (anonymised)

Polish DPA notice of 13 August 2024. A hacker attack covered a wide range of patient and employee data: health data, PESEL numbers, bank accounts, identity documents and passwords. The Polish DPA found that security measures were insufficient (Article 32) and the risk analysis was deficient. This is the largest Polish DPA fine of 2024 against a private entity.

PLN 40,000
SPZOZ Pajęczno

Ransomware attack. The Polish DPA found that the controller had not performed the risk analysis required before implementing security measures (Articles 24, 32, 34). The decision is an important precedent for the public sector - it shows that the Polish DPA looks at process, not only at the outcome of an incident.

PLN 29,648
Września County Hospital

Late breach notification (Article 33: 72 hours from discovery) and delayed notification of the data subjects (Article 34). The case demonstrates that deadlines are hard - the formal 72 hours has operational weight.

PLN 20,000
Police Sanitary Inspector (Sanepid)

Loss of a USB stick containing personal data (including health data). The Polish DPA found a lack of appropriate safeguards on portable media and a missing risk analysis (Articles 5(1)(f), 5(2), 32). A trivial event with very real financial consequences.

Decision DKN.5131.3.2025 - risk analysis as a mandatory requirement

In decision DKN.5131.3.2025 the Polish DPA required the controller to state whether it had performed the risk analysis necessary to determine whether the incident resulted in a breach requiring notification to the authority and to the data subjects. This is a directional signal: the regulator does not ask about fashionable AI labels; it asks for a documented procedure. A compliance officer deploying on-premise AI has a real advantage in this context - the local architecture simplifies the documentation (fewer sub-processors, no transfer, local log retention), which translates directly into a higher-quality DPIA.

A note on maximum fines: Article 102(1) of the Polish Personal Data Protection Act caps the maximum administrative fine for public-finance entities listed in Article 9(1)-(12) and (14) of the Public Finance Act at PLN 100,000, and for entities in Article 9(13) at PLN 10,000. For private entities the ceiling under Article 83 GDPR applies - up to EUR 20 million or 4% of annual turnover, whichever is higher.

Practical conclusions for an AI deployment: every implementation in the medical sector or on special categories of data requires a DPIA + a risk analysis + documented measures. On-premise reduces the risk surface by default but does not exempt anyone from the documentation. A case study of a medical law firm that deployed BezChmury in response to a Polish DPA audit is described in /en/case-studies/law-firm-marek.
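The "likelihood × impact" risk analysis the Polish DPA keeps asking for can be maintained as a small machine-readable register. A minimal sketch, assuming an illustrative 4×4 scale and a high-risk cut-off of 8 - both are assumptions to be set per your own DPIA methodology, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 4 (almost certain) - illustrative scale
    impact: int      # 1 (negligible) .. 4 (severe)   - illustrative scale

    @property
    def score(self) -> int:
        # The classic DPIA heuristic: risk = likelihood x impact.
        return self.likelihood * self.impact

HIGH_RISK_THRESHOLD = 8  # illustrative cut-off; define it in your DPIA methodology

def needs_prior_consultation(register: list[Risk]) -> bool:
    """Article 36 GDPR: consult the authority when residual risk remains high."""
    return any(r.score >= HIGH_RISK_THRESHOLD for r in register)

register = [
    Risk("prompt data leaves customer environment", likelihood=1, impact=4),  # on-premise: low likelihood
    Risk("local log retention exceeds policy", likelihood=2, impact=2),
]
print(needs_prior_consultation(register))  # False
```

The point of the sketch is documentation, not automation: a register like this, exported with each DPIA review, is exactly the kind of evidence decision DKN.5131.3.2025 asks controllers to produce.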

FAQ

FAQ for compliance officers: twelve questions and answers

Is on-premise AI GDPR-compliant?
On-premise AI is not GDPR-compliant "by design", but it eliminates the largest single class of GDPR risk: US data transfers under Schrems II. A DPIA under Article 35 GDPR is still required (typically narrower than for cloud AI - the focus shifts to local security). Full schema: DPIA template in 7 steps. Trigger: Polish DPA fines 2024-2026 plus decision DKN.5131.3.2025.
What is Schrems II?
CJEU judgment C-311/18 of 16 July 2020 invalidating Privacy Shield. Standard Contractual Clauses (SCC) require "additional measures" for transfers to the US. Verbatim quote from paragraph 184: "actionable in practice". Implication: every transfer of personal data to a US-based AI provider requires an adequacy assessment and a DPIA. Full context: Schrems II section.
Is ChatGPT GDPR-compliant?
OpenAI ChatGPT is NOT certified under DPF as of 1 May 2026 (Google and Microsoft are; OpenAI and Anthropic are NOT). Every prompt is a transfer to the US. A DPIA, an adequacy assessment, and client consent are required. Most Polish law firms block ChatGPT for medical client data because of the Cloud Act and FISA 702. Alternative: private on-premise AI.
What is the AI Act and when does it apply?
Regulation (EU) 2024/1689 of 13 June 2024. It entered into force on 1 August 2024. Staged timeline: GPAI obligations from 2 August 2025, general application (including Annex III high-risk) from 2 August 2026, and an extended transition to 2 August 2027 for high-risk AI embedded in regulated products under Article 6(1). High-risk list: HR, education, critical infrastructure, medical applications. Verbatim quotes from Recital 1 and Article 3(1) in the full AI Act article.
How do I prepare a DPIA for AI?
7 steps under Article 35 GDPR: (1) describe the processing purposes, (2) legal basis under Articles 6 and 9, (3) categories of data, (4) risk assessment (likelihood × impact), (5) mitigation measures, (6) prior consultation with the Polish DPA (if residual risk remains high, Article 36), (7) implementation, monitoring and review. Full HowTo DPIA in the article.
When is a DPIA mandatory?
Required under Article 35 GDPR when risk is "high" - e.g. medical data, credit scoring, employee monitoring, automated profiling. Trigger: Polish DPA decision DKN.5131.3.2025 (risk analysis as a mandatory requirement). Every AI deployment in the medical sector entails a mandatory DPIA. See law firm Marek case study.
What is DPF and does it shield against Schrems II?
Data Privacy Framework (EU decision 2023/1795 of 10 July 2023) - adequacy of US transfers for certified providers. Google and Microsoft are confirmed. OpenAI and Anthropic are NOT confirmed as of 1 May 2026. Plus the Cloud Act 2018 and FISA 702 = a "Schrems III" scenario remains plausible. Full context in the DPF section.
Can I use Microsoft Copilot for GDPR-relevant documents?
Microsoft Azure with DPF certification carries lower risk than ChatGPT, BUT the Cloud Act still applies (US authorities can request access even if the server is in the EU). For highly sensitive data (medical, client) on-premise remains preferred. A DPIA audit is required before deploying Copilot. Alternative: BezChmury 11B on-premise.
How much does a GDPR audit for AI cost?
It depends on scale. Small law firm: PLN 5,000-15,000. Medical entity: PLN 15,000-50,000. Enterprise (multi-jurisdiction): PLN 50,000-200,000. BezChmury Lite (PLN 149/month per seat) is a supporting tool, NOT a substitute for an auditor. Marek case study: AI audits for clients = a new niche.
What is the Cloud Act?
US Clarifying Lawful Overseas Use of Data Act (2018). US authorities can compel access to data held by US providers, EVEN when the server is physically located in the EU. Combined with FISA 702 = central risk for US-based AI vendors. On-premise materially reduces transfer exposure when processing remains in the customer environment. Full context: Cloud Act section.
Does the Polish DPA fine for incorrect AI?
Polish DPA trend 2024-2026: stronger emphasis on DPIA and risk analysis. Concrete cases: PLN 1.5M (medical entity, 13 August 2024), PLN 40,000 (SPZOZ Pajęczno, ransomware, missing risk analysis), PLN 29,648 (Września Hospital, late breach notification), PLN 20,000 (Police Sanitary Inspector, lost USB). Decision DKN.5131.3.2025 underlines risk analysis as a mandatory requirement. Full overview: Polish DPA trend.
Does CNIL ban generative AI?
No. The CNIL action plan from 2024 onwards ("On the deployment of AI systems") frames the goal as "responsible deployment", not a ban. Five priorities: data protection, transparency, fair access, human oversight, privacy-by-design. The Polish DPA is likely to follow a similar path. Full overview: CNIL action plan.
SECTION 9 / 9 · WHAT NEXT

A practical step: 15-minute demo, DPIA template, GDPR compliance flow

If this article surfaced gaps in your AI deployment or you need a concrete DPIA for on-premise, the fastest path is a 15-minute demo of BezChmury KSeF Private. We will show: (1) the local BezChmury 11B v3 model running with no internet connection, (2) a sample DPIA for an accounting firm and a law firm, (3) the GDPR compliance flow with the audit log and an export for the auditor, (4) integration with your existing ERP / KSeF / white list.

About the author

Dominik Witanowski - building BezChmury since 2024. Ten years in IT, a focus on source-backed scoring and private RAG for compliance in the Polish accounting, legal and medical sectors. Since 2026 responsible for the BezChmury product family (KSeF Private, Accountant Private, GDPR Private, Enterprise On-Prem), built around the local BezChmury 11B v3 model (SpeakLeash + ACK Cyfronet AGH) and an SSoT base of 630 verified facts. Deployments use an on-premise architecture, GDPR-aware controls and minimisation of transfers outside the customer environment.

Disclaimer. This article is informational material and does not constitute legal advice. Compliance decisions concerning on-premise AI, DPIAs, transfers to third countries and the classification of AI systems within the meaning of the AI Act should be taken in consultation with a DPO, legal counsel or tax adviser appropriate to your organisation. All quotes from legal acts come from official sources (EUR-Lex / CURIA / Polish DPA / CNIL); in case of any editorial doubt please verify against the direct text of the document.

BETA LIST · 30% DISCOUNT BEFORE LAUNCH

Be first when we launch in Q3 2026

Join the beta list - an exclusive circle of early BezChmury testers. Every two weeks I send a developer diary: what I am building, what is breaking, what I am deciding.

GLOSSARY

Glossary of terms used in this article

GDPR (RODO)
Regulation 2016/679 - protection of personal data in the EU. Key articles: 5 (principles), 6 (legal bases), 9 (special categories), 22 (automated decisions), 32 (security), 35 (DPIA). RODO is the Polish equivalent of GDPR.
DPIA
Data Protection Impact Assessment - assessment of the impact of processing on data protection. Required under Article 35 GDPR for 'high risk' processing.
Schrems II
CJEU judgment C-311/18 of 16 July 2020 invalidating Privacy Shield. SCC require 'additional measures' for transfers to the US.
DPF
Data Privacy Framework - adequacy decision (EU) 2023/1795 of 10 July 2023 for transfers to the US. Google and Microsoft are confirmed; OpenAI and Anthropic are NOT confirmed as of 1 May 2026.
Cloud Act
US Clarifying Lawful Overseas Use of Data Act 2018. US authorities can demand access to data held by US providers, even when the server is in the EU.
FISA 702
US Foreign Intelligence Surveillance Act, section 702 - programmatic surveillance for non-US persons. Schrems II was based, among other things, on this.
AI Act
Regulation (EU) 2024/1689 of 13 June 2024 - the first comprehensive European AI regulation. GPAI obligations from 2 August 2025; high-risk Annex III from 2 August 2026.
Annex III
List of high-risk AI systems in the AI Act: HR, education, critical infrastructure, the administration of justice, democratic processes, medical applications.
GPAI
General-Purpose AI Models - AI Act category for foundation models (GPT-4, BezChmury 11B, Llama). Transparency obligations from 2 August 2025.
CNIL
Commission nationale de l'informatique et des libertés - the French data regulator. Action plan for generative AI from 2024 onwards: 'responsible deployment' (NOT a ban).
Polish DPA (UODO)
Urząd Ochrony Danych Osobowych - the Polish data protection regulator. Trend 2024-2026: stronger emphasis on risk analysis (decision DKN.5131.3.2025).
On-premise
Deployment of a system on the client's / company's hardware, WITHOUT sending data to external servers. Antonym: cloud / SaaS.

SOURCES

Official sources and references

  [1] CJEU judgment C-311/18 of 16 July 2020 (Schrems II) - CURIA · https://curia.europa.eu/juris/liste.jsf?num=C-311/18 · accessed: 2026-05-01
  [2] Adequacy Decision (EU) 2023/1795 (DPF) - EUR-Lex · https://eur-lex.europa.eu/eli/dec_impl/2023/1795/oj · accessed: 2026-05-01
  [3] Regulation (EU) 2024/1689 (AI Act) - EUR-Lex · https://eur-lex.europa.eu/eli/reg/2024/1689/oj · accessed: 2026-05-01
  [4] GDPR Article 22 (automated decisions) - EUR-Lex · https://eur-lex.europa.eu/eli/reg/2016/679/oj · accessed: 2026-05-01
  [5] DPF participant list - U.S. Department of Commerce · https://www.dataprivacyframework.gov/list · accessed: 2026-05-01
  [6] CNIL action plan for generative AI - CNIL (Commission nationale de l'informatique et des libertés) · https://www.cnil.fr/en/artificial-intelligence-action-plan-cnil · accessed: 2026-05-01
  [7] Polish DPA notice of 13 August 2024 (PLN 1.5M fine, medical entity) - Polish DPA (UODO) · https://uodo.gov.pl · accessed: 2026-05-01
  [8] Polish DPA decision DKN.5131.3.2025 (risk analysis) - Polish DPA (UODO) · https://orzeczenia.uodo.gov.pl · accessed: 2026-05-01

All verbatim quotes in this article come from the official sources above. Inline references marked [N] link to this list.

Want to see private AI for your business?

A short KSeF Private demo (15 min). We will show local execution, control questions, source base and how BezChmury reduces the risk of hallucinations.

Join the beta list or book a demo →