Artificial Intelligence and Human Resources in the EU - Part 2: AI Literacy - Employer AI Literacy Obligations under the EU AI Act
Client Alert | 7 min read | 05.06.26
1. The AI Literacy Obligation under the EU AI Act
1.1. What is the Scope of the AI Literacy Obligation?
The EU AI Act defines ‘AI literacy’ as the skills, knowledge and understanding that enable the informed use and operation of AI systems and raise awareness of the opportunities, risks and possible harm that AI systems may present. The ultimate purpose is to ensure that staff (and other relevant individuals) are able to take informed decisions in relation to AI, for example by understanding how to interpret AI output and decision-making processes and their impact on natural persons.
The legal obligation under the EU AI Act is for employers to “take measures to ensure, to their best extent, a sufficient level of AI literacy”. It requires in particular that the level of AI literacy aligns with the technical knowledge, experience, education, and training of relevant staff as well as the context in which the AI system is used. In short, while the legal compliance threshold appears high, there is also an element of proportionality that can be applied, specifically as organisations must take AI literacy measures “to their best extent”.
1.2. Who Falls in Scope?
The EU AI Act requires all providers and deployers of AI systems to take concrete measures to ensure that their staff — and any other person dealing with AI systems on their behalf — have a sufficient level of AI literacy.
The obligation is therefore not limited to employees: the notion of "persons dealing with AI systems on behalf of" the deployer also covers external contractors, service providers and customers, to the extent that they are involved in the operation or deployment of an AI system. In practice, people working for a service provider or contractor need the appropriate AI skills to fulfil the task in question. For instance:
- Staff employed by a customer call centre engaged by organisation X, who operate organisation X’s AI-based call triaging tool on its behalf, must be trained on how to use the triaging tool, what the risks are, and how to mitigate them (e.g., appropriate human review).
- Staff employed by a customer of organisation X, who purchased and is now deploying organisation X’s AI-based CV screening tool, must understand how to use the tool, what the risks are (e.g., bias in screening), and how to mitigate them.
1.3. How Can Employers Comply?
First and foremost, it is important to note that there is no one-size-fits-all approach. Although Article 4 of the EU AI Act lays down the basic principles for compliance, it does not specify how organisations should comply in practice or what compliance might look like.
The AI Office, tasked under the EU AI Act with coordinating and enforcing AI policy, has issued some guidance to aid interpretation of the AI literacy principle in the form of a (non-binding) Q&A as well as a webinar and a living AI literacy repository (where AI literacy practices adopted by other organisations of various sizes and industries can be consulted). Importantly, the AI Office does not suggest or impose any formalistic requirements, mandatory training formats or certification. Instead, it calls for the most appropriate measures based on each target group's level and type of knowledge, as well as the context and purpose of the AI systems used. Given differences between AI systems and varying levels of knowledge and experience, different levels of training or learning approaches might be appropriate. However, although AI literacy does not necessarily require training, simply referring staff to an AI system’s instructions for use may be ineffective and insufficient in particular where those instructions are drafted in technical terms that are not accessible to all staff members.
The AI Office has identified four key steps in building an AI literacy program:
(a) Ensure a general understanding of AI within the organisation: What is AI? How does it work? What AI systems are used in the organisation? What are their opportunities and risks?
(b) Clarify the role of the organisation: Is it a provider of AI systems or simply a deployer of solutions developed by third parties?
(c) Consider the risk level of the AI systems provided or deployed: What do employees need to know when dealing with such systems? What risks must they be aware of, and what mitigation measures must they know about?
(d) Build AI literacy actions based on this analysis, taking into account differences in technical knowledge, experience, education and training across staff groups, as well as the context in which the AI systems are to be used, including the sector, purpose, and persons on whom the systems are to be used.
The AI Office stresses that these four steps necessarily incorporate legal and ethical dimensions: Understanding of the applicable regulatory framework, particularly the provisions of the AI Act, is encouraged throughout.
In most instances, AI training tailored to staff roles and knowledge levels, followed by a short, documented competency test is likely appropriate. In practice, organisations have implemented AI literacy through a wide range of means beyond formal training, including internal guidance documents and codes of conduct, AI-specific induction sessions, knowledge hubs and online portals, communities of practice, and risk assessment frameworks, as documented in the Commission’s Living Repository on AI Literacy Practices[1].
For documentation and compliance purposes, organisations should make sure to keep an internal record of all training and awareness-raising initiatives they have taken, as evidence of compliance in the event of an inspection.
In relation to the following two common use cases, the AI Office has also helpfully clarified that:
- In organisations whose employees are permitted to use generative AI tools (e.g., Large Language Models or “LLMs”) — including publicly accessible, consumer-grade tools — a minimum level of awareness-raising among such employees is still required wherever professional use is identified, even where the organisation is not a formal deployer of such tools within the meaning of the EU AI Act. This obligation pertains in particular to the risks inherent in such systems, such as their propensity to generate inaccurate content presented with apparent confidence (the so-called "hallucination" phenomenon); and
- Having employees (or certain employees) with a degree in or prior experience with AI does not automatically exempt an employer from its obligations. The answer depends on the AI system in question and the employee’s specific qualification. More specifically, the organisation must examine whether these employees are familiar with the legal and ethical aspects of AI applicable to their tools, and whether their knowledge remains up-to-date — the swift pace of technological developments could indeed render their prior qualifications obsolete.
2. Enforcement
The supervision and enforcement of the AI literacy obligation under the EU AI Act does not sit with the EU AI Office, but with the national (EU Member State) market surveillance authorities. Although the AI literacy obligation has applied since 2 February 2025, it will not be enforceable until 3 August 2026, i.e., the deadline by which EU Member States must designate their national market surveillance authorities under the EU AI Act and lay down the applicable penalties for non-compliance. The AI Act itself does not provide for specific penalties for non-compliance with Article 4 but leaves it to Member States to determine them, meaning penalties can, and likely will, diverge from one EU Member State to another.
Penalties should be effective, proportionate and dissuasive and may include administrative fines, warnings, and non-monetary measures.
The AI Act also lays down a series of factors that authorities should consider, as appropriate, when imposing and deciding on the amount of administrative fines, such as the nature and gravity of the infringement, the size, turnover and market share of the operator committing the infringement, the degree of cooperation with the national competent authorities to remedy the infringement, and whether the infringement was committed intentionally or negligently (Article 99(7)(a)-(j)). Importantly, the AI Office has noted in its guidance that enforcement action is more likely where there is evidence of an incident attributable to a lack of adequate training or guidance provided to staff or other relevant persons.
It is worth noting that the EU AI Act provides for an express right for all natural or legal persons to file a complaint with a market surveillance authority where they have grounds to consider that an infringement of the AI Act has occurred, meaning investigations can be initiated either at the market surveillance authority’s own initiative or in response to a complaint.
As regards private enforcement, the AI Act does not establish a private right of action for individuals or legal persons, but compensation claims can still be brought before the national courts of the EU Member States in accordance with national civil procedural law. This is particularly relevant now that the EU Product Liability Directive has been updated to more expressly cover damage caused by AI.
3. Practical Recommendations
Organisations commercialising, developing or deploying AI in the EU should consider:
- Mapping all AI systems used or commercialised within and by the organisation and identifying the individuals — employees, contractors, customers — who operate them, in order to determine the AI literacy measures appropriate to the risks, roles and use cases.
- Adopting a differentiated approach, taking into account the roles and levels of knowledge of each group, the systems used and the context of deployment, including for employees already trained in AI, whose knowledge may rapidly become outdated.
- Documenting all actions taken and relevant decision-making — including training records, awareness-raising initiatives, materials used and choices made in relation to training methodology — in order to demonstrate compliance with Article 4 in the event of an inspection.
Crowell has experience with developing and providing AI literacy training across a range of different industries, sectors and AI knowledge levels, and can assist in developing tailored AI literacy training modules or point employers to relevant AI literacy resources.
For an overview of the full range of topics covered in this alert series, see the first installment: Artificial Intelligence and Human Resources in the EU – A 2026 Legal Overview.
For any questions regarding how the EU AI Act may impact your activities, please do not hesitate to contact our team.
[1] https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy.