AI Literacy: What Businesses Need to Know Now

October 13, 2025

Naomi Foale and Alice Wallbank answer some of the questions generated by the literacy requirements set out in the EU AI Act.

What is AI literacy and does it affect our business?
“AI literacy” is a knowledge and training requirement under the EU AI Act which came into effect on 2 February 2025.

Article 4 of the Act requires businesses using AI systems to use their best efforts to ensure that staff and others operating systems on their behalf, including service providers, contractors and potentially customers, have a sufficient level of AI literacy to make “an informed deployment” of AI systems.

Like much EU law, it has ambitious extra-territorial effect. When the output of an AI system is supplied on a commercial basis and used in the EU (whether this is intended or not), a business is potentially in scope of the rule, regardless of where it is based. Locating staff outside the EU will not in itself make a difference.

How and when will this be enforced?
EU member states need to put in place enforcement mechanisms through national law. The deadline for completion of this was 2 August 2025. The AI Office has said it will be supporting compliance of developers and users “by the end of 2025”. 

The Commission notes in its FAQs that member state enforcement will not begin until 3 August 2026 (the day after the bulk of the Act becomes applicable). Such enforcement could in theory look back to when the obligation first arose, up to 18 months earlier. But in practice, organisations should be aware that standalone enforcement of the literacy obligation is unlikely to be a priority. It is more likely to be invoked – at least in the early days – where providers and deployers are being investigated for other infringements.

The Act is not primarily designed to create individual rights of action, but there are some indirect routes: customers, users or indeed anyone detecting an infringement (including of Art. 4) has a right to complain to the relevant market surveillance authority (Art. 85).

These routes are only intended to be a safety net: established national laws on liability will apply to harms caused by misuse of AI-enabled systems, as they do to any other potentially dangerous product. For example, and subject to the relevant laws in the applicable jurisdiction, non-compliance may enable claims for breach of statutory duty or for breach of contract, where compliance is a contract term.

In addition, given the known risks of AI, particularly with regard to ethics, bias and energy use, AI literacy will also be linked to wider corporate responsibility obligations. Overall, it will be hard for an organisation to justify not thinking about staff and supply chain use of AI systems, wherever they are based.

Are there standards or minimum levels?
No. There is no established standard, so businesses are free to approach the requirement as they consider most appropriate. There is no certification system for training providers, at least for the moment. 

At the same time, almost every EU-facing business will have to do something, as there is no minimum level of AI engagement to trigger the obligation. As an example, in its February 2025 webinar, the Commission said that “any company using ChatGPT will require AI literacy training.” The FAQs add that this would apply to all staff using LLMs or translation tools whose output is to be used in the EU.

So where to start? Standards of AI literacy must take into account the technical knowledge, experience, education and training of the person in question, and the proposed use of the system. As you would expect, this means that high-risk uses will require higher levels of understanding. As the Commission put it, literacy goes “hand in hand” with the other requirements of the Act, so it will reflect the nature of the system and deployment in question, and organisations should take a risk-based approach.

The standards of AI literacy should vary within an organisation to reflect the different business functions and the nature of their use of AI systems. For example, human resources teams using AI in recruitment will need to understand the risk and impact of bias in that context. By contrast, a bank’s fraud detection team may require an understanding of how AI can successfully identify fraudulent activities, as well as its blind spots, which require human oversight and review. On the other hand, there may be teams which do not interact with AI at all (for example, cleaners or maintenance staff), in which case the standards for AI literacy will be low or non-existent.

The law requires “literacy”, which implies an active and practically useful understanding. Although the duty is a proportionate and tailored one, it requires “best” measures: going through the motions is unlikely to fulfil the brief. The Commission’s view is that “best” does not mean perfection but is likely also to be linked to regularity of engagement. 

The FAQs suggest that AI literacy should include an understanding of the AI Act and principles of ethics and governance. They also note that AI literacy will support the transparency and human oversight requirements in Articles 13 and 14 of the Act.

There is no documentation requirement as such, although this will be an important demonstration of accountability. 

Opportunities as well as risks
An important and often overlooked aspect – highlighted by the Commission – is that the requirement is to “gain awareness about the opportunities and risks of AI and possible harm it can cause” (emphasis added). This indicates an active requirement to promote AI use. As President Trump recently lamented in his latest White House order to the federal government, it is not resources to develop AI that are lacking in the US, but willingness to engage on the part of the business community outside Big Tech. The EU is attempting to avoid this by weaving AI adoption into the fabric of product compliance law.

Other important global players are taking similar steps. Japan passed an AI development law in May 2025 with a direct requirement for the public to “deepen their understanding and use of AI technologies”. On the other side of the risks/opportunities coin, the Cyberspace Administration of China recently ordered that IT departments should promote AI literacy as part of a “special campaign to clean up and rectify the abuse of AI technology”. 

Should AI literacy be linked to a wider AI governance programme?
Strictly, it is a standalone requirement. However, organisations in scope of the rule will almost certainly have an emerging need for wider AI compliance. Recognising this, the Dutch AP became the first data protection authority in the EU to issue guidance on AI literacy, in January 2025, encompassing a wide range of AI governance goals. It outlined a multi-year plan for organisations involving AI mapping, identifying key roles, prioritising risks, training, governance policies, monitoring, and ongoing evaluation.

As well as governance, the Commission has confirmed that the AI literacy requirement is likely to generate new contractual obligations across the supply chain, at least for high-risk systems. It should be noted that the literacy rules apply only to developers and deployers of AI systems, not to those of the models which underlie them. This no doubt reflects both the greater likelihood of existing specialist knowledge among model providers, and perhaps also the unwillingness of the Commission to oversee the requirement itself, given that its regulatory remit only covers General Purpose AI model providers.

Can we see examples of what other companies are doing?
Yes. In February 2025, the Commission launched a digest of practices by participants in the AI Pact, providing practical examples of approaches to AI literacy across industry. Now comprising 28 case studies, the digest covers companies operating in sectors including insurance, online booking, healthcare, ICT, construction and energy.

How can we approach this in practice?
Your approach in practice will need to be informed by the specific challenges associated with instilling AI literacy at different levels of your organisation. C-suite individuals making significant decisions on AI must have a strong working knowledge of its risks and opportunities, but encouraging time-poor individuals to prioritise AI literacy may prove practically difficult. For operational-level teams, the challenge will be to identify priorities and levels of existing knowledge. AI may be a completely new field of expertise for some, so sufficient resource needs to be devoted to their training to ensure effective learning.

The starting point will be to check what training on existing AI systems, or on AI generally, is already taking place in your organisation’s teams.

An AI literacy programme for those deploying higher-risk systems will not only fulfil the AI literacy obligation but also help manage other legal risks, including under the AI Act. Common triggers will be new or upgraded AI-enabled systems for worker monitoring or recruitment, credit scoring, health or life insurance deployments, or involvement with biometric or other high-risk systems.

It may be that you have supplier information about how the AI systems you use work, or such information may be publicly available if you are using a commonly available system. This can be developed into helpful tools for internal training.

This may well involve looking at whether your suppliers explain clearly how your AI systems work in practice, what data they use and the possible risks they pose. If they cannot do this, then you may struggle to use them in a compliant way.

Reviewing contract terms with suppliers of AI systems will help you to understand possible product changes, ensure conformity and check respective liabilities.

If you are delivering in-scope products and services to your customers, you are likely to be asked the same questions. Should your contract terms be clearer about who delivers what? Risks will be more acute for any business which acts as a supplier of AI systems, rather than of AI-enabled output. Obligations may be triggered by modification or rebranding of existing systems.

As the Dutch AP found, in practice it will be impossible to undertake AI literacy usefully without some look at the wider governance picture. AI literacy programmes will sit beside AI policies: for example, on staff use of shadow AI, and (for generative AI) rules around input and output information. It would make sense to consider policies together with literacy, as new systems are integrated into a business.

In the same vein, consider your impact assessments. Data protection impact assessments in relation to AI systems (almost certainly required for systems involving personal data, both in the UK and the EU) will assist training, as well as being a good starting point for logging compliance. The same exercise will help ensure your DPIAs reflect regulator guidance on AI systems. This will also apply to fundamental rights impact assessments as they become mandatory for EU public sector deployments, and for credit scoring or life and health insurance.

Alice Wallbank is Senior Professional Support Lawyer in Shoosmiths’ privacy and data team, providing specialist research, training and insight on global data laws. She was formerly principal legal counsel for the cyber and intelligence division of QinetiQ Plc, advising on early AI projects involving novel data use, security and supply chain control.

Naomi Foale is an Associate in the Commercial Disputes team at Bristows. She has a particular interest in IT disputes and the impact of AI on legal practice.

This article is also available in the special AI issue of Computers & Law, which is available to download here.
