Although Regulation (EU) 2024/1689 on Artificial Intelligence (hereinafter the RAI, or the Regulation) will not become generally applicable until 2 August 2026, several key dates that trigger the applicability of certain obligations must be marked on the calendar. One of those "save-the-dates" arrives in a few days, specifically on 2 February.
In accordance with Art. 113(a) of the RAI and its Recital 179, "taking into account the unacceptable risk associated with the use of AI in certain ways", some obligations of the Regulation will become applicable from 2 February 2025. Specifically, these are the prohibited AI practices (Chapter II of the RAI) and the General Provisions of the Regulation (Chapter I), which in practice translates into the enforceability of the AI literacy requirements (Art. 4 of the RAI).
What are “prohibited AI practices”?
As we know, the RAI takes a risk-based approach, in which risk is assessed not in terms of the use of AI in general, but in relation to specific purposes. In this case, what lawmakers have done is prohibit the use of AI systems for certain practices or purposes. The prohibited uses, set forth in Art. 5 of the RAI, are not easy to summarise given the number of nuances applicable to each one, but essentially they include:
- Using subliminal, manipulative or deceptive techniques, or exploiting vulnerabilities arising from a person's age, disability, etc., to alter the behaviour of a person or group in a way that causes them harm;
- Evaluating or classifying persons or groups based on their social behaviour or personality characteristics, such that they experience detrimental treatment in social contexts unrelated to those in which the data was generated or collected, or where this detrimental treatment is disproportionate to their social behaviour;
- Assessing the risk of a particular natural person committing a crime on the basis of profiling or considering certain personality traits (with exceptions, since AI systems may be used to support human assessment based on objective parameters of such risks);
- Creating facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- Inferring the emotions of people in the workplace and in educational institutions (with exceptions for medical or safety reasons);
- Using biometric categorisation systems that classify people on the basis of their biometric data to infer their race, political opinions, religious beliefs, sex life or sexual orientation, etc.;
- Using real-time remote biometric identification systems in public spaces for the purposes of law enforcement, with certain exceptions.
No AI system to which the Regulation applies may therefore be used for the aforementioned purposes as of 2 February 2025.
What are the AI literacy requirements?
Another obligation that takes effect is that of AI literacy. It requires providers of AI systems (those who develop them or place them on the market) and deployers (those who use them, under their own authority, for professional purposes) to adopt measures to ensure that their staff have a sufficient level of knowledge of how this technology works. There are no specific indications as to what these education and training measures should entail. Nevertheless, certain criteria are provided: for example, training should focus on the people who, on behalf of the provider or deployer, are responsible for operating the AI system, or on those who will use it. The technical knowledge, education, experience and training of these people, as well as the context in which the systems will be used, must also be taken into account.
Who does it affect? How to act?
In this case, it is a mandate not only for AI system providers but also, as mentioned, for deployers. Companies are made up of people, and the mistakes those people may make through being misinformed about how to work with these systems, or how to use them in their professional practice, can have significant consequences. This obligation is therefore aimed at avoiding or minimising human error. Although the Regulation does not say so explicitly, it seems clear that these measures should consist of implementing a programme of education and training sessions for the staff who will be in contact with and will use AI systems within each organisation, tailored to the criteria described above.
At PONS IP we understand that this is an ongoing obligation, which will require updates as the technology continues to develop, as well as new training for incoming staff, where appropriate. It will also be necessary to document the actions carried out in order to demonstrate compliance.
Violeta Arnaiz. Head of Intellectual Property, AI & Software