Six months after its approval on 21 May 2024, following the “European trilogue” (Parliament, Council and Commission), the Regulation on Artificial Intelligence entered into force on 1 August 2024.
It is no coincidence that this issue gave rise to one of the most complicated negotiations within the European institutions in living memory. Those of us who witnessed the birth of the Internet believe that, 30 years later, we are facing another new technology that will affect every facet of our lives.
The Regulation governs the placing on the market and use of artificial intelligence systems. It is the first regulation on AI anywhere in the world and introduces a series of obligations for actors in the value chain (providers, importers, deployers, etc.). It also creates supervisory authorities and promotes the establishment of regulatory sandboxes (controlled testing environments).
The Regulation is structured around the level of risk posed by new AI algorithms and systems. It classifies the risk to the rights and freedoms of European citizens as unacceptable, high or limited, and imposes correspondingly greater obligations on actors in the value chain as the risk increases.
The Regulation on AI is now in force, but its obligations will apply progressively.
The general rule is that the Regulation on AI will apply 24 months after its entry into force, i.e. from 2 August 2026. However, there are significant exceptions to the application of the obligations it contains that need to be taken into account:
- General rule: obligations apply 24 months after entry into force of the text.
- Prohibited AI systems: 6 months after entry into force of the Regulation on AI.
- Rules for general-purpose AI, including governance: 12 months after entry into force of the text.
- Obligations for high-risk systems: as a general rule 24 months, and 36 months for systems listed in Annex I of the Regulation on AI.
To ensure the efficacy of and compliance with the provisions of the Regulation on AI, Member States may set up bodies to supervise its application. Spain was a pioneer in this respect, creating the Spanish Agency for the Supervision of Artificial Intelligence (AESIA).
One of the challenges this regulation faced was the emergence, during the months in which it was being drafted, of so-called generative AI (the now-famous ChatGPT, Copilot, Gemini, etc.) and the resulting need to also regulate “general-purpose” AI systems.
In the spirit of the WAIQ initiative (which promotes debate on Web3, AI and Quantum Computing), this Regulation needs to be approached with a holistic view, from both legal and ethical perspectives.
Although the Regulation could have gone into more depth on this point, it has had to balance providing transparency about the content on which artificial intelligence (AI) is trained with allowing AI providers to protect their own intellectual property rights and trade secrets.
Accordingly, among the specific obligations that the Regulation imposes on providers of general-purpose AI are those aimed at ensuring respect for the intellectual property rights of third parties.
In conclusion, all companies and organisations will be affected, to a greater or lesser extent, by this Regulation. Most are waiting to see how it will be implemented, and some view it simply as an evolution of personal data protection rules. We see it as a more complex process, one that involves many departments within a company and requires careful attention to all Intellectual Property-related aspects, with the support of specialists in the field.
Luis Ignacio Vicente del Olmo | Strategic Advisor at PONS IP