Nov 28, 2024
The World's First AI Law Comes into Effect in the EU
Artificial intelligence
Since the introduction of generative artificial intelligence (AI), experts have been in agreement: the rapid progress of this technology poses complex challenges, particularly in terms of ethics, security and regulation.
Taking the world by surprise, or nearly so, this form of AI did not give authorities enough time to prepare adequately or to regulate its use. Today, governments, businesses, and researchers must collaborate to determine the future of AI and the laws that will govern it.
In Canada, while there are guidelines that companies can follow if they wish, no official law has yet been established on this matter. In fact, the first regulation to emerge globally is the European Union's (EU) AI Act, which came into effect on August 1, 2024.
This legislation could serve as a reference for all future laws worldwide. Interested in discovering what the pioneers in AI regulation have decided? Curious to know which of these measures is causing the most buzz? Here are the main points, summarized.
Classification according to risk level
The law begins by classifying AI systems into four categories: unacceptable, high, limited and minimal risk. Systems posing an unacceptable risk, i.e. those that go against the EU's fundamental values and undermine individuals' fundamental rights, are simply prohibited. These include social scoring systems, in which a state could use remote biometric recognition to monitor citizens and assign them a score based on their behaviour. Another example would be an AI system that undermines the right to a fair trial by predicting an individual's likelihood of reoffending and influencing court decisions.
The majority of the law deals with AI models considered high-risk. What does this category encompass? Essentially, anything involving critical decisions: major infrastructure (water, heating, electricity, traffic), education and assessment, recruitment and candidate selection, eligibility for essential benefits and services, risk assessment and pricing for life and health insurance, evaluation of emergency calls, triage of patients in hospitals, review of asylum and visa applications, and even the interpretation of laws in a judicial setting. In short, anything that could cause an ethical conflict if the AI used were biased rather than impartial.
Cases considered minimal risk, such as the majority of AI applications on the market today, are subject to little or no regulation. As for limited-risk systems, such as conversational agents ("chatbots"), they are subject to lighter standards than high-risk models.
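To make the classification concrete, here is a minimal sketch, in Python, of how a company might catalogue its own AI use cases against the four tiers described above. The use-case names and tier assignments are purely illustrative assumptions drawn from the examples in this article; they are not part of the law and are not legal advice.

```python
from enum import Enum

class EURiskTier(Enum):
    """The four risk tiers described in the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (risk management, documentation, human oversight)"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "little or no regulation"

# Hypothetical internal inventory: each entry maps an in-house AI use case
# to the tier it would most plausibly fall under, based on the examples above.
ai_inventory = {
    "social-scoring-of-citizens": EURiskTier.UNACCEPTABLE,  # prohibited practice
    "cv-screening-for-recruitment": EURiskTier.HIGH,        # recruitment / candidate selection
    "customer-support-chatbot": EURiskTier.LIMITED,         # conversational agent
    "email-spam-filter": EURiskTier.MINIMAL,                # everyday, low-stakes application
}

for use_case, tier in ai_inventory.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

An inventory of this kind is simply a starting point for the kind of risk classification exercise discussed at the end of this article; the real assessment has to be made against the legal text itself.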
High-Risk AI Models and Their Obligations
To begin, it is crucial to understand that all the requirements set by this law fall on the developers of high-risk AI. To comply with the law, they must implement a risk management system for the entire lifecycle of the project. This involves several procedures to ensure that the AI model respects human rights and does not make biased decisions. Developers are therefore required to anticipate everything that could go wrong and create a checklist to manage these risks.
In addition, they must ensure data governance and security, prepare technical documentation demonstrating the project's compliance, and ensure human oversight of the system to guarantee the quality of its outputs.
A Special Clause for General-Purpose AI
This primarily refers to generative artificial intelligence, such as ChatGPT and Copilot. These models, in addition to complying with the laws for high-risk systems, will need to establish a copyright policy and publish a detailed summary of the content used to train the model.
This clause is particularly interesting yet polarizing: currently, generative AI models on the market meet these two criteria to varying degrees. Transparency from companies about the data used to train their models is virtually nonexistent, leading to suspicions that they were trained using copyrighted material without compensating the creators.
To be clear, none of the major companies behind generative AI models has confirmed such practices. And honestly, it’s understandable why they might be hesitant to address this, given the ongoing lawsuits concerning copyright and AI that we keep hearing about. Would you be willing to say goodbye to ChatGPT if it turned out this model was built using content for which the company did not hold the rights?
Moreover, the AI industry, supported by a handful of government members, lobbied intensely in Europe to reduce the obligations imposed on general-purpose AI (GPAI), out of concern that the law would hamper Europe's ability to develop AI and prevent it from remaining competitive with the United States and China. This debate between technology and ethics is far from over!
What are the implications for us?
The EU AI Act came into effect on August 1, 2024. The European AI Office, a body tasked with verifying the compliance of AI providers and handling complaints, has been set up to enforce it. Depending on the risk level of their AI models, companies will have between 6 and 36 months to comply with the new law.
Already, big names like Meta and Apple have announced that some of their models will simply not be available in Europe. How will this situation develop? Will the fear of losing access to cutting-edge technology prevent regulators around the world from supporting the EU in its decisions?
While we wait for our governments to decide, what should our companies do to prepare? Putting appropriate data governance measures in place is certainly essential. Other steps that can be taken upstream include classifying AI models according to the risk levels defined by the EU, raising employee awareness, and identifying use cases. Above all, I advise you to keep up to date as the situation evolves!