‘World’s first’ AI regulation agreed at European level
Political agreement has been reached between the European Parliament and Council of the European Union on the Artificial Intelligence Act, which has been hailed as “historic” and a “global first”.
The provisional deal for the EU AI Act, agreed in December 2023, will see rules on AI established, with safeguards, limitations, bans, and consumer rights put in place, and fines of up to 7 per cent of a company’s global turnover for those who violate the agreed laws.
The agreed laws for AI will include “guardrails” for general AI systems to “account for the wide range of tasks AI systems can accomplish and the quick expansion of its capabilities”. These guardrails will require all general-purpose AI systems, and the general-purpose AI models on which they are based, to adhere to transparency requirements proposed by the European Parliament during negotiations, such as drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries of the content used for AI training.
High-impact general-purpose AI models with systemic risk will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on energy efficiency.
Other AI systems classified as high-risk due to their “significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law” will be subject to mandatory fundamental rights impact assessments, among other requirements that also apply to the insurance and banking sectors. Systems used to influence the outcome of elections and voter behaviour will also be classified as high-risk, and citizens will have the right to launch complaints against AI systems and receive explanations about decisions based on high-risk AI systems. Limited-risk AI systems, on the other hand, will only have to comply with minimal transparency requirements “that would allow users to make informed decisions”.
With explicit reference to the “potential threat to citizens’ rights and democracy”, the Parliament and Council of Ministers also agreed to ban:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent their free will; and
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
The Parliament and Council of Ministers agreed on a series of exemptions for the use of remote biometric identification (RBI) systems in public spaces for law enforcement, “subject to prior judicial authorisation and for strictly defined lists of crime”. Post-remote RBI would be used strictly for the targeted search of a person convicted or suspected of having committed a serious crime. Real-time RBI will be subject to strict conditions, with its use limited in time and location, for the purposes of: targeted searches for victims; prevention of a specific and present terrorist threat; or the localisation or identification of a person suspected of having committed one of the specific serious crimes mentioned in the regulation, such as murder, trafficking, or terrorism.
These exemptions have proved the most controversial aspect of the legislation, with Amnesty International stating that the limited ban on AI facial recognition sets a “devastating global precedent”. Ella Jakubowska, senior policy advisor for European Digital Rights, stated: “It is hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc. Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm. Our fight against biometric mass surveillance is set to continue.”
Failure to comply with the new regulations will result in companies being fined, with penalties ranging from €7.5 million, or 1.5 per cent of global turnover, up to €35 million, or 7 per cent of global turnover, depending on the infringement and the size of the offending company.
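To make the scale of those tiers concrete, here is a minimal sketch of how such a ceiling might be worked out for a company of a given size. It assumes the convention, common in EU penalty regimes, that the applicable ceiling is the fixed amount or the percentage of worldwide annual turnover, whichever is higher; that convention, the function name `fine_ceiling`, and the example turnover figure are illustrative assumptions, not part of the agreed text quoted above.

```python
# Minimal sketch: estimates the fine ceiling for one penalty tier, assuming the
# ceiling is the fixed amount or the share of worldwide annual turnover,
# whichever is higher (an assumed convention, not quoted from the Act).

def fine_ceiling(fixed_amount_eur: float, turnover_share: float,
                 global_turnover_eur: float) -> float:
    """Return the assumed maximum fine for a single penalty tier."""
    return max(fixed_amount_eur, turnover_share * global_turnover_eur)

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical company with €2bn global turnover
    # Top tier quoted above: €35m or 7 per cent of global turnover.
    print(f"Top tier:    €{fine_ceiling(35_000_000, 0.07, turnover):,.0f}")
    # Bottom tier quoted above: €7.5m or 1.5 per cent of global turnover.
    print(f"Bottom tier: €{fine_ceiling(7_500_000, 0.015, turnover):,.0f}")
```

For the hypothetical €2bn company, the turnover percentage dominates in both tiers (€140 million and €30 million respectively); for smaller firms, the fixed amounts would set the ceiling instead.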
The legislation, which has been in the offing since early 2021, passed through multiple drafts as it made its way back and forth through the political arms of the European bloc. With all member states having endorsed the Act in February 2024, it will enter into force during 2024 and apply 24 months later, with exceptions for specific provisions: prohibitions on unacceptable-risk AI systems will apply after six months, while obligations for high-risk systems will apply after 36 months.
The Act has been delayed in the past due to the constant evolution of AI technologies. For example, the rise of ChatGPT forced a reconsideration of the legislation as originally drawn up in April 2021, which had not accounted for such applications.
In January 2024, Euractiv revealed that the European Commission would adopt a decision establishing the European Artificial Intelligence Office as part of the reforms envisioned under the new legislation. The office is due to act within the Commission as the centralised authority for the enforcement of the AI Act and for AI development, monitoring the progress of initiatives such as GenAI4EU.