Understanding the EU AI Act
Thierry Boulangé, Deputy Head of Unit at the European Commission’s Directorate-General for Communications Networks, Content, and Technology, outlines the EU AI Act’s goals and implementation framework.
The EU AI Act, which entered into force in August 2024, is the world’s first comprehensive legislation regulating AI. Its goal is to ensure that AI systems are safe, uphold fundamental rights, and foster innovation.
Boulangé details the Act’s tiered, risk-based framework, which is designed to harmonise rules across EU member states and prevent a fragmented regulatory landscape.
The Act categorises AI systems into four levels of risk:
- Unacceptable risk: AI applications deemed harmful, such as social scoring systems or those designed to manipulate individuals, are banned outright.
- High risk: AI used in critical areas, such as recruitment, justice, law enforcement, and essential public services, must meet stringent requirements to protect safety and rights.
- Limited risk: Systems interacting with humans (e.g., chatbots) must meet transparency obligations to ensure users are aware they are engaging with AI.
- Minimal or no risk: Most AI systems fall into this category and face no specific regulatory obligations but can voluntarily adhere to codes of conduct.
The inclusion of rules for general-purpose AI models, such as the foundation models that power generative AI systems, was a key addition during negotiations. Developers of these models must provide documentation to ensure responsible integration and use, with enhanced obligations for models that pose systemic risks.
Governance and implementation
The Act establishes a robust governance structure to oversee its enforcement and implementation. High-risk AI systems will be monitored by national authorities, while the European AI Office, recently established, will handle cross-border and general-purpose AI cases. Complementary entities, such as the AI Board and the Advisory Forum, will enable coordination among member states, regulators, and stakeholders.
The implementation timeline reflects a staged approach:
- February 2025: Prohibitions on unacceptable AI practices apply.
- August 2025: Obligations for general-purpose AI models apply.
- 2026–2027: Obligations for high-risk AI and transparency requirements will gradually roll out, culminating in full implementation by 2027.
For Irish stakeholders, Boulangé states that this phased rollout provides time to prepare for compliance while ensuring a smooth transition for businesses and innovators.
Supporting innovation amid regulation
A cornerstone of the EU AI Act, Boulangé says, is its emphasis on fostering innovation. To support developers, especially small and medium-sized enterprises (SMEs), the European Commission has launched initiatives like the AI Innovation Accelerator and regulatory sandboxes. These tools offer technical guidance and testing environments to help businesses align with the Act’s requirements while remaining competitive.
He says that standardisation will also play a crucial role. Technical standards for high-risk AI systems are under development in collaboration with European and international bodies, ensuring consistency and adaptability as technology evolves. These standards will provide clarity for developers, helping them meet regulatory requirements efficiently.
Collaborative stakeholder engagement
Asserting that the Commission prioritises inclusivity in its implementation efforts, Boulangé says that the newly launched AI Pact, which includes over 120 organisations, aims to foster collaboration across industries and geographies. “By engaging businesses, civil society, and regulatory bodies, the Pact encourages the sharing of best practices and provides early feedback to refine the implementation modalities of the regulatory framework,” he says.
Boulangé highlights the importance of creating pathways for smaller players to contribute to and benefit from these processes. This is particularly relevant for Ireland, where SMEs form the backbone of the economy and are often at the forefront of AI-driven innovation.
International implications
The EU AI Act’s influence extends beyond Europe. Boulangé notes ongoing international collaboration, such as the recent Council of Europe treaty on AI, which incorporates global partners like the US, Japan, and Canada. Ireland’s strategic position as a tech hub places it at the intersection of EU and global AI ecosystems, offering opportunities to shape and benefit from these frameworks.
What this means for Ireland
Boulangé states that the Act also incentivises innovation, offering support for startups and SMEs to lead in the development of trustworthy AI.
For public and private leaders, the immediate priorities include:
- Understanding the Act: Familiarise teams with its risk-based framework and upcoming obligations.
- Engaging in standardisation: Participate in developing technical standards to ensure the needs of Irish companies are represented.
- Leveraging support mechanisms: Take advantage of EU-funded initiatives like regulatory sandboxes and innovation accelerators.
Concluding, Boulangé asserts that, as the AI landscape continues to evolve, the AI Act will position the EU as a “leader in shaping ethical and innovative AI systems”.