The EU AI Act is reshaping how AI is built and deployed globally. Learn about risk categories, compliance deadlines, and what it means for your AI projects.
The European Union's AI Act is the world's first comprehensive legal framework for artificial intelligence, and in March 2026, it is finally becoming real. After years of negotiation, delays, and industry lobbying, enforcement has begun. Some provisions are already active, others take effect later this year, and the implications extend far beyond Europe's borders.
If you build, deploy, or use AI systems, the EU AI Act affects you. Even if your company is not based in Europe, if your AI system is placed on the EU market or its output is used within the EU, you are in scope. This guide covers the current state of the law, what is banned, what requires compliance, and what developers and businesses need to do right now.
The EU AI Act was first proposed by the European Commission in April 2021. It went through extensive revision, particularly after the explosion of generative AI in 2023 forced legislators to add provisions they had not originally anticipated.
Key milestones:

- April 2021: The European Commission publishes its proposal for the AI Act.
- December 2023: The European Parliament and Council reach political agreement.
- August 2024: The Act enters into force.
- February 2025: Bans on unacceptable-risk AI begin to apply.
- August 2025: Original deadline for general-purpose AI (GPAI) obligations, since pushed back.
- August 2026: Most remaining provisions, including the high-risk requirements, are scheduled to apply.
The delays are significant. The original timeline called for GPAI compliance by August 2025, but intense lobbying from major AI companies, along with pressure from several open-source advocacy groups, pushed the deadline back. As of March 2026, the compliance framework for general-purpose AI models is still being finalised, with full enforcement now expected in mid-2026.
Key Takeaway: The EU AI Act is real and enforceable, but its implementation has been slower and more complex than originally planned. The delays do not mean you can ignore it. They mean you have a narrowing window to prepare.
The core of the EU AI Act is a risk-based classification system. Every AI system falls into one of four categories, and the requirements scale with the risk level.
The first category, unacceptable risk, covers AI applications that are prohibited outright. As of February 2025, the following are illegal in the EU:

- Social scoring of individuals by or on behalf of public authorities
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- Emotion recognition in workplaces and schools
- AI that uses subliminal or manipulative techniques to distort behaviour in harmful ways
- AI that exploits vulnerabilities related to age, disability, or social and economic situation
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Predictive policing based solely on profiling or personality traits
- Biometric categorisation systems that infer sensitive attributes such as race, political opinions, or sexual orientation
In March 2026, the EU also moved to ban AI applications that generate non-consensual intimate imagery (so-called "nude apps"). This ban, prompted by a surge of these applications targeting minors, carries some of the Act's most severe penalties.
The second category is high risk, covering AI used in areas such as:

- Critical infrastructure (energy, transport, water)
- Education and vocational training (exam scoring, admissions)
- Employment and recruitment (CV screening, candidate ranking)
- Access to essential services (credit scoring, insurance pricing, public benefits)
- Law enforcement
- Migration and border control
- Administration of justice and democratic processes

AI systems in these categories must meet strict requirements before they can be deployed.
High-risk systems must maintain technical documentation, implement risk management systems, ensure data quality, provide human oversight mechanisms, and register in the EU database before deployment.
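To make the record-keeping side of this concrete, here is a minimal Python sketch of an append-only decision log that captures enough context to audit an automated decision later. The schema, file format, and function names are illustrative assumptions, not anything prescribed by the Act.

```python
# Minimal sketch: append-only audit log for a high-risk system's decisions.
# Schema and storage choices are assumptions for illustration only.
import hashlib
import json
import time

LOG_PATH = "decision_log.jsonl"

def log_decision(model_version: str, inputs: dict, output: str) -> None:
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the inputs so the log can attest to what the model saw
        # without storing raw personal data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("screening-model-2.1", {"cv_id": "C-88"}, "shortlist")
```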
AI systems with limited risk have transparency obligations. This primarily means:

- Telling users when they are interacting with an AI system (chatbots, voice assistants)
- Labelling AI-generated or manipulated content (deepfakes, synthetic audio and video)
- Disclosing when emotion recognition or biometric categorisation is in use (where permitted at all)
The fourth category, minimal risk, covers the vast majority of AI applications (spam filters, AI in video games, inventory management); these face no specific requirements under the Act.
The provisions most relevant to the current AI landscape are the rules for general-purpose AI models, which include large language models like GPT, Claude, Gemini, Llama, and Mistral.
A general-purpose AI model is one that can perform a wide range of tasks, regardless of how it is placed on the market. This explicitly includes foundation models and large language models.
Every GPAI provider must:

- Maintain up-to-date technical documentation for the model
- Provide information and documentation to downstream providers who build on the model
- Put in place a policy to comply with EU copyright law, including respecting opt-outs from text and data mining
- Publish a sufficiently detailed summary of the content used for training
Models classified as having "systemic risk" (currently defined as models trained with more than 10^25 FLOPs of compute, though this threshold is under review) face additional requirements:

- Perform state-of-the-art model evaluations, including adversarial testing (red-teaming)
- Assess and mitigate systemic risks
- Track, document, and report serious incidents to the AI Office
- Ensure adequate cybersecurity protection for the model and its infrastructure
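For a sense of scale, here is a rough back-of-envelope check against that compute threshold, using the common approximation that transformer training costs about 6 FLOPs per parameter per training token. The numbers and function names are illustrative, and the threshold itself may change.

```python
# Rough check against the AI Act's systemic-risk compute threshold.
# Uses the common "6 * parameters * training tokens" approximation
# for transformer training compute; both are assumptions, not law.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold as described above, under review

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Back-of-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

def is_presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.3e24, below threshold
print(is_presumed_systemic_risk(70e9, 15e12))            # False
```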
Key Takeaway: If you are building on top of a foundation model (using an API from OpenAI, Anthropic, Google, or others), the GPAI provider handles most compliance. But if you are fine-tuning, deploying, or distributing models, you may have your own obligations.
One of the most contentious aspects of the EU AI Act is its treatment of open-source AI. The Act provides exemptions for open-source models, but these exemptions are narrower than many in the community hoped.
Open-source GPAI models are exempt from most GPAI requirements if:

- The model is released under a free and open-source licence that allows access, use, modification, and redistribution
- The model's parameters, including weights, architecture information, and usage documentation, are made publicly available
- The model is not monetised
- The model is not classified as posing systemic risk
Even open-source models must:

- Comply with the copyright policy requirements, including respecting text-and-data-mining opt-outs
- Publish a sufficiently detailed summary of the training data
For projects like HuggingFace's open model ecosystem, Llama from Meta, and Mistral's open-weight models, the key question is the training data transparency requirement. Providing detailed summaries of training data is a significant burden for models trained on internet-scale datasets, and the specifics of what constitutes a "sufficiently detailed summary" are still being debated.
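As a simplified illustration of what such a summary might aggregate, the sketch below rolls per-dataset metadata up into a category-level overview. The field names and categories are assumptions, since what counts as "sufficiently detailed" is exactly what is still being debated.

```python
# Simplified sketch: aggregate per-dataset metadata into a category-level
# training-data overview. Fields and categories are illustrative assumptions.
from collections import defaultdict

datasets = [
    {"name": "web_crawl_2024", "category": "web text", "tokens": 8e12},
    {"name": "code_corpus", "category": "source code", "tokens": 1.5e12},
    {"name": "licensed_books", "category": "books", "tokens": 0.5e12},
]

totals = defaultdict(float)
for d in datasets:
    totals[d["category"]] += d["tokens"]

grand_total = sum(totals.values())
for category, tokens in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {tokens:.1e} tokens ({tokens / grand_total:.0%})")
```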
The broader concern is that compliance costs could discourage open-source AI development in Europe, pushing innovation to jurisdictions with lighter regulation. This is a tension the EU is actively trying to manage, and the final guidance (expected mid-2026) will be critical.
Whether you are building AI products, integrating AI into existing applications, or researching AI systems, here are the concrete steps you should take.
Determine which risk category your AI application falls into. If you are building a chatbot for customer service, you are likely in the "limited risk" category (transparency obligations). If you are building AI for recruitment or credit scoring, you are in "high risk" territory with significant compliance requirements.
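As a first pass (and emphatically not legal advice), a triage helper along these lines can flag use cases for proper review. The keyword lists are simplified assumptions drawn from the categories described above; a real assessment must follow the Act's annexes and legal guidance.

```python
# Illustrative first-pass triage into the Act's four risk tiers.
# Keyword lists are simplified assumptions, not a legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

HIGH_RISK_DOMAINS = {
    "recruitment", "credit scoring", "education", "law enforcement",
    "critical infrastructure", "border control",
}
USER_FACING_GENERATIVE = {"chatbot", "image generation", "voice assistant"}

def triage(use_case: str) -> RiskTier:
    text = use_case.lower()
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if any(kind in text for kind in USER_FACING_GENERATIVE):
        return RiskTier.LIMITED  # transparency obligations apply
    return RiskTier.MINIMAL

print(triage("customer service chatbot"))      # RiskTier.LIMITED
print(triage("CV screening for recruitment"))  # RiskTier.HIGH
```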
The Act places strong emphasis on data quality and provenance. Document where your training data comes from, how it was collected, and what steps you have taken to ensure it is representative and unbiased. This documentation is not optional for high-risk systems.
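One lightweight way to keep this documentation machine-readable is a provenance record per dataset, as in the sketch below. The field names are illustrative assumptions, not taken from the Act or any particular standard.

```python
# Minimal sketch: machine-readable data provenance documentation.
# Field names are illustrative, not mandated by the Act.
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetRecord:
    name: str
    source: str             # where the data came from
    collection_method: str  # how it was gathered
    licence: str
    size_examples: int
    bias_mitigations: str   # steps taken toward representativeness

records = [
    DatasetRecord(
        name="support_tickets_2024",
        source="internal CRM export",
        collection_method="database dump, PII removed",
        licence="proprietary",
        size_examples=120_000,
        bias_mitigations="rebalanced across product lines and languages",
    ),
]

with open("data_provenance.json", "w") as f:
    json.dump([asdict(r) for r in records], f, indent=2)
```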
At a minimum, ensure your AI systems:

- Tell users clearly when they are interacting with AI rather than a human
- Label AI-generated content as such, including images, audio, and video
- Keep logs sufficient to reconstruct how a given output was produced
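Here is a minimal sketch of the first two items for a chatbot product; the disclosure text, function names, and metadata fields are assumptions for illustration.

```python
# Minimal sketch of the transparency obligations above: every conversation
# opens with an explicit AI disclosure, and generated media is tagged in
# metadata. Names and fields are illustrative assumptions.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chat_response(model_reply: str, first_turn: bool) -> str:
    # Disclose the AI interaction up front, once per conversation.
    return f"{AI_DISCLOSURE}\n\n{model_reply}" if first_turn else model_reply

def label_generated_media(metadata: dict) -> dict:
    # Mark synthetic content so downstream platforms can surface the label.
    return {**metadata, "ai_generated": True, "generator": "example-model-v1"}

print(wrap_chat_response("Your order shipped yesterday.", first_turn=True))
```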
High-risk AI systems must have human oversight mechanisms. This means designing your systems so that humans can:

- Understand the system's capabilities and limitations, and monitor its operation
- Intervene in or interrupt the system while it is running
- Override, reverse, or decline to act on the system's output
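A common pattern that supports these requirements is a human-in-the-loop gate, sketched below with assumed thresholds: borderline model outputs are routed to a reviewer, and reviewers can always override the automated outcome.

```python
# Minimal human-in-the-loop gate. The review band and queue mechanics
# are assumptions for illustration, not requirements from the Act.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    score: float        # model output, e.g. a credit risk score in [0, 1]
    auto_approved: bool
    needs_review: bool

REVIEW_BAND = (0.4, 0.7)  # borderline scores go to a human reviewer

def gate(applicant_id: str, score: float) -> Decision:
    lo, hi = REVIEW_BAND
    if lo <= score <= hi:
        return Decision(applicant_id, score, auto_approved=False, needs_review=True)
    return Decision(applicant_id, score, auto_approved=score > hi, needs_review=False)

def human_override(decision: Decision, approve: bool) -> Decision:
    # A reviewer can always reverse the automated outcome.
    return Decision(decision.applicant_id, decision.score, approve, needs_review=False)

print(gate("A-102", 0.55))  # routed to human review
```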
The EU AI Office is publishing implementation guidance throughout 2026. Key documents to watch include the Code of Practice for GPAI (published in draft form, with industry feedback ongoing) and the technical standards for high-risk systems.
Key Takeaway: Compliance is not a one-time checkbox. It requires ongoing documentation, monitoring, and adaptation as guidance evolves. Start now, even if some requirements are not yet finalised.
The EU AI Act does not exist in isolation. Its influence is shaping AI regulation worldwide, a phenomenon sometimes called the "Brussels Effect."
For a deeper exploration of the ethical principles underlying AI regulation, see our article on responsible AI and ethics.
The EU AI Act is the beginning, not the end, of AI regulation. As AI capabilities continue to advance rapidly, expect the regulatory framework to evolve just as quickly.
The AI Branches program includes dedicated modules on AI governance, compliance frameworks, and the intersection of technology and policy. Understanding regulation is no longer optional for AI practitioners. It is a core competency.
Whether you view the EU AI Act as necessary protection or excessive bureaucracy, one thing is clear: the era of building AI without considering its societal impact is over. The developers and organisations that embrace responsible AI practices will be better positioned for a future where regulation is the norm, not the exception.
Start with AI Seeds, a structured, beginner-friendly program. Free, in your language, no account required.