The EU AI Act, Two Years On: Regulation, Uncertainty, and the Startup Question by Andrei Pavel


This December marks two years since the European Parliament and the Council of the European Union agreed on the EU Artificial Intelligence Act. What was then framed as a bold regulatory vision has since hardened into a calendar of obligations that Europe’s tech sector must now plan around.

The Act entered into force in August 2024, but it was designed to take effect in multiple stages. February 2025 marked the first real compliance moment, when bans on so-called ‘unacceptable risk’ AI practices and mandatory AI literacy requirements began to apply.

In August 2025, the governance provisions and the regulatory regime for general-purpose AI models also began to apply. In parallel, the European Commission has issued guidance and voluntary codes of practice while engaging with sustained pressure from large technology companies and industry coalitions. That pressure has fuelled public debate over slowing, simplifying, or even temporarily postponing parts of the regime, especially around high-risk systems.

By August 2026, most of the Act’s obligations will apply in full, with limited extensions to 2027 for AI embedded in regulated products.

However, intense lobbying, coupled with rising concerns over regulatory complexity, has fed into broader political discussions around a proposed ‘digital omnibus’ initiative aimed at simplifying parts of the EU’s digital regulatory stack. While no formal pause has been adopted, the AI Act is now entangled in debates about burden reduction and competitiveness.

For most European tech startups, the AI Act has added a layer of compliance complexity. Evolving guidance and still-unfinished technical standards make it difficult for founders, especially at early stages, to assess whether their products will be classified as high-risk, when compliance will be required, and at what cost. This ambiguity alone can be enough to delay product decisions, fundraising, and market expansion.

A smaller minority, largely B2B startups serving regulated sectors, are attempting to turn AI Act readiness into a differentiator. But for the majority, the AI Act currently operates less as a trust framework and more as a moving target, encouraging some startups to keep an eye on friendlier jurisdictions until enforcement and standards finally stabilize.

For tech companies there is a sense of déjà vu: many were similarly dissuaded by the demanding privacy-by-design standard in the early days of the GDPR. Europe has written the world’s first rulebook for trustworthy AI, but what remains undecided is whether it will be implemented as a framework for innovation or remembered as a cautionary tale about governing faster than early-stage companies can innovate.
