The EU AI Act Is About to Start Biting — Is Your AI System Ready?

For the past two years, the EU AI Act has been something most technology companies could reasonably file under “watch and wait.” The regulation was passed, the deadlines were set, and the compliance industry produced an avalanche of white papers, webinars, and checklists. But for the engineering teams actually building and running AI systems, it was still largely theoretical.

That’s changing.

The Act’s main enforcement provisions take effect in August 2026 — or December 2027 if a preliminary Council extension holds, though that extension isn’t yet law. Either way, the window for “watch and wait” is closing. And based on what I’m seeing in the market, most companies using AI in their products are significantly unprepared.

What Actually Changes This Year

The EU AI Act establishes a tiered risk framework. Most of the early attention went to the prohibited practices tier, like social scoring by governments or real-time biometric surveillance in public spaces. Those provisions have been in effect since February 2025.

What kicks in later this year are the obligations for high-risk AI systems, and this is where many commercial AI products land.

High-risk under the Act doesn’t mean dangerous in the colloquial sense. It means AI systems used in specific contexts where consequential decisions affect individuals. The categories include:

  • Employment: CV screening, candidate ranking, employee performance monitoring
  • Credit and insurance: creditworthiness assessment, risk pricing
  • Education: student assessment, admission decisions
  • Access to essential services: benefit eligibility, social service allocation
  • Law enforcement: risk assessment of individuals
  • Migration: visa and asylum processing

If you’re building SaaS products that touch any of these categories and you have EU users, the obligations apply to you regardless of where your company is based. The EU AI Act has explicit extraterritorial reach, just like GDPR.

The Compliance Gap Is Real

Here’s what makes this moment interesting: the regulatory deadline is arriving faster than enterprise readiness.

A recent analysis found that over half of organizations lack a systematic inventory of AI systems currently running in production. You can’t comply with regulations governing your AI systems if you don’t know what AI systems you have.

Beyond the inventory problem, the high-risk obligations are genuinely demanding. They require:

Risk management systems that identify and assess risks on an ongoing basis: not a one-time exercise at deployment, but a continuous process throughout the system’s operational lifetime.

Data governance practices ensuring training and validation data meets quality criteria, is representative, and has been examined for potential biases.

Technical documentation describing the system’s purpose, design, development process, and performance characteristics in sufficient detail that a regulator could evaluate it.

Automatic logging of events throughout operation, in a form that enables traceability and post-incident investigation (see the sketch after this list).

Transparency sufficient for users to understand the system’s capabilities, limitations, and appropriate use.

Human oversight mechanisms allowing humans to understand, monitor, and intervene in the system’s operation.

Post-market monitoring: ongoing collection and analysis of performance data after deployment.

Each of these is a real engineering and organizational challenge, not just a documentation exercise.
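
To make one of these concrete: in practice, the logging requirement means an append-only, structured record for every consequential decision the system makes. Below is a minimal Python sketch. The class, the field names, and the choice to hash inputs are my own illustration, not anything the Act prescribes:

    import hashlib
    import json
    import time
    import uuid
    from pathlib import Path

    class DecisionLogger:
        """Append-only, structured event log for one AI system."""

        def __init__(self, log_path: str, system_id: str, model_version: str):
            self.path = Path(log_path)
            self.system_id = system_id
            self.model_version = model_version

        def log_decision(self, inputs: dict, output: dict, reviewer: str | None = None) -> str:
            event_id = str(uuid.uuid4())
            record = {
                "event_id": event_id,
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "system_id": self.system_id,
                "model_version": self.model_version,
                # Hash rather than store raw inputs: the log stays linkable
                # to the source record without duplicating personal data.
                "input_sha256": hashlib.sha256(
                    json.dumps(inputs, sort_keys=True).encode()
                ).hexdigest(),
                "output": output,
                "reviewer": reviewer,  # who, if anyone, looked at this decision
            }
            with self.path.open("a") as f:
                f.write(json.dumps(record) + "\n")
            return event_id

    # One record per consequential decision.
    logger = DecisionLogger("decisions.jsonl", "cv-screener-prod", "2026.03.1")
    logger.log_decision({"candidate_id": "c-123"}, {"rank": 4, "score": 0.81})

The properties that matter are that the log is append-only, that every record is attributable to a specific model version, and that inputs stay linkable to source records without the log becoming yet another store of personal data.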

Why “We’ll Deal With It When Regulators Come Knocking” Is a Bad Strategy

The most common response I hear from engineering and product teams is some version of: “We’ll worry about compliance when regulators actually start enforcing it. There’s always a gap between when a regulation passes and when it has teeth.”

This worked for GDPR in 2018. It’s a riskier bet this time, for three reasons.

The top-tier fines are larger. GDPR penalties cap at €20M or 4% of global annual turnover. EU AI Act penalties reach €35M or 7% of global turnover for prohibited practices, and €15M or 3% for high-risk non-compliance. These are material numbers even for large companies.

Regulators are better prepared. The EU watched GDPR enforcement play out for six years before the AI Act passed. National supervisory authorities have had time to build AI-specific technical capabilities. The gap between regulation and enforcement is smaller this time.

M&A due diligence is already here. Even if regulators move slowly, the compliance gap is already showing up in acquisition conversations. Investors and acquirers are including AI Act compliance status in technical due diligence. A company that can’t demonstrate a basic AI system inventory and risk classification is carrying potentially significant regulatory liability on its balance sheet.

What the Next Six Months Look Like for Most Companies

For companies that haven’t started, the realistic path to minimum viable compliance for a high-risk AI system looks something like this:

Month 1: Inventory. Identify every AI system in production that touches EU users. Classify each one by risk tier. This alone surprises most organizations. The number of AI systems in production is almost always larger than anyone thought.
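
Even a lightweight, structured inventory beats the spreadsheet nobody maintains. Here is a sketch of what each record might capture; the schema is my assumption, since the Act doesn’t mandate a format:

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited"   # Article 5 practices
        HIGH = "high"               # Annex III use cases
        LIMITED = "limited"         # transparency obligations only
        MINIMAL = "minimal"

    @dataclass
    class AISystemRecord:
        name: str
        owner_team: str
        purpose: str                      # what decision does it inform?
        eu_users: bool                    # any EU exposure at all?
        annex_iii_category: str | None    # e.g. "employment", "credit", or None
        tier: RiskTier

        def needs_high_risk_controls(self) -> bool:
            # The high-risk obligations bite only for EU-facing systems.
            return self.eu_users and self.tier is RiskTier.HIGH

    inventory = [
        AISystemRecord("cv-screener", "talent-eng", "rank job applicants",
                       eu_users=True, annex_iii_category="employment", tier=RiskTier.HIGH),
        AISystemRecord("support-chatbot", "cx-eng", "answer product questions",
                       eu_users=True, annex_iii_category=None, tier=RiskTier.LIMITED),
    ]
    in_scope = [s.name for s in inventory if s.needs_high_risk_controls()]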

Month 2: Gap analysis. For each high-risk system, map current capabilities against each Article’s requirements. What documentation exists? What logging is in place? What does human oversight actually look like in practice versus on paper?
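
For orientation, the high-risk requirements live in Articles 9–15 of the Act, with post-market monitoring in Article 72. A workable gap analysis can start as a simple per-system matrix; the evidence names and status labels below are hypothetical:

    # One matrix per high-risk system. "evidence" names whatever artifact
    # would show a regulator the requirement is met; the status labels
    # are my own convention, not the Act's.
    gap_matrix = {
        "Art. 9  risk management":        {"evidence": None,                "status": "missing"},
        "Art. 10 data governance":        {"evidence": "dataset-datasheet", "status": "partial"},
        "Art. 11 technical docs":         {"evidence": "model-card-v2",     "status": "partial"},
        "Art. 12 record-keeping":         {"evidence": "decisions.jsonl",   "status": "in place"},
        "Art. 13 transparency":           {"evidence": None,                "status": "missing"},
        "Art. 14 human oversight":        {"evidence": "review-queue",      "status": "partial"},
        "Art. 15 accuracy & robustness":  {"evidence": "eval-suite",        "status": "in place"},
        "Art. 72 post-market monitoring": {"evidence": None,                "status": "missing"},
    }

    remediation_backlog = [art for art, row in gap_matrix.items()
                           if row["status"] != "in place"]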

Month 3–4: Remediation. Fix the gaps. This is mostly engineering work: logging infrastructure, documentation, risk management processes. The technical requirements are demanding but not exotic.

Month 5–6: Evidence collection. Build the audit-ready evidence package. The regulation requires not just that you do these things, but that you can demonstrate you’ve done them.

Six months is tight for organizations starting from zero. It’s achievable for teams that move quickly and focus on what the regulation actually requires rather than over-investing in adjacent activities.

The Opportunity Inside the Obligation

There’s a less-discussed dimension to all of this. The companies that build robust AI governance infrastructure now aren’t just avoiding fines; they’re building a durable competitive advantage.

Enterprise buyers, particularly in regulated industries like financial services, healthcare, and insurance, are increasingly making AI governance capability a vendor selection criterion. The ability to demonstrate that your AI systems are auditable, documented, and monitored is becoming a sales asset, not just a compliance checkbox.

The same is true in the talent market. Engineers who understand AI governance and can build compliant production AI systems are increasingly in demand. The MLOps skill set is expanding to include governance and compliance, and the engineers who get ahead of that curve will have significant career options.

What I’m Working On

I’ve been building in this space for the past several months — specifically on the technical monitoring side of AI compliance, the infrastructure that makes ongoing compliance observable rather than a one-time audit exercise.

Over the coming weeks I’ll be writing more specifically about the technical implementation side: what the logging requirements actually mean for your infrastructure, how to build risk management systems that satisfy Article 9, and what post-market monitoring looks like in a production engineering context.


This post is for general informational purposes only and does not constitute legal advice. Consult qualified EU counsel for compliance guidance specific to your situation.
