The Regulation Has Landed. Most Enterprises Are Not Ready.
The EU Artificial Intelligence Act entered into force in August 2024. Its prohibitions on certain AI practices have applied since February 2025, and most high-risk system obligations take effect in August 2026. If you deploy AI systems in Europe, or deploy systems whose outputs are used in the EU, the clock is running.
Most guidance written on the AI Act is one of three things:
- Legal analysis aimed at counsel, not operators
- Vendor marketing designed to sell compliance software
- Academic commentary that predates the final text
This article is none of those. It is a practical summary for the executive responsible for AI deployment decisions.
The Risk Classification System
The Act classifies AI systems into four tiers:
Prohibited (Banned Outright)
Systems in this category may not be deployed at all; the only carve-outs are narrow, tightly conditioned exceptions for law enforcement. The banned practices include:
- Social scoring systems (the final text covers private as well as public operators)
- Real-time remote biometric identification in public spaces
- Subliminal or purposefully manipulative techniques, and systems that exploit vulnerabilities related to age, disability, or social and economic situation
- Emotion recognition in employment and education contexts (with specific exceptions)
If any of your current or planned AI deployments touch these categories, legal review is not optional — it is urgent.
High-Risk
This is where the majority of enterprise obligations concentrate. High-risk systems include AI used in:
- HR and employment — recruitment screening, performance evaluation, work allocation
- Credit and insurance — creditworthiness assessment, risk scoring
- Safety-critical infrastructure — energy grids, water, transport
- Education — access decisions, evaluation of students
- Law enforcement and migration management (significant restrictions)
- Administration of justice
High-risk systems require:
- Risk management system (documented, ongoing)
- Data governance requirements for training data
- Technical documentation
- Automatic logging of events
- Transparency measures
- Human oversight measures
- Accuracy, robustness, and cybersecurity standards
The practical implication: If you use AI for hiring, performance review, or credit decisions, you are in high-risk territory.
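Of those requirements, automatic event logging is the easiest to picture in code. Below is a minimal sketch in Python, assuming a hypothetical record shape; the Act requires logs that make a system's operation traceable, but it does not prescribe a schema or field names:

```python
import json
import sys
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class InferenceLogRecord:
    """One logged event from a high-risk system (hypothetical schema)."""
    system_id: str                        # which deployed system acted
    model_version: str                    # exact model build that produced the output
    timestamp: str                        # when the decision was made (UTC, ISO 8601)
    input_reference: str                  # pointer to the inputs, not the raw data
    output_summary: str                   # the score or decision produced
    human_reviewer: Optional[str] = None  # reviewer identity, if oversight applied

def log_inference(record: InferenceLogRecord, sink=sys.stdout) -> None:
    # Structured, append-only, machine-readable: the properties an auditor
    # will ask for when reconstructing a decision after the fact.
    sink.write(json.dumps(asdict(record)) + "\n")

log_inference(InferenceLogRecord(
    system_id="cv-screening",
    model_version="2.4.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_reference="s3://hr-inputs/application-18423",  # hypothetical store
    output_summary="shortlist=false, score=0.41",
))
```

The storage backend matters less than the properties: logs generated automatically, retained, and tied to the specific model version that produced each output.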
Limited Risk (Transparency Obligations)
Chatbots and other systems that interact with humans must disclose their AI nature, and synthetic media (deepfakes) must be labelled as such. This is already table stakes for most enterprises, but documentation requirements are tightening.
Minimal Risk
The vast majority of AI applications — product recommendations, spam filters, AI-assisted drafting — fall here. No specific obligations, though general EU law (GDPR, product liability) still applies.
The Three Questions Your Board Should Be Asking
1. Do we have an AI inventory?
You cannot assess your compliance posture without knowing what systems you have deployed and where they sit in the risk classification hierarchy.
An AI inventory is not just a list of vendors. It is a documented record of:
- Every AI system in production or in development
- Its classification under the Act
- Its purpose, inputs, outputs, and decision influence
- The human oversight mechanisms in place
Most enterprises do not have this. Start here.
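As a concrete starting point, here is a minimal sketch of what one inventory record could look like, written in Python for illustration. The schema, field names, and example system are assumptions for the sketch; the Act does not mandate a record format:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row of an AI inventory (illustrative schema)."""
    name: str                # internal name for the system
    status: str              # "production" or "development"
    risk_tier: RiskTier      # classification under the Act
    purpose: str             # what the system is for
    inputs: list[str]        # data it consumes
    outputs: list[str]       # what it produces
    decision_influence: str  # advisory, gating, or fully automated
    human_oversight: str     # who can intervene, and how

# One example entry: a recruitment screening tool, high-risk under the Act.
inventory = [
    AISystemRecord(
        name="cv-screening",
        status="production",
        risk_tier=RiskTier.HIGH,
        purpose="rank inbound applications for recruiter review",
        inputs=["CV text", "role profile"],
        outputs=["ranking score"],
        decision_influence="advisory",
        human_oversight="a recruiter reviews every ranked shortlist",
    ),
]
```

Whether this lives in a spreadsheet, a GRC tool, or a repository matters less than that it exists, covers every system, and is kept current.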
2. Who is the provider and who is the deployer?
The Act distinguishes between AI providers (who build and place systems on the market) and deployers (who use them). Most enterprise AI today involves both roles simultaneously — you may be building custom models on third-party infrastructure, or fine-tuning foundation models for specific applications.
Your obligations differ materially depending on which role you occupy, and the line can move: a deployer that substantially modifies a high-risk system, or puts its own name on one, can inherit provider obligations. This ambiguity in enterprise AI supply chains is one of the most underappreciated compliance risks.
3. What is our governance structure?
High-risk system compliance is not a one-time audit. It is an ongoing operational function. You need:
- Designated accountability (who owns AI compliance decisions?)
- A risk management process integrated with deployment workflows (see the sketch after this list)
- A logging and incident response capability
- A vendor management process that includes AI Act requirements
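To illustrate what "integrated with deployment workflows" can mean in practice, here is a toy release gate that blocks deployment when compliance evidence is missing. The evidence labels and tier names are hypothetical placeholders, not the Act's own wording:

```python
# Required evidence per risk tier (labels are illustrative placeholders).
REQUIRED_EVIDENCE = {
    "high": {"risk_assessment", "technical_documentation",
             "logging_enabled", "human_oversight_plan"},
    "limited": {"ai_disclosure_notice"},
    "minimal": set(),
}

def deployment_gate(risk_tier: str, evidence: set) -> None:
    """Refuse to release a system whose compliance evidence is incomplete."""
    if risk_tier == "prohibited":
        raise RuntimeError("prohibited practice: must not be deployed at all")
    missing = REQUIRED_EVIDENCE[risk_tier] - evidence
    if missing:
        raise RuntimeError(f"release blocked, missing: {sorted(missing)}")

# A high-risk system with no documented oversight plan fails the gate.
try:
    deployment_gate("high", {"risk_assessment", "technical_documentation",
                             "logging_enabled"})
except RuntimeError as err:
    print(err)  # release blocked, missing: ['human_oversight_plan']
```

Run as a required CI step, a check like this turns governance policy into something a release cannot silently skip.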
What to Do in the Next 90 Days
Month 1: Inventory and classify. Document every AI system in use or in development. Apply the risk classification. Identify high-risk systems.
Month 2: Gap analysis. For each high-risk system, assess your current documentation, oversight mechanisms, and governance against Act requirements. Identify the gaps.
Month 3: Remediation roadmap. Prioritise gaps by deadline and materiality. Build the governance structure. Assign ownership.
This is not a legal exercise. It is an operational one. The enterprises that treat AI compliance as an infrastructure problem — building it into their deployment processes rather than bolting it on afterward — will carry it at significantly lower cost.
41dots advises enterprises on EU AI Act compliance as part of our Govern service layer. If you need a structured compliance assessment, contact us.