The EU AI Act entered into force on August 1, 2024. If you're building or deploying AI systems and you have users in Europe, you're now living under the most detailed AI regulatory framework ever written. And based on every conversation I've had with compliance teams across the industry, most of you aren't ready.

This isn't GDPR 2.0. It's harder. The scope is wider. The technical requirements are more specific. And the penalties — up to €35 million or 7% of global annual revenue, whichever is higher — make GDPR's fines look like parking tickets.

Let me walk through what's actually happening, who's affected, and what it costs to comply. Because the enforcement timeline isn't theoretical anymore. It's here.

## The Timeline Nobody Memorized

The AI Act uses a staggered enforcement schedule. Here's what's already in effect and what's coming:

**February 2, 2025 — Prohibitions on unacceptable AI practices.** Already live. This bans AI systems used for social scoring (ranking individuals based on personal characteristics or behavior), real-time biometric identification in public spaces (with narrow law enforcement exceptions), manipulation of human behavior through subliminal techniques, and exploitation of the vulnerabilities of specific groups.

If you're running a social credit system, manipulative AI, or untargeted facial recognition in public spaces — congratulations, you're already in violation. The EU isn't messing around on these categories. No grace period. No "we'll fix it soon." Banned means banned.

**August 2, 2025 — General-purpose AI model obligations.** This is where it gets interesting for the big labs. Providers of general-purpose AI models — think GPT-5, Claude, Gemini, Llama — must now comply with transparency requirements.
Specifically:

- Publish a sufficiently detailed summary of training data content
- Adopt a copyright compliance policy
- Provide technical documentation to downstream providers and supervisory authorities

For models classified as posing "systemic risk" — any model trained with more than 10^25 floating-point operations — the requirements escalate: model evaluations, adversarial testing, risk assessment and mitigation, serious incident reporting, and cybersecurity measures. Every frontier model from OpenAI, Anthropic, Google, and Meta clears the 10^25 FLOP threshold. They're all subject to the full set of obligations.

A General-Purpose AI Code of Practice was published on July 10, 2025, covering transparency, copyright, safety, and security. Participation is voluntary, but it's essentially a guidebook for demonstrating compliance. Companies that follow it are in a much stronger position when enforcement actions come.

**August 2, 2026 — High-risk AI system requirements.** This is the big one. AI systems used in healthcare, education, recruitment, critical infrastructure management, law enforcement, and justice must undergo conformity assessments before deployment. These assessments evaluate quality, transparency, human oversight, and safety. Some systems require Fundamental Rights Impact Assessments — ex ante reviews that identify and mitigate potential impacts on fundamental rights before deployment.

High-risk systems must also have ongoing monitoring throughout their lifecycle. It's not a one-time certification. You're continuously responsible for your system's compliance.

**August 2, 2027 — Full enforcement.** All remaining provisions become applicable, including requirements for AI systems embedded in regulated products (medical devices, automotive systems, etc.).

## Who's Actually Affected

The AI Act applies extraterritorially. If your AI system has users in the EU, you must comply — regardless of where your company is based.
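Before moving on, it's worth seeing how concrete that systemic-risk threshold is. Training compute is often estimated with the rough 6 × parameters × tokens heuristic — an approximation from the scaling-laws literature, not anything the Act specifies — and the model sizes below are illustrative assumptions, not disclosed figures:

```python
# Back-of-envelope check against the AI Act's 10^25 FLOP
# systemic-risk presumption (Article 51), using the common
# ~6*N*D training-compute estimate (6 FLOPs per parameter per
# training token). Model sizes/token counts are hypothetical.

SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Rough training compute via the 6*N*D approximation."""
    return 6 * params * tokens

hypothetical_models = {
    "8B params, 2T tokens": (8e9, 2e12),
    "70B params, 15T tokens": (70e9, 15e12),
    "400B params, 15T tokens": (400e9, 15e12),
}

for name, (n, d) in hypothetical_models.items():
    flops = training_flops(n, d)
    status = "systemic risk" if flops > SYSTEMIC_RISK_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

Note what falls out of the arithmetic: a hypothetical 70B-parameter model trained on 15T tokens lands around 6×10^24 FLOPs, under the line, while a 400B-parameter model at the same token count clears it comfortably. The threshold really is calibrated to catch the frontier, not every large model.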
This is the GDPR playbook: make compliance a condition of market access, and the market is too large to ignore.

Here's who's in the crosshairs:

**Foundation model providers.** OpenAI, Anthropic, Google, Meta, Mistral, and every company providing general-purpose AI models. The training data transparency requirement alone is a nightmare for companies that have been deliberately vague about what data they used.

**Enterprise software companies.** Anyone deploying AI in high-risk domains — HR tech using AI for hiring, EdTech using AI for student assessment, healthcare companies using AI for clinical decisions. These companies face conformity assessments, fundamental rights impact assessments, and ongoing monitoring requirements.

**Banks and financial services.** AI-driven credit scoring and insurance risk assessment fall under high-risk (the Act explicitly carves out fraud detection in financial services). Financial institutions that have been deploying ML models for years without transparency requirements are suddenly facing documentation obligations that many don't have the infrastructure to meet.

**Law enforcement and government agencies.** Predictive policing systems, AI-assisted judicial decisions, and biometric identification systems face the strictest requirements in the entire Act. The use of real-time biometric identification in public spaces requires prior judicial or administrative authorization in most cases.

## The Compliance Cost Problem

Here's the number nobody wants to talk about: what does compliance actually cost?

Industry estimates vary wildly, but the range for a mid-size company deploying high-risk AI systems is €200,000 to €2 million per system for initial conformity assessment. That includes technical documentation, testing, audit trail implementation, human oversight mechanisms, and external validation.

For frontier model providers, the costs are orders of magnitude higher.
Training data documentation alone requires cataloging datasets that may contain trillions of tokens from millions of sources. Building the compliance infrastructure — the monitoring systems, the incident reporting mechanisms, the adversarial testing pipelines — is an enterprise software project in its own right.

OpenAI reportedly hired dozens of compliance staff for EU operations. Anthropic has a dedicated regulatory team. Google and Meta had existing regulatory infrastructure from GDPR that they're expanding.

Smaller companies — the ones building AI tools with 20 employees and $5 million in funding — are staring at compliance costs that could exceed their entire annual budget. This creates a regulatory moat. The companies that can afford compliance — the big labs, the enterprise giants — gain a structural advantage over smaller competitors who can't.

Some AI startups have already announced they'll geo-block EU users rather than comply. Others are scrambling to find compliance-as-a-service providers, a market that barely exists yet.

## What's Different From GDPR

People keep comparing the AI Act to GDPR. The comparison is understandable but misleading.

GDPR is primarily about data handling — consent, storage, access rights, breach notification. The compliance requirements, while substantial, are fundamentally about processes and documentation. You can comply with GDPR by building the right data handling practices and documenting them.

The AI Act goes further. It requires technical compliance — your AI system must actually perform in certain ways. High-risk systems must be accurate, reliable, and resistant to adversarial attacks. They must have human oversight mechanisms built in. They must be transparent about their decision-making process.

This means compliance isn't just a legal exercise. It's an engineering exercise. You can't comply with the AI Act by hiring lawyers.
You need engineers to build compliant systems, researchers to run adversarial tests, and data scientists to document training data provenance.

The other critical difference: GDPR gives individuals rights (right to access, right to deletion, right to explanation). The AI Act is primarily a product regulation — it places duties on providers and deployers, not individual rights on users. Citizens can submit complaints about AI systems and receive explanations of decisions made by high-risk AI that affect their rights, but the enforcement mechanism is institutional, not individual.

## The Enforcement Question

The European Artificial Intelligence Board, established by the Act, coordinates enforcement across EU member states. Each country designates its own national competent authority — similar to how data protection authorities enforce GDPR.

The first enforcement actions are expected in mid-2026, likely targeting violations of the prohibited practices or general-purpose AI transparency requirements. The most probable early cases: a foundation model provider that failed to publish adequate training data summaries, or a company deploying banned AI practices (social scoring, manipulative AI) that assumed nobody was watching.

The penalties escalate by category, each cap being the fixed amount or the revenue percentage, whichever is higher:

- Prohibited practices: up to €35 million or 7% of global annual revenue
- High-risk system violations: up to €15 million or 3% of global annual revenue
- Supplying incorrect information to authorities: up to €7.5 million or 1% of global annual revenue

For context, 7% of Alphabet's global revenue is roughly $23 billion. Nobody expects a first fine anywhere near that scale. But the theoretical maximum is designed to make even the largest companies take compliance seriously.

## The Global Ripple Effect

The AI Act doesn't just affect Europe. It reshapes AI regulation globally, through two mechanisms.

First, the "Brussels Effect."
Companies that build AI systems for the global market will design for EU compliance rather than maintaining separate products for different jurisdictions. Just as GDPR became the de facto global privacy standard because companies didn't want to maintain different data practices for different markets, the AI Act will push global AI development toward EU compliance standards.

Second, other jurisdictions are watching. Brazil's Senate approved a comprehensive AI bill in late 2024. The UK is developing its own framework. Canada, South Korea, and Japan are all working on AI governance approaches. The AI Act provides a template — not necessarily one everyone will copy, but one that shapes every other conversation about AI regulation.

The US remains the outlier. The Trump administration rolled back existing AI executive orders and has shown no interest in federal AI legislation. California's SB 1047 was the most significant state-level attempt at AI safety regulation, but it was vetoed in September 2024. The regulatory gap between the US and EU is widening, creating headaches for companies operating in both markets.

## What Companies Should Do Now

If you're deploying AI systems and have EU users, here's the honest checklist:

**Immediately:** Ensure you're not operating any prohibited AI practices. Social scoring, manipulative AI, and untargeted biometric identification are already banned.

**By now (the August 2025 deadline has passed):** If you provide a general-purpose AI model, your training data summary and copyright policy should be published. If they're not, you're already non-compliant.

**Before August 2026:** Conduct risk assessments for all AI systems deployed in the EU. Determine which fall under high-risk. Begin conformity assessments for those systems. Budget for external audits.

**Ongoing:** Build monitoring infrastructure. Implement incident reporting mechanisms. Train staff on compliance requirements. Document everything.
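The staggered deadlines behind that checklist condense into a simple lookup. A sketch, not legal advice — the category keys are my own informal shorthand, not terms from the Act:

```python
# The AI Act's staggered applicability dates as a lookup table.
# Dates and obligation summaries follow the timeline described
# above; the dict keys are informal shorthand, not Act terminology.
from datetime import date

AI_ACT_DEADLINES = {
    "prohibited_practices": (date(2025, 2, 2),
        "Bans on social scoring, manipulative AI, untargeted biometrics"),
    "gpai_transparency": (date(2025, 8, 2),
        "Training data summaries, copyright policy, technical docs"),
    "high_risk_systems": (date(2026, 8, 2),
        "Conformity assessments, FRIAs, ongoing monitoring"),
    "embedded_ai_products": (date(2027, 8, 2),
        "AI in regulated products (medical devices, automotive, etc.)"),
}

def obligations_in_force(today: date) -> list[str]:
    """Return summaries of obligations whose deadline has passed."""
    return [desc for deadline, desc in AI_ACT_DEADLINES.values()
            if today >= deadline]

# As of September 2025, two phases already apply:
for item in obligations_in_force(date(2025, 9, 1)):
    print("-", item)
```

Querying with `date(2028, 1, 1)` returns all four phases — which is the point of the checklist: by 2027 there is no tier of the Act you can ignore.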
The companies that treat this as a checkbox exercise will get burned. The ones that embed compliance into their development process — the way smart companies embedded GDPR into their data architecture — will find that regulatory compliance and good AI development practice overlap more than they diverge.

The EU AI Act isn't perfect. It's bureaucratic, complex, and in places unclear. But it's real, it's enforceable, and it's coming for everyone who thought they could ship AI without guardrails.

The grace period is over. Get ready, or get out of the European market.