Perspectives

The US and EU Approaches to AI Regulation: The End of the Brussels Effect?

Kalia Ataliotou
Mar 12, 2026 / 8 min read

The transatlantic divide in artificial intelligence (AI) regulation has reached a critical inflection point. As the European Union moves to amend its landmark AI Act to bolster economic competitiveness and accelerate AI adoption, the United States is consolidating around a “minimally burdensome” federal framework centered on scale and capital deployment. These developments represent a structural realignment of the global regulatory environment and a genuine test of whether the “Brussels Effect” – the mechanism by which EU regulatory standards become de facto global norms – still holds. The GDPR made that effect a strategic reality; AI regulation is its first real pressure test.

European Union: Aligning Regulation with Economic Competitiveness

The EU AI Act, which entered into force in August 2024, remains the world’s first comprehensive AI regulatory framework. Its risk-based architecture, which classifies systems by potential harm and imposes corresponding obligations, was designed to set a durable international standard. Less than two years in, it is already being revised. In November 2025, the European Commission introduced the Digital Omnibus package, which includes targeted simplification measures for the AI Act’s key provisions. The proposed changes are substantive. Compliance deadlines for high-risk AI systems would be tied to the availability of technical standards, pushed from August 2026 to late 2027/2028. AI literacy obligations for most providers and deployers would be removed and replaced with a promotional duty for the Commission and Member States. Finally, enforcement authority would be centralized, with the Commission’s AI Office gaining exclusive oversight of systems based on general-purpose AI (GPAI) models and of very large platforms. The European Parliament is still organizing its review, and the Council has yet to finalize its negotiating position, leaving the legislative trajectory uncertain.

This shift is driven by structural realities. It reflects a broader strategic pivot from regulatory primacy toward competitiveness, deployment, and technological sovereignty. Europe currently accounts for approximately 4-5% of global AI computing capacity, compared with roughly 74% for the United States and 15% for China. Policymakers have recognized that regulatory leadership alone cannot sustain competitiveness and sovereignty without parallel investment in infrastructure and commercialization. The April 2025 AI Continent Action Plan commits EUR 1 billion annually through existing instruments such as Horizon Europe and Digital Europe for the remainder of the current Multiannual Financial Framework (MFF), which runs until 2027. Beyond 2028, the funding architecture is expected to evolve under a strengthened Horizon Europe and the proposed European Competitiveness Fund. Additionally, InvestAI aims to mobilize EUR 200 billion in total public-private investment, including EUR 20 billion in direct Commission contributions toward AI gigafactories. The Apply AI Strategy reinforces this direction by accelerating AI adoption across critical sectors such as health, manufacturing, and public services. Together, these initiatives signal that the EU’s ambitions extend beyond regulatory frameworks to the active, large-scale deployment and integration of AI technologies.

Complementing these investment efforts, the proposed Cloud and AI Development Act (CADA), expected to form part of a broader Tech Sovereignty Package, seeks to address the foundational infrastructure challenges – such as data center capacity, energy availability, and access to compute resources – that are essential for the EU to develop and host frontier AI systems domestically. In particular, CADA seeks to promote sustainable infrastructure, secure data processing, and the uptake of European-based cloud providers. Tech sovereignty (the ability to develop, train, and deploy advanced AI within European jurisdictional control) is central to the political agenda underpinning these initiatives.

The combination of regulatory refinement and increased capital deployment illustrates the EU’s dual objective: maintaining high standards for safety and fundamental rights while accelerating AI adoption and capacity-building to secure European tech sovereignty. While officials publicly emphasize that “the race will be won on implementation,” the EU’s massive investment in computing infrastructure signals deep anxiety about foundation model development as well: a challenging dual mandate. If the EU cannot agree on the compliance deadline amendments before August 2026, companies face a legal limbo: comply with rules that may soon change, or risk non-compliance with rules that are still technically in force.

United States: Federal Policy vs State-Level Action

The United States has taken a fundamentally different approach, prioritizing capital deployment, infrastructure, and private-sector innovation over comprehensive federal regulation. The underlying assumption is strategic: global leadership in AI will be secured through scale, speed, and investment rather than centralized oversight. Shortly after taking office in January 2025, President Donald Trump rescinded the Biden Administration’s 2023 AI Executive Order. A subsequent executive order, titled “Removing Barriers to American Leadership in Artificial Intelligence,” framed AI development as central to economic and geopolitical competitiveness. In December 2025, another executive order established an AI Litigation Task Force within the Department of Justice and directed the Commerce Department to review state AI laws for potential federal preemption.

The private sector has responded at scale. AI captured roughly 50% of all global venture capital in 2025, with total AI investment reaching USD 202.3 billion, a 75% increase year-on-year. The Stargate Project, announced in January 2025, represents a USD 500 billion commitment over four years to build AI infrastructure in the United States, with five confirmed data center sites and the flagship Abilene, Texas, campus already operational. These are not planned investments. They are deployments already underway, at a scale no other jurisdiction is currently matching.

Yet the absence of a comprehensive federal framework has not produced regulatory clarity. It has produced fragmentation. States are advancing their own AI laws across deepfakes, employment, healthcare, and digital identity, filling the vacuum the federal government has deliberately left. The trajectory mirrors U.S. privacy regulation, where federal legislation stalled repeatedly and a state-by-state patchwork emerged in its place. California’s CCPA was followed by frameworks in 20 other states, each with its own scope, thresholds, and requirements. AI regulation is following the same path. In 2025, over 1,200 AI-related bills were introduced across all 50 states, with 145 enacted by year-end. The only AI-specific federal statute enacted that year was the TAKE IT DOWN Act, addressing non-consensual intimate imagery. A proposed ten-year moratorium on state AI laws was stripped from budget reconciliation by a 99-1 Senate vote, a decisive signal that federal preemption remains politically untenable. The result is precisely the compliance burden that the administration’s deregulatory agenda was meant to eliminate: companies operating nationally must navigate an expanding range of state obligations while those same laws face federal legal challenges.

EU-US: Philosophical Differences

The divergence between the EU and U.S. approaches reflects a deeper disagreement about the relationship between regulation, risk, and competitive advantage. The EU’s framework is grounded in the precautionary principle: potential harms from high-risk AI systems should be identified, assessed, and mitigated before deployment. The current U.S. federal posture inverts this logic. Regulation is framed not as a safeguard but as a friction cost, and the administration’s 2025 AI Action Plan is explicit about the stakes, positioning the U.S. in a race to “maintain American AI dominance” against geopolitical rivals. On that framing, speed and scale are the policy, and regulatory caution is the risk.

For global companies, these differences create a bifurcated compliance landscape. Organizations operating in both jurisdictions must account for the EU’s centralized, risk-tiered framework alongside the more decentralized and evolving U.S. environment. The implications extend beyond transatlantic coordination. The Brussels Effect has historically relied on the size of the EU market and the absence of a competing regulatory model of similar scale. The emergence of a distinct U.S. posture, combined with substantial infrastructure and investment momentum, introduces a challenge to the EU’s role as the default setter of global AI standards.

Conclusion

The current moment represents not a temporary divergence but the consolidation of two distinct and increasingly entrenched regulatory models. The European Union is refining a comprehensive regulatory architecture, under real competitive pressure, while expanding public investment to strengthen its AI capacity. The U.S. is prioritizing capital deployment and infrastructure at scale, with regulatory authority fragmented across federal agencies, executive orders, and an expanding body of state law.

Neither framework is static, and both continue to evolve in response to economic, political, and technological developments. For policymakers and government affairs professionals, the central reality is increased complexity. Rather than convergence around a single global standard, companies are likely to operate within a dual-track system for the foreseeable future. What has arrived is a more dynamic, contested, and politically conditional environment, one that rewards active engagement over passive compliance and strategic positioning over reactive adaptation.

In this environment, Trilligent can serve as a strategic partner, helping companies navigate regulatory divergence, align business and policy strategy across jurisdictions, and convert complexity into a competitive advantage through informed, forward-leaning engagement.
