A Tale of Two Policies: The EU AI Act and the U.S. AI Executive Order in Focus

Multiple Authors
Mar 26, 2024 / 5 min read

EU vs. U.S. AI Regulation: A Comparative Insight 

On March 13, 2024, the European Parliament endorsed the EU's Artificial Intelligence Act (AI Act), marking a significant step towards the first comprehensive legal framework for AI regulation within the EU. This move underscores the EU's commitment to ensuring AI technologies are developed and used in a way that respects citizens' rights and public safety, while fostering innovation and trust. The Act categorizes AI systems by risk, imposing stringent requirements on high-risk applications to mitigate potential harm to health, safety and fundamental rights. Notably, it bans certain uses of AI, like real-time remote biometric identification in public spaces, reflecting the EU's structured approach to balancing innovation with public interest.

Across the Atlantic, President Biden issued an Executive Order (EO) on AI on October 30, 2023, aiming for the safe, secure and trustworthy development and use of AI. Unlike the EU’s detailed legal framework, the EO adopts a principles-based approach, encouraging responsible AI development through broad guidelines that emphasize safety, innovation and ethical considerations. It lays out priorities such as enhancing AI safety and security, promoting innovation and protecting privacy, without detailing specific regulations. This reflects a more flexible regulatory environment, encouraging voluntary compliance and industry-led standards. 

Common Ground and Divergences

Both the EU and the United States recognize the transformative potential of AI and the importance of managing its risks. This shared understanding has led to the adoption of a risk-based approach, emphasizing the need to scrutinize and regulate AI applications deemed high-risk due to their potential impacts on individual rights, safety and societal values. Both agree on the necessity of:

  • Rigorous testing and monitoring. Both stress continuous evaluation of AI systems to ensure they are safe, reliable and perform as intended, from pre-deployment testing to post-market surveillance. 
  • Privacy and data protection. Although the legal frameworks differ, with the EU's AI Act operating alongside the General Data Protection Regulation (GDPR) while the United States lacks a federal privacy law, both stress the importance of protecting individuals' data and privacy in the development and deployment of AI systems. 
  • Cybersecurity. Acknowledging the vulnerabilities AI systems can introduce, both the EU and the United States emphasize the need for robust cybersecurity measures, advocating for “security by design” to protect against misuse and external threats.

While the foundational goals align, the pathways chosen by the EU and the United States to achieve these goals reveal significant differences: 

  • Regulatory frameworks. The EU's AI Act provides a comprehensive legal structure, clearly defining obligations, prohibitions and enforcement mechanisms for AI systems based on their risk level. In contrast, the EO adopts a more advisory role, promoting principles and encouraging voluntary industry standards without imposing specific legal requirements. Notably, U.S. tech policy has traditionally advanced through state-led efforts, since states can enact legislation more quickly than the federal government; in the absence of a federal law, we can therefore anticipate that states will also lead the way on AI regulation. 
  • Enforcement mechanisms. The EU has established a stringent enforcement regime, with the potential for significant fines, signaling a strong commitment to compliance. The U.S. approach, lacking explicit penalties for non-compliance, relies more on the influence of guidelines and the commitment of industry stakeholders to self-regulate. 
  • Scope and application. The EU AI Act aims for uniformity, seeking to apply a single regulatory framework across all member states, thereby reducing fragmentation. The U.S. strategy, driven by executive action, potentially leads to varied interpretations and applications across sectors, influenced by individual departmental initiatives and priorities. 

The distinctions between the EU and U.S. approaches to AI regulation reflect deeper philosophical and practical differences in governance, legal tradition and attitudes towards technology regulation. The EU’s structured, comprehensive approach contrasts with the United States’ flexible, principles-based strategy, each embodying a vision of how innovation and societal values can coexist. Yet, these differences do not overshadow the shared commitment to ensuring AI serves the public good, respects human rights, and fosters trust and safety in technology. 

The Transatlantic Trade and Technology Council (TTC) stands out as a critical forum for dialogue and cooperation, bridging transatlantic divides. However, the future of this collaboration faces uncertainties, considering the potential shifts in policy direction following the upcoming U.S. elections. Crucially, AI governance cannot be detached from its global context. International efforts, particularly those spearheaded by the Organisation for Economic Co-operation and Development (OECD) and the Group of Seven (G7), including the pioneering Hiroshima AI Process initiated under Japan's G7 presidency in 2023, play essential roles in building a coordinated approach to AI governance globally. In addition to these initiatives, the United Nations General Assembly recently adopted the first global resolution on artificial intelligence, further emphasizing the universal commitment to responsible AI development and governance. 

As AI technologies continue to evolve, the ongoing dialogue between global powers will be crucial in shaping a cohesive, responsible and dynamic regulatory landscape that can adapt to emerging challenges and opportunities. Companies will need to stay agile and well-informed in this evolving environment, not just for compliance but as a strategic imperative for innovation and leadership in a responsible AI ecosystem. 

For those seeking to navigate AI regulatory trends and their potential impact, as well as engage in the public debate about pioneering AI solutions, Trilligent provides expert guidance and strategic insights. Reach out to learn how we can help your organization navigate these changes and harness the opportunities they present. 
