
Regulating Artificial Intelligence in the EU and US

Alex Wagner
May 23, 2023 / 5 min read

2023 has been dominated by the rise of new generative Artificial Intelligence (AI) systems, with text models (ChatGPT and LLaMA) and multimodal models (GPT-4 and DALL-E) changing the way we engage with technology. With generative AI estimated to drive a 7% increase in global GDP over the next decade, businesses can no longer afford to ignore, or misunderstand, this disruptive turning point that looks likely to alter the way we work. Despite this transformative potential, a lack of adequate regulation has raised concerns that the technology could unleash new risks for individuals and organizations, with potential issues ranging from copyright and IP infringement to harmful malware and disinformation.

Concerns surrounding user data have already seen OpenAI announce a set of privacy controls to resume operation in Italy after the country’s data regulator issued a temporary ban on the company’s use of Italian user data, while tech leaders have publicly called for a pause in AI development, citing profound risks to society. Despite the lack of consensus on a single definition of AI, Stanford University’s 2023 AI Index Report shows an increase in the number of bills containing “artificial intelligence” passed into law, from just one in 2016 to 37 in 2022. With regulators seeking to rein in this transformative new technology, it is more important than ever for businesses to anticipate new requirements and obligations that will impact their operations.

The EU

Proposed by the European Commission in April 2021, the AI Act sets out harmonized rules for AI developers, deployers and users. The Act’s risk-based approach classifies AI systems into four tiers, illustrated in the short code sketch after this list:

  • Unacceptable Risk: AI applications whose purpose is considered a clear threat to the safety, livelihoods and rights of people, including manipulation likely to cause physical or psychological harm, and social scoring.
  • High Risk: AI applications that create an adverse impact on people’s safety, including AI used in critical infrastructure, educational and vocational training that may determine access to education, law enforcement systems, and the administration of justice and democratic processes (as set out in Annex III). A range of mandatory requirements will apply to all high-risk systems, including adequate risk assessment and appropriate human oversight to minimize risk.
  • Limited Risk: AI applications such as chatbots, emotion recognition and systems generating synthetic or deepfake content will have to meet specific transparency obligations.
  • Minimal Risk: AI applications, including AI-enabled video games and spam filters, that can be used freely.
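
For businesses mapping their AI portfolio against the Act, this tiered structure behaves like a simple classification table. The following is a minimal Python sketch of that idea; the use-case labels and their tier assignments are illustrative assumptions based only on the summary above, not a legal reading of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited"
    HIGH = "mandatory requirements (risk assessment, human oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"

# Hypothetical mapping of example use cases to tiers, based only on the
# summary above -- a real classification requires legal analysis of the Act.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "critical_infrastructure": RiskTier.HIGH,
    "education_access": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "video_game_ai": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unmapped use case: {use_case!r}; consult the Act directly")

if __name__ == "__main__":
    for case in ("chatbot", "social_scoring"):
        tier = triage(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

Note that unmapped use cases fail loudly rather than defaulting to minimal risk, mirroring the cautious posture the Act encourages: a system’s tier should be determined deliberately, not assumed.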

The AI Act will work alongside the Digital Markets Act and the Digital Services Act to regulate algorithms and AI in business and organizational practices; together, the three will help ensure companies are not misusing AI or leveraging innovative tech to promote harm.

The European Council adopted its position in December 2022, adding new provisions to account for general-purpose AI systems that cannot be placed within a particular risk category. The European Parliament has since agreed on a provisional position that introduces new obligations for generative AI models to be developed in accordance with EU law and fundamental rights. Negotiations between the EU institutions are expected to begin once the Parliament formally adopts its negotiating position, paving the way for the world’s first AI rulebook.

The US

In October 2022, the White House published its Blueprint for an AI Bill of Rights, a set of five principles and associated practices to help guide the design, use and deployment of automated systems.

In contrast to the EU’s approach, the Bill of Rights is non-binding and relies on developers and designers to voluntarily apply the framework to protect citizens from harm. It sets out five principles to address concerns and provide guidance wherever automated systems can meaningfully impact the public’s rights, opportunities or access to critical needs:

  • Safe and effective systems that protect users against unsafe or ineffective automated systems
  • Algorithmic discrimination protections that proactively prevent algorithms from discriminating against particular groups
  • Data privacy safeguards that protect against abusive data practices
  • Notice and explanation requirements that inform users when an automated system is being used
  • Human alternatives that enable users to opt out of automated systems and receive support to resolve any issues

The AI Bill of Rights is unlikely to become more than a voluntary framework, though it would work alongside the proposed Algorithmic Accountability Act, which would require companies to assess the impact of automated systems. State-level initiatives are also working to address and mitigate harms, including in California, Illinois and Texas.

On May 4, the White House also hosted a meeting with AI tech leaders and announced new actions to promote responsible AI innovation, including new investment in responsible AI R&D and a public assessment of generative AI systems by leading developers such as Google, Microsoft, OpenAI and NVIDIA.

Meanwhile, on April 11 the US National Telecommunications and Information Administration announced a public consultation on AI Accountability Policy, seeking feedback on what policies can support the development of AI audits, assessments, certifications and other accountability mechanisms. Senate Majority Leader Chuck Schumer has also drafted a framework for a new regulatory regime to address national security and education concerns.
