2024 U.S. Tech Policy Outlook

Multiple Authors
Apr 16, 2024 / 7 min read

Every year brings innovations meant to drive society forward. As new ideas and technologies emerge, we often need new policies to help us navigate the ever-changing tech space. Tech policy progress in the United States is largely state-driven, as states tend to outpace the federal government in enacting legislation. In 2023, states passed 65 tech-related laws covering topics including data privacy, youth online safety and artificial intelligence. Compare that with the 27 laws of any kind the federal government passed in 2023, factor in the increasing polarization between political parties, and you begin to scratch the surface of the challenges facing America’s progress. Much like last year, 2024 is already seeing several states move forward on tech legislation while federal policy progress stalls ahead of the November elections.

Data Privacy

In the U.S., privacy regulation has been driven primarily by individual states rather than a comprehensive federal law. Unlike many other countries and regions, the U.S. lacks a single overarching privacy statute. To fill that gap, states have forged their own paths in addressing data protection and privacy. Notably, California has been at the forefront, pioneering significant privacy legislation and setting the tone for others to follow.

On February 28, 2024, the White House issued an executive order in an attempt to build a more cohesive approach. The order expands the scope of the national emergency declared in previous orders related to data security and authorizes the U.S. Attorney General to prevent large-scale transfers of Americans’ personal data to countries of concern, such as China, North Korea and Iran, with the goal of safeguarding sensitive personal data and protecting national security interests. While the order is a significant step, the absence of a comprehensive federal privacy law remains a challenge, forcing companies to manage varying state requirements.

In 2018, the U.S. data privacy landscape underwent a notable change with the passage of the California Consumer Privacy Act (CCPA), which took effect in 2020. The CCPA imposed substantial compliance requirements on businesses that handle the personal information of California residents. Since then, momentum has grown across states that recognize the need to manage data privacy risks. In 2023 alone, eight states passed comprehensive consumer privacy laws, and by 2026, 13 state privacy laws will have taken effect. Today, California, Virginia, Colorado, Connecticut and Utah enforce their own consumer privacy laws, with states such as Maryland and Florida set to add to the patchwork of state-led privacy frameworks soon. But earlier this month, Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) introduced the American Privacy Rights Act, a bipartisan, bicameral draft bill aimed at establishing a federal data privacy standard. If passed, the law would preempt state consumer privacy laws while preserving certain features such as data minimization and a private right of action. However, given the downward trend in congressional productivity, the legislative process is likely to face delays.

The ongoing conversation around consumer privacy strongly suggests that 2024 will bring growing data minimization and disclosure requirements for companies that collect, store and/or use consumer data. Notably, as of this writing, California is the only state whose consumer privacy law includes a private right of action, but this could change. Either way, businesses that handle consumer data should brace for increased regulatory scrutiny and enforcement at both the state and federal levels.

Youth Online Safety 

Privacy and mental health concerns for children and teens who use social media platforms continue to grow. Last year, several states filed lawsuits against social media companies alleging harms to youth mental health. Today, the charge to protect young people from the harmful effects of certain digital platforms remains strong. Although federal lawmakers have been working on bipartisan legislation aimed at protecting families and their children online, the misaligned patchwork of federal and state regulations, along with uncertainty about whether and when a national standard will become law, has compelled states to grapple with the issue on their own.

In recent years, at least 15 states have enacted or are pursuing legislative measures to protect youth on digital platforms. The movement began in 2022, when California passed the California Age-Appropriate Design Code Act (AB 2273, 2022), requiring online platforms to consider the best interests of child users and to default to privacy and safety settings that protect children’s mental and physical health and wellbeing. Since then, Florida (HB 3), Utah (HB 464), Vermont (SB 289) and other states have pursued or are pursuing similar initiatives that rely primarily on age verification and parental consent. But these efforts are not without pushback. The international digital rights organization Electronic Frontier Foundation publicly opposes age verification laws because they threaten online anonymity. Similarly, NetChoice, a tech industry group representing some of the largest social media companies, has challenged states including California, Arkansas, Utah and Ohio in court, arguing that their online safety mandates violate First Amendment rights.

While groups tussle over an equitable approach, the goal of safeguarding youth online remains paramount. Social media companies are not the only ones on notice: consumer tech companies, app developers and many other players across the digital landscape should be mindful of the oncoming wave of child online safety policies that may affect parts of their businesses. We should expect more legislative proposals this year focused on age verification, parental consent, data minimization and duty-of-care requirements for businesses whose products reach kids and teens. And until the courts rule on the constitutionality of some of these measures, businesses would do well to stay proactive and operate as if these policy trends will soon become law.

Artificial Intelligence 

On October 30, 2023, the White House issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order recognizes both the potential of AI technologies and their emerging risks. It emphasizes the need for responsible AI use to address pressing challenges and outlines eight guiding principles for the development and use of AI, ranging from ensuring safety and security and promoting innovation and competition to advancing equity and civil rights and strengthening American leadership abroad. It is important to note, however, that executive orders cannot create new laws or regulations on their own; they can only set the stage for progress.

Several legislative proposals are currently under consideration, addressing various aspects of AI, such as transparency, deepfakes and platform accountability. However, it remains to be seen which, if any, of these proposals will gain traction this year. 

States like California (AB 302, 2023), Connecticut (SB 1103, 2023), Louisiana (SCR 49, 2023) and Vermont (HB 410, 2022) have enacted legislation to protect individuals from the unintended consequences of AI systems. These states have taken an incremental approach, passing laws that address specific policy concerns or establishing entities to study AI’s potential impacts and make policy recommendations. Utah, meanwhile, has distinguished itself as one of the first states in the nation to pass legislation aimed squarely at regulating AI. Its AI Policy Act (SB 0149), which takes effect on May 1, 2024, defines terms like “generative artificial intelligence” and “regulated occupation,” helping standardize these concepts. The Act establishes liability for violations of consumer protection laws committed through AI, mandates disclosure when individuals interact with AI in regulated occupations and lays out a detailed roadmap for government-led AI oversight.

Looking ahead, we may see a risk-based federal approach to AI regulation similar to the EU’s AI Act. The 2024 U.S. presidential election will surely touch on issues related to generative AI, social media platforms and misinformation, as well as the measures we can take to mitigate harms stemming from our technological advancements.

The dynamic tech policy space underscores the urgent need for comprehensive regulations to address data privacy, youth online safety and AI in the U.S. While some states lead the pack on certain legislative issues, the absence of unified federal standards can pose challenges for innovation. As these issues continue to evolve, it’s crucial that disruptive and fast-moving brands stay ahead of current policies and are prepared with actionable market, media and public affairs strategies to help navigate the complex tech landscape. Companies seeking regulatory support can rely on Trilligent’s expert guidance and strategic insights. Reach out to our team to learn more about navigating today’s tech policies and harnessing the opportunities they present.