
The AI Assurance Ecosystem: Maintaining Momentum

Trilligent
Jul 24, 2025

As AI systems rapidly proliferate across industries and regulatory approaches diverge globally, we asked: how can we build and maintain robust assurance frameworks that both protect against harm and enable innovation? 

This pressing question brought together diverse stakeholders on July 17, 2025, as All Tech Is Human, techUK, and Trilligent co-hosted an invite-only workshop in London under the Chatham House Rule. The event convened influential voices from across the responsible AI landscape to tackle the growing disconnect between technical assurance practices, boardroom priorities, and regulatory requirements, with the goal of identifying practical paths forward.

Setting the Stage: The UK’s AI Assurance Journey

The workshop began with an insightful briefing from Amy Dickens, AI Assurance Lead at the Department for Science, Innovation and Technology (DSIT). Despite the UK having the third-largest AI market globally, only one in six UK firms reports using AI, citing financial constraints, strategic uncertainty, and ethical concerns as key barriers. AI assurance plays a crucial role in addressing those barriers by building justified trust and confidence, which in turn drives responsible adoption and economic growth. Three key lessons emerged from DSIT's recent work: the challenge of terminology across different audiences, the varying maturity levels across sectors and firm sizes, and the growing interest in standards as enablers of both regulatory compliance and adoption confidence. The briefing laid the foundation for the day by underscoring the need for bespoke engagement strategies that address diverse stakeholder needs rather than one-size-fits-all approaches, a point echoed in every subsequent session.

Panel Insights: Navigating the Assurance Landscape

The event continued with a dynamic panel featuring James Kell (Robotics Technical Director, Amentum & member of Trilligent’s Advisory Board), Andrew Strait (Head of Societal Resilience, AI Security Institute), and Rafah Knight (CEO & Founder, SecureAI), moderated by Rebekah Tweed (Executive Director, All Tech Is Human). 

The discussion revealed significant challenges organizations face in implementing AI assurance, including supply chain vulnerabilities where companies often discover their AI services aren't as secure or controlled as they assumed. Panelists highlighted the prevalence of "shadow AI," where employees use AI tools without formal approval or their employers' knowledge, noting that while official statistics suggest limited AI adoption, in reality closer to 80-90% of companies likely use AI through unofficial channels. The conversation also addressed the deteriorating public perception of AI, with recent studies showing increasing concern across various applications, making the case for assurance as a critical trust-building mechanism. Panelists shared practical approaches to embedding assurance in organizations, from creating digital mockups for regulatory communication to developing specific AI assurance controls that guide vendor selection in sensitive sectors like healthcare.

Workshop Findings: Driving Assurance Forward

Our workshop was split into two parts, each organized around guiding questions. The first breakout session explored what is driving AI assurance forward and how to transform it from a compliance exercise into a strategic advantage. Participants identified several critical barriers, including insufficient knowledge of AI capabilities across organizations, unclear leadership accountability for AI governance, and the persistent view of assurance as merely a compliance requirement. Many organizations struggle with fundamental questions, such as who leads AI assurance initiatives and who bears responsibility when systems fail, as well as the tension between C-suite efforts to craft AI strategies and the public's limited trust in lawmakers to regulate AI effectively at this stage. The discussion highlighted the importance of tailoring communication to specific stakeholder concerns: risk management for executives, integration with existing workflows for technical teams, and ROI for investors. Participants emphasized the need to address AI literacy gaps, particularly in SMEs, and to reframe assurance as an enabler of innovation rather than a barrier. A recurring theme was the challenge of defining what precisely is being assured when technologies and risks are constantly evolving, suggesting that flexible, principles-based approaches may be more effective than rigid frameworks.

The second breakout session addressed how to maintain momentum for AI assurance despite shifting attention cycles and political headwinds. Participants emphasized the importance of professionalizing the field through recognized standards, certifications, and educational pathways that establish credibility and provide a shorthand for good practice. Multidisciplinary approaches that bring together technical, ethical, legal, and domain expertise were seen as essential for tackling the complex challenges of AI assurance. Many participants called for embedding ethical considerations in educational curricula from school through professional development, creating a culture where questioning AI systems is normalized rather than discouraged. The group also discussed the role of sandboxes and simulated regulatory environments that allow organizations to test assurance approaches without risk, learning from both successes and failures. A tension emerged between the need for standardized approaches across sectors and the reality that AI applications vary dramatically in context and risk profile, suggesting that principles-based frameworks with sector-specific implementations may be most effective.

Throughout both sessions, participants emphasized that building public trust in AI requires transparent communication about both capabilities and limitations, while acknowledging that a long road lies ahead given the serious public skepticism AI currently faces.

Looking Forward: Building a Resilient Assurance Ecosystem

As the workshop concluded, three key themes emerged as critical for maintaining momentum in AI assurance. 

  1. The false dichotomy between oversight and innovation must be challenged: effective regulatory guardrails don't constrain innovation but rather provide clarity that enables faster, more confident deployment.
  2. Multidisciplinary collaboration across corporates, academia, and regulators is essential for developing effective assurance approaches that balance technical expertise with ethical considerations.
  3. Improved AI literacy at all levels, from boardrooms to classrooms, is needed to build a common understanding of AI capabilities, limitations, and risks.

Shaping the Narrative & How Trilligent Can Help

The challenges identified in our workshop highlight why it is critical for organizations developing or deploying AI to engage proactively in the regulatory debate and help create a legal framework that is conducive to innovation and growth. As regulatory approaches diverge globally and public trust wavers, communicating effectively about assurance practices has become as important as the technical implementations themselves. Organizations that can clearly and consistently articulate how their assurance frameworks address risks while enabling innovation will gain competitive advantage, build stakeholder trust, and help shape evolving policy frameworks.

Trilligent’s global team and advisory board support organizations navigating this landscape by combining policy expertise across the EU, UK, and US regulatory environments with strategic communications capabilities. We help clients develop targeted messaging frameworks, engage effectively with regulators and policymakers, and create thought leadership platforms that establish their voice in this evolving space. Whether you’re looking to enhance how executives communicate about responsible AI or develop engagement strategies that build trust among policy stakeholders, we can help you translate technical assurance concepts into narratives that resonate with diverse audiences. 

About the Organizers

All Tech Is Human is a non-profit organization founded in 2018 and based in Manhattan with a global community tackling the world’s thorniest tech & society issues. With a network of over 50k individuals across civil society, government, industry, and academia, ATIH is committed to collective understanding, involvement, and action. 

techUK is the trade association which brings together people, companies and organisations to realise the positive outcomes of what digital technology can achieve. We create a network for innovation and collaboration across business, government and stakeholders to provide a better future for people, society, the economy and the planet.

Trilligent is a global strategic advisory firm with a presence across North America and Europe. Focused on the tech sector and emerging and disruptive technology, Trilligent provides public affairs, regulatory advice, and strategic communications support to tech organizations of all sizes.
