Trilligent Tech Talk in Brussels: Building Trustworthy AI Governance Across Borders

Multiple Authors
Mar 04, 2026 / 6 min read

Trilligent continues its global Tech Talks series, this time in partnership with Credo AI, bringing together innovative minds to discuss AI governance across borders. Our fifth Brussels event, held on February 4, 2026, explored how global stakeholders are turning AI principles into effective and reliable oversight on an international scale. To dive even deeper into the regulatory opportunities and challenges the sector faces, we also hosted a closed-door industry roundtable on the same topic.

The Core Paradox: Simplification Without Simplism

The European Union (EU) AI Act faces a fundamental tension: the need for simplification and clarification without introducing novelties that compromise the framework’s legitimacy or sliding into de facto deregulation. The Digital Omnibus proposal signals recognition that implementation has outpaced institutional readiness (delayed standards, missing conformity assessment bodies, and guidance gaps). Yet cutting out provisions is not inherently simplification. The goal must be to retain what is genuinely useful while keeping the accountability architecture intact.

Beyond simplification, interoperability also emerges as the decisive lever. The EU faces fragmentation challenges: divergent guidance interpretation, enforcement landscape complexity, and the messy reality of multiple overlapping regulatory hooks (AI Act, DMA & DSA, GDPR, sectoral laws, cybersecurity frameworks). The Digital Omnibus must be read holistically, as changes to AI provisions cannot be isolated from GDPR amendments, and the interplay with adjacent regulations (DSA, Cyber Resilience Act, sector-specific rules) requires clearer enforcement architecture.

The Standards-Compliance Nexus

A critical implementation bottleneck remains: the coupling of compliance timelines to standards promulgation. The Omnibus proposal to delay high-risk AI rules until adequate standards exist is pragmatic, but it creates a self-fulfilling prophecy risk: if standards-setting activity determines when the rules apply, incentive structures may skew toward delay rather than acceleration. The conformity assessment infrastructure also remains underdeveloped. Capacity is the missing variable: both institutional capacity within Member States and technical capacity for meaningful oversight.

Internationally, the EU needs a more strategic posture in standards-setting activities. Current approaches risk insularity. Genuine global influence requires engaging international standards bodies, not as regulatory exporters, but as collaborative architects.

Sector-Specific Realities and the Adoption Gap

AI governance cannot remain in horizontal abstraction. Sectoral co-regulatory approaches are essential for operationalizing requirements. The AI Act’s risk-based architecture intersects with sector-specific dynamics (healthcare, financial services, automotive, critical infrastructure) that demand tailored implementation pathways. The Apply AI Strategy angle recognizes this: boosting competitiveness requires governance that enables deployment, not governance that inadvertently constrains it.

As seen with the Digital Decade program, the problem does not lie in the overarching principles but in execution. There is a capability gap: the infrastructure may be too limited to support increasing demand. While industrial AI uptake is progressing, the challenge remains in scaling these solutions; their depth and scale are still insufficient to drive significant productivity gains.

Frontier AI and Emerging Risk Architecture

The Codes of Practice work on general-purpose AI models surfaces a critical gap: systemic and emerging risks require governance mechanisms that current frameworks don’t fully address. Agentic AI systems (with their autonomous decision-making and environmental interaction) demand more deterministic approaches than the Act’s current architecture anticipates. The frontier AI challenge intersects with cybersecurity considerations: AI-enabled threats require governance that bridges the AI Act and broader security frameworks.

The Original Sin of Regulatory Timing

A meta-challenge pervades AI governance: the two-factor timing problem. Regulate too early and you lack evidence and knowledge. Wait too long and you lose room to maneuver as market realities calcify. The AI Act represents a policy experimentation gambit, an attempt at anticipatory governance that inevitably requires recalibration as understanding deepens.

The Omnibus must therefore distinguish between architectural elements (prohibited practices, risk classification logic, fundamental rights protections) that cannot be softened without legitimacy costs, versus implementation mechanics (timelines, documentation formats, registration procedures) where pragmatic adjustment serves the framework’s objectives.

Toward Pragmatic Governance

  1. Predictability as a regulatory lever: Market actors need stable signals. Swift adoption of clarifications matters more than perfect guidance. Ambiguity carries higher costs than imperfection.
  2. Flexibility and recalibration by design: The AI Act should be understood as dynamic architecture requiring continuous adjustment, not a static compliance checklist. Futureproofing (and hype-proofing) means building in mechanisms for evolution.
  3. Responsible AI governance as strategic positioning: For organizations, AI governance is cross-functional; deprioritizing it undermines both compliance and competitive advantage. For regulators, governance-by-design approaches create implementation pathways that serve both oversight and innovation objectives.
  4. Capacity building as a precondition: Neither compliance nor enforcement functions without institutional capacity. This applies equally to Member State authorities, conformity assessment bodies, and the AI Office itself.
  5. Hype-proof architecture: Governance frameworks must withstand cycles of technological enthusiasm and disillusionment. The test is whether rules remain coherent and enforceable as the technology landscape shifts.

The global stakes of EU AI governance extend beyond Brussels. Data protection laws exist nearly everywhere; comprehensive AI laws do not. This creates both opportunity and responsibility: the EU’s choices will shape global norms, but only if implementation demonstrates workability.

____

Trilligent is a global strategic advisory firm with a presence in the U.S., the UK, Germany, and Brussels, among other locations. At Trilligent, we closely follow the evolving landscape of tech and emerging tech, including AI governance and the applicable EU regulatory rulebooks. Our global team is well positioned to help clients navigate these complex frameworks and ensure their voice is heard by policymakers and influential stakeholders in the discussions shaping the future of the sector. Reach out if you’d like to explore how these regulations impact your business or to stay ahead of the curve.

____

Credo AI is a pioneering, comprehensive AI governance platform designed to help large enterprises and government agencies manage, monitor, and deploy artificial intelligence responsibly at scale. Founded in 2020, the company provides a “system of record” that bridges the gap between technical teams and compliance officers, automating the enforcement of responsible AI policies, risk management, and regulatory compliance (e.g., EU AI Act, NIST) throughout the entire AI lifecycle. Named a leader by industry analysts, Credo AI offers features like AI registry, Generative AI guardrails, and automated, actionable, and auditable risk reports to ensure safety, fairness, and transparency in high-stakes environments.

___

SPECIAL THANKS…

…to our expert speakers at our panel discussion:

  • Eline Chivot – Policy Analyst at DG Connect, EU Commission
  • Laura Caroli – AI Governance and Tech Regulation Expert
  • Norberto de Andrade – Trilligent Advisory Board Member and AI Governance expert
  • Vasileios Rovilos – EU Policy Director at Credo AI
  • Lusine Petrosyan – moderator, Associate Director at Trilligent

And distinguished roundtable participants from the European Data Protection Supervisor (EDPS); Computer & Communications Industry Association (CCIA); Salesforce; IBM; Centre for Future Generations (CFG); Scrydon; Brussels Privacy Hub; Future of Privacy Forum; IAPP; Tech UK; Credo AI and Trilligent Advisory Board.

___

Vasileios Rovilos, EU Policy Director at Credo AI, contributed to this piece.
