
The EU AI Act & Your Nonprofit: What You Need to Know (and Do)
Artificial intelligence is already embedded in how nonprofits operate — from writing reports and analyzing impact data to optimizing internal workflows. But while AI use is surging, the rules governing it are still catching up. Now, with the introduction of the European Union’s Artificial Intelligence Act — the first comprehensive legislation of its kind — nonprofits, especially those working internationally or using public-facing AI tools, need to understand not just what the law says, but how to act responsibly in the face of it.
What is the EU AI Act?
The EU AI Act is a landmark regulation designed to promote safe, transparent, and ethical use of artificial intelligence. It applies to AI systems placed on the market or used within the EU, regardless of where the provider is based, which means even nonprofits headquartered elsewhere may fall under its scope.
The law uses a risk-based model, grouping AI systems into four categories:
- Unacceptable risk – banned altogether (e.g., social scoring, manipulative targeting, certain biometric surveillance)
- High risk – subject to strict requirements (e.g., tools used in hiring, education, healthcare, or law enforcement)
- Limited risk – must meet transparency requirements (e.g., chatbots, AI-generated content, recommendation systems)
- Minimal risk – few obligations, but responsible use encouraged (e.g., spam filters, productivity tools)
While most nonprofit AI use is likely to fall into the “limited” or “minimal” risk categories, it’s important to understand the implications of higher-risk applications — especially when services relate to access to health, education, or legal support.
Why this matters for nonprofits
Nonprofits often lead with trust, transparency, and human dignity — values that closely align with what the EU AI Act seeks to protect. But turning those values into practice through compliance can feel daunting, especially for teams without legal or in-house tech expertise.
As part of the AI for Changemakers program — a three-year global initiative supporting 110 nonprofits in responsibly adopting AI, created by Tech To The Rescue (TTTR) and sponsored by Google.org and other great partners — Trilligent recently co-hosted an AI Governance workshop. While many participating nonprofits reported using AI tools like ChatGPT or GitHub Copilot, few were familiar with the EU AI Act itself. This gap is telling: AI adoption is outpacing understanding, increasing both risk and uncertainty.
“This is a defining moment for the social sector,” says Jacek Siadkowski, CEO of Tech To The Rescue. “The AI landscape is changing fast — and nonprofits shouldn’t be left reacting from behind. They should be helping to shape what ethical, inclusive AI looks like in practice. Regulation is coming quickly — but the real question is whether the social sector will shape its own future in AI, or be shaped by it.”
This also represents a major opportunity. Nonprofits don’t just need to react to regulation — they can help shape how ethical AI is implemented on the ground. That’s why building awareness now is critical: every nonprofit using AI should feel empowered to innovate responsibly, and every tech team should be thinking about how to help them do it.
What should nonprofits actually do?
Here are six practical steps to help your nonprofit respond to the EU AI Act and related frameworks — without being overwhelmed:
- Take stock of your AI use: List any tools powered by AI, including those built into donor platforms, CRMs, or research software. Understand what they do and where they operate. (A simple inventory sketch follows this list.)
- Assess the risk level: Most nonprofits will be outside of “high-risk” territory. But if you’re using AI in decisions around health, education, or eligibility for services, dig deeper into what the Act requires.
- Be transparent with users: Let people know when they’re interacting with AI. Label AI-generated content, offer opt-outs where possible, and update privacy notices to reflect how AI is used.
- Align with data protection laws: If your AI tool processes personal data, ensure compliance with data laws like the GDPR. This includes having a lawful basis for processing, securing data properly, and being transparent about usage.
- Document your AI systems: Keep basic documentation about how the tools work, what data they use, and who is responsible for monitoring them. This doesn’t need to be extensive — but it should be clear.
- Stay informed: AI regulation is still evolving. Assign someone on your team to track developments, join networks that focus on nonprofit tech policy, or subscribe to relevant updates.
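To make the inventory, risk-assessment, and documentation steps concrete, here is a minimal sketch of what a per-tool record could look like in code. The field names, risk tiers, and review flags are illustrative assumptions, not terms mandated by the Act, and a shared spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk-based model."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few obligations

@dataclass
class AIToolRecord:
    """One inventory row: what the tool is, what it touches, who owns it.
    Fields are illustrative, not a schema prescribed by the Act."""
    name: str                      # e.g., "Website help chatbot"
    purpose: str                   # what the tool does for the organization
    risk_tier: RiskTier            # your current best assessment (step 2)
    processes_personal_data: bool  # triggers GDPR alignment (step 4)
    user_facing: bool              # triggers transparency labeling (step 3)
    owner: str                     # person responsible for monitoring (step 5)
    last_reviewed: date            # prompt for periodic re-assessment (step 6)
    notes: str = ""

def review_flags(tool: AIToolRecord) -> list[str]:
    """Return simple follow-up reminders based on the record's fields."""
    flags = []
    if tool.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        flags.append("Dig into the Act's specific requirements for this tier.")
    if tool.processes_personal_data:
        flags.append("Confirm a lawful basis and data-security measures (GDPR).")
    if tool.user_facing:
        flags.append("Label AI interactions and content; update privacy notices.")
    return flags

# Example entry: a public-facing chatbot that stores visitor questions.
chatbot = AIToolRecord(
    name="Website help chatbot",
    purpose="Answers FAQs for program participants",
    risk_tier=RiskTier.LIMITED,
    processes_personal_data=True,
    user_facing=True,
    owner="Operations lead",
    last_reviewed=date(2024, 6, 1),
)

for flag in review_flags(chatbot):
    print("-", flag)
```

The point is not the code itself but the habit it encodes: one reviewable record per tool, with a named owner and a review date, so nothing AI-powered slips through unnoticed.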
Zooming out: a shifting global landscape
The EU AI Act is part of a broader global trend. From the UN’s Global Digital Compact to national efforts in countries like Canada and India, regulators are moving quickly to define how AI should be governed. While the pace and detail vary across jurisdictions, the direction is clear: compliance expectations are rising, and ethical use is becoming a global norm.
For nonprofits operating across borders or funding work internationally, this underscores the need for consistent principles and shared strategies, even in the absence of clear national laws.
Final thoughts
The EU AI Act marks an important shift in how societies — and sectors — approach artificial intelligence. For nonprofits, this isn’t just about staying compliant. It’s about ensuring that the tools used to advance mission-driven work are trustworthy, equitable, and safe for those they aim to serve.
This article builds on insights from a recent AI Governance Bootcamp co-hosted with Tech To The Rescue, exploring what responsible innovation can look like for nonprofits around the world.