A Conversation with Siyanna Lilova, Co-Founder & CEO at Curated AI
Much has been written about the European Union’s AI Act following its adoption in May. The legislation aims to regulate the development and deployment of AI systems with an emphasis on transparency, accountability, and ethical considerations. We spoke to Siyanna Lilova, who presented on the topic at our May Trilligent Tech Talk in Berlin, to hear more about her perspective on the AI Act. Siyanna is an IT lawyer and the co-founder and CEO of Curated AI, an AI assistant for privacy and IT law.
Trilligent: As a co-founder of an AI startup, what main compliance challenges do you see, especially for smaller companies such as your own?
The first challenge, especially for smaller companies, is understanding where to start and how to identify which rules and obligations under the AI Act apply to them. Once that is clear, the most significant compliance hurdle falls on providers of high-risk AI systems, who are subject to the most stringent regulatory requirements. These providers must implement risk management procedures to identify, evaluate, and mitigate risks throughout the AI system’s lifecycle. They must also establish robust data governance and management practices to ensure the quality of datasets, maintain thorough documentation, and ensure proper logging, among other responsibilities. Deployers of high-risk AI systems face their own set of requirements, including the obligation to conduct a Fundamental Rights Impact Assessment (FRIA) before deploying the AI system.
Compliance with these obligations demands considerable resources in terms of finances, time, and personnel. Moreover, a significant challenge at the outset will be determining how to meet these requirements, many of which remain unclear. Companies and compliance professionals are currently awaiting 39 pieces of secondary legislation from the EU Commission, including Delegated Acts, Implementing Acts, guidelines, templates, and Codes of Practice, which should provide some much-needed clarity.
Trilligent: In your current role as co-founder of Curated AI, have you encountered any challenges related to the AI Act?
As a co-founder of a legal AI start-up and an IT lawyer, I’m following the developments related to the AI Act very closely. This is especially true for the new rules on general-purpose AI, as many AI start-ups, including ours, are integrating such models and building their own services on top of them. Right now, the main measures we take relate to transparency: making sure to inform our customers about what types of models we’re using, how they’re being trained, and on which data.
The level of risk associated with an AI system is a key factor in determining your obligations under the AI Act. The most stringent compliance requirements apply to high-risk AI systems – those that pose a “high risk to health, safety, environment, and fundamental rights.” Examples include systems used to assess eligibility for credit, health insurance, or public benefits, as well as AI used in job application processes or product safety components.
We’ll be assessing our own risk level and designing our system in a way that avoids its application in any of the high-risk use cases under the AI Act. As long as we don’t fall under the high-risk category, I believe we won’t face any serious challenges under the new regulation. We also look forward to future guidelines from the EU Commission on the Act’s scope and application.
Trilligent: Do you think the AI Act will affect the EU’s competitiveness on the global stage and stifle innovation in the EU, especially in comparison with US and Chinese AI regulation?
This is not an easy question to answer. On one hand, the AI Act’s stringent requirements, particularly for high-risk AI systems, could place a heavy burden on companies, especially smaller ones. These firms may find themselves expending considerable resources to comply with the regulation, which could slow down innovation and reduce the EU’s agility in AI development compared to the US and China. The US, with its more flexible regulatory approach, and China, with its state-driven model, might allow for faster experimentation and deployment of AI technologies, potentially giving them a competitive edge.
On the other hand, the AI Act could also foster a more trustworthy and ethical AI ecosystem within the EU, as originally intended by the EU Commission. By prioritizing transparency, data governance, and the protection of fundamental rights, the AI Act has the potential to set a global standard for responsible AI. This could attract businesses and consumers who value these principles, ultimately enhancing the EU’s competitiveness in the long term.
However, much will depend on the secondary legislation and how the Act is enforced by national authorities. If the rules are overly restrictive or unclear, they could indeed stifle innovation. Conversely, if they strike the right balance, the AI Act could position the EU as a leader in ethical AI, offering a competitive advantage in a world increasingly concerned with the societal impacts of technology.