
Building Consumer Confidence: The Vital Role of Trust and Safety in Artificial Intelligence

Kalia Ataliotou
Jun 28, 2023 / 12 min read
Artificial Intelligence (AI) is transforming industries and reshaping the way we work and interact. With the AI market predicted to reach a staggering $190 billion (USD) by 2025, its rapid advancement brings new challenges to the forefront while raising crucial regulatory and ethical concerns. A survey conducted by McKinsey revealed that consumers exhibit a high level of confidence in AI-powered products compared with products that rely mostly on humans, even though AI-powered products can carry higher risks, including bias against consumers. The survey also notes that in most cases consumers are not aware that they are interacting with an AI system. These concerns have sparked unease among regulators and international organizations, prompting a closer examination of the long-term implications of AI systems, particularly concerning issues like “automated bias.” As AI technology continues to progress, establishing a robust foundation of Trust and Safety becomes pivotal for bolstering user confidence. In this blog post, we delve into the origins of Trust and Safety and why it is critical in the realm of AI. We then explore prominent international laws and guidelines that emphasize ethical and trustworthy AI practices. Finally, we leverage these guidelines as a roadmap to offer recommendations on how AI companies can approach Trust and Safety when building their brand, to ensure both customer confidence and regulatory trust.

Origins of Trust and Safety and its significance on AI

The concept of Trust and Safety within the tech industry originates from companies’ introduction of self-regulatory business practices aimed at minimizing the risk of users being exposed to behavior that violates community guidelines. Among other objectives, Trust and Safety guidelines (or policies) have provided online platforms with a set of tools for managing content moderation. The growing emphasis on Trust and Safety in the tech industry can also be attributed to the increase in users’ concerns regarding privacy and data security. According to the survey mentioned above, consumers consider trustworthiness and data protection to be nearly as important as traditional factors like cost and delivery time when making purchasing decisions. Out of 13,000 participants, 40% responded that they had ceased doing business with a company after discovering that it failed to protect its customers’ data. Additionally, 14% of respondents had pulled their business from a company because they disagreed with its ethical principles, while 10% did so upon learning about a data breach, even without knowing whether their own data had been compromised.

It is also important to note that community guidelines, and Trust and Safety policies in general, have also emerged in recent years in the regulatory context within the discussion of platform neutrality. In the 1990s, when the internet was emerging in the United States, US lawmakers wanted to preserve individuals’ freedom of speech online without exposing internet services to significant legal liability. As a result, Section 230 of the Communications Decency Act was enacted in 1996. Section 230 made speakers responsible for their own online content and granted website publishers immunity from liability for third-party (i.e. user-generated) content. As a result, tech companies have had little legal liability for online content published by third parties on their platforms, as they have largely been considered neutral platforms.

In recent years, however, regulations like the EU’s Digital Services Act (DSA), which introduces provisions on intermediary liability, and state-level challenges to Section 230 in the US have called the concept of platform neutrality into question. Such a paradigm shift could add a stronger legal-compliance dimension to Trust and Safety. While this happened slowly over time for content moderation, in the case of AI it is happening in a more robust way, with governments and institutions keen to regulate this space (e.g. the EU’s AI Act and AI Liability Directive) before it matures. This raises questions about where AI will be positioned within Trust and Safety, and the extent to which it will be governed primarily by laws, self-regulatory practices, or both. Hence, when thinking about Trust and Safety, there are two components to consider: self-regulatory elements and legal requirements. Companies that prioritize both not only demonstrate their commitment to ethical and responsible practices but also enhance their reputation, boost consumer and regulatory confidence, and differentiate themselves from competitors.

Overview of key international AI regulations and guidelines

International AI guidelines can provide an initial roadmap of the direction in which regulators are heading, and can also help AI platforms navigate how to implement Trust and Safety policies.

  • The European Commission’s Ethics Guidelines for Trustworthy AI: The European Commission published guidelines in 2019, with the aim of offering guidance on fostering and securing ethical and robust AI. The guidelines outline seven key requirements for trustworthy AI: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination, and fairness; (6) environmental and societal well-being; and (7) accountability. According to the EU’s AI High-Level Expert Group (HLEG), Trustworthy AI has three components: (1) it should be lawful, ensuring compliance with all applicable laws and regulations; (2) it should be ethical, demonstrating respect for, and ensuring adherence to, ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.
  • The European Union’s AI Act & AI Liability Directive: The EU institutions are close to adopting the EU AI Act, which aims to establish a regulatory framework for AI based on the risk category that a given AI technology falls into – including prohibited AI practices (e.g. remote biometric identification in public spaces), high-risk AI subject to stricter requirements (e.g. HR applications that screen candidates), and general AI applications that would follow lighter requirements. Transparency obligations, requirements to provide sufficient information to users, human oversight, and data governance are central to the proposal. In parallel, the EU is also working on an AI Liability Directive to address damages caused by AI systems and provide better protection for victims. However, these two legislative instruments do not sufficiently cover the rise of generative AI and natural language processing (NLP) technologies. To address this, the EU plans to launch a so-called AI Pact in the form of globally coordinated industry self-regulation.
  • The United States Government’s Blueprint for an AI Bill of Rights: The AI Bill of Rights was developed to establish guidelines for the design, use, and deployment of automated systems. The framework identifies five principles that should be considered when developing automated systems in order to protect the American public: (1) Safe and Effective Systems – systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective; (2) Algorithmic Discrimination Protections – take proactive measures to protect individuals from algorithmic discrimination and to use and design systems in an equitable way; (3) Data Privacy – automated systems should seek users’ permission, and any consent requests should be brief and understandable in plain language; (4) Notice and Explanation – users should know that an automated system is being used and understand how and why it contributes to outcomes that impact them; (5) Human Alternatives, Consideration, and Fallback – users should be able to opt out of automated systems in favor of a human alternative, where appropriate.
  • The Organization for Economic Cooperation and Development (OECD) Principles on Artificial Intelligence: The OECD principles emphasize the importance of AI systems being inclusive, transparent, and accountable. Specifically, the OECD notes that AI should benefit individuals and society, promoting inclusive economic growth and well-being. In addition, AI systems should be reliable, transparent, accountable, and auditable. Similar to the EU’s approach, the OECD also emphasizes the need to build AI systems based on trust. The latest text of the EU AI Act is aligned with the OECD’s definition of what is considered AI.

Overall, the guidelines highlight the importance of protecting individuals’ privacy and ensuring proper data governance practices when developing and deploying AI systems. Further, the guidelines emphasize the need to ensure that AI systems do not perpetuate discrimination and bias. Both the EU and OECD guidelines emphasize the need to build trust in AI systems. Finally, the guidelines recognize the need for AI systems to adhere to ethical principles and values, demonstrating respect for individuals and society at large.

Building Trust and Safety

Based on the above regulations and guidelines, in this section we outline some of the key areas where AI platforms could work to strengthen their Trust and Safety policies. In our approach, we suggest both self-regulatory elements – such as education, content moderation, and advisory boards – and legal requirements such as data protection and user privacy.

  • Ethical Guidelines: Establish clear ethical guidelines that demonstrate the company’s commitment to responsible and ethical AI practices. For example, Google’s AI Principles outline its commitment to avoiding the creation or reinforcement of unfair bias and to ensuring that AI is socially beneficial and accountable. Google is a good example of a company that has drawn on international principles, such as those laid out above, to guide its approach to AI applications. Its principles illustrate the company’s commitment to the safety of its customers and acknowledge the dangers of AI if misused, which demonstrates accountability: “While we are optimistic about the potential of AI, we recognize that advanced technologies can raise important challenges that must be addressed clearly, thoughtfully, and affirmatively.” Similarly, Microsoft has published Responsible AI Principles to define product development requirements for responsible AI. Microsoft’s principles align closely with the international guidelines described above: they state that the company aims to ensure that its AI systems are fit for purpose, in the sense that they provide valid solutions for the problems they are designed to solve. This is language often used by the European Commission when discussing its digital vision, and it reflects Microsoft’s commitment to regulatory compliance.
  • Advisory: Creating an advisory board is one way to augment Trust and Safety within a company, enabling the company to gather additional input from across the community using its services, shape its public perception, and provide checks and balances. For example, in 2020 Meta established an external Oversight Board to weigh in on specific content moderation decisions made by Meta. The purpose of the board is to promote free expression by making principled, independent decisions regarding content on Facebook and Instagram and by issuing recommendations on relevant Meta content policies. It serves as a check on Meta’s own decision-making.
  • Privacy Policy and Data Governance: Implement strong data governance practices to ensure the responsible handling of data. Ensure compliance with applicable privacy laws and regulations; to do so, implement privacy by design so that users’ privacy is safeguarded from day one. Also provide users with resources to understand how their data is used and where it is stored. In addition, as laid out in the US AI Bill of Rights, include consent requests for any data collection and ensure that those requests are brief and understandable in plain language.
  • Bias Mitigation: Address and mitigate biases in AI algorithms to avoid discriminatory outcomes and to design systems in an equitable way. For example, IBM’s AI Fairness 360 toolkit (AIF360) provides a comprehensive set of algorithms and resources to detect and reduce bias in AI models, promoting fairness and inclusivity. AIF360 is an open-source software toolkit that can help detect and remove bias in machine learning models: it enables developers to use state-of-the-art algorithms to regularly check for unwanted bias entering their machine learning pipelines and to mitigate any biases that are discovered (a brief sketch of what such a check might look like appears after this list).
  • Transparency and Accountability: Ensure that AI systems are transparent in their decision-making processes. Provide clear explanations and insights into the factors influencing AI-driven outcomes. For example, OpenAI’s ChatGPT reminds users: “As an AI language model, I don’t have real-time access to information beyond my September 2021 knowledge cutoff.” Further, ensure the proper functioning, throughout their lifecycle, of the AI systems you design, develop, operate, or deploy, in accordance with applicable regulatory frameworks, as highlighted in the OECD’s AI Principles.
  • User Controls: Empower users by giving them control over their data and AI interactions. Offer opt-out options, privacy settings, and user-friendly interfaces that allow individuals to manage their AI experiences. This empowers users to create an environment in which they feel safe, based on their own privacy preferences.
  • Learning and Development: Educate users about the capabilities and limitations of AI. Provide learning materials on how to use the technology, as well as training for new users, to help consumers become familiar with your product. For example, OpenAI’s education page provides examples of how educators are exploring teaching and learning with tools like ChatGPT. OpenAI also educates users on the limitations of the AI system and explains that generated content is not always factually accurate.

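To make the bias mitigation point above more concrete, the sketch below shows one way a team might use the open-source AIF360 toolkit to measure bias in a dataset and reweigh it before training a model. It is a minimal illustration rather than a definitive implementation: the toy hiring data, the column names, and the choice of “sex” as the protected attribute are assumptions made for this example, not recommendations from the toolkit or from the guidelines discussed here.

```python
# Minimal sketch of a fairness check with IBM's open-source AIF360 toolkit
# (pip install aif360). The toy hiring data and column names below are
# illustrative assumptions, not real data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy dataset: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the binary outcome (1 = favorable).
df = pd.DataFrame({
    "sex":        [1, 1, 1, 1, 0, 0, 0, 0],
    "experience": [5, 3, 6, 2, 5, 3, 6, 2],
    "hired":      [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias before mitigation: a disparate impact close to 1.0 and a
# statistical parity difference close to 0.0 indicate a fairer dataset.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# One mitigation option: reweigh training examples so that favorable outcomes
# are balanced across groups before a model is trained on the data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighed = rw.fit_transform(dataset)
print("Instance weights after reweighing:", dataset_reweighed.instance_weights)
```

A check like this can be repeated at several points in the pipeline – on the raw training data, on model predictions, and again after mitigation – so that bias is monitored continuously rather than assessed only once.
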
In conclusion, Trust and Safety is crucial in the realm of AI, playing a vital role in consumer confidence and overall company reputation. The origins of Trust and Safety in the tech industry stem from the need for self-regulation and addressing privacy concerns. However, as AI advances and regulatory scrutiny intensifies, legal compliance is becoming an integral part of Trust and Safety. International laws and guidelines, such as the European Commission’s Ethics Guidelines, the EU AI Act, the EU AI Liability Directive, the future AI Pact, the US Government’s AI Bill of Rights, and the OECD AI Principles, offer valuable frameworks for AI companies to navigate the complex landscape of ethical and trustworthy AI practices. By prioritizing self-regulatory elements and legal requirements, companies can establish clear ethical guidelines, implement robust data governance practices, mitigate bias in algorithms, ensure transparency in decision-making, provide user controls, offer education and advisory resources, and demonstrate accountability. These strategies not only enhance consumer confidence but also differentiate AI companies and build a reputation for responsible and reliable AI applications.
