Building Robust RAI Programs as Third-Party AI Tools Proliferate
Findings from the 2023 Responsible AI Global Executive Study and Research Project
In just a few months since its release, OpenAI’s ChatGPT tool has catapulted the capabilities, as well as the ethical challenges and failures, of artificial intelligence into the spotlight. Numerous examples have emerged of the chatbot fabricating stories, including falsely accusing a law professor of sexual harassment and implicating an Australian mayor in a fake bribery scandal, prompting the first threatened defamation lawsuit against an AI chatbot.1 In April, Samsung made headlines when three of its employees accidentally leaked confidential company information, including internal meeting notes and source code, by inputting it into ChatGPT.2 That news prompted many companies, such as JPMorgan and Verizon, to block access to AI chatbots from corporate systems.3 In fact, nearly half of the companies polled in a recent Bloomberg survey reported that they are actively working on policies for employee chatbot use, suggesting that many businesses were caught off guard by these developments.4
Indeed, the fast pace of AI advancements is making it harder to use AI responsibly and is putting pressure on responsible AI (RAI) programs to keep up. For example, companies’ growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI — algorithms (such as ChatGPT, DALL-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio — is exposing them to new commercial, legal, and reputational risks that are difficult to track.5 In some cases, managers may be entirely unaware that employees or others in the organization are using such tools — a phenomenon known as shadow AI.6 As Stanford Law CodeX fellow Riyanka Roy Choudhury puts it, “RAI frameworks were not written to deal with the sudden, unimaginable number of risks that generative AI tools are introducing.”
This trend is especially problematic for organizations whose RAI programs focus primarily on the AI tools and systems they design and develop internally.