Responsible AI / Panelist

Riyanka Roy Choudhury

Stanford University

United States

Riyanka Roy Choudhury is an experienced responsible AI and legal technologist. She serves as a legal and policy consultant, advising startups worldwide on implementing responsible AI programs and legal tech in their organizations. Riyanka is a CodeX fellow at the Stanford Center for Legal Informatics, where she develops legal tech and AI automation applications to simplify the legal landscape. Concurrently, she engages in public speaking and writing, and contributes to the formation of an ethical AI community. She also coleads the RegTrax and Machine-Generated Legal Documents projects at Stanford.

Choudhury has received recognition for her contributions, including an award as part of the Meta Ethics in AI Research Initiative for the Asia Pacific region in 2020 and the IBM AI Ethics Award in 2021. In 2022, she was acknowledged as one of the 100 Brilliant Women in AI Ethics. Currently, Riyanka is affiliated with NUS Singapore’s Center for Technology, Robotics, AI, and the Law, where she is actively working on AI ethics with a focus on explainability and fairness, as well as AR/VR privacy.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree
“Based on my personal experience, I disagree with the notion that there is sufficient international alignment on RAI standards for global implementation. The regulatory landscape is fragmented, with varying regulations and cultural interpretations. While emerging themes like transparency and fairness exist, their practical application differs widely across jurisdictions. The EU AI Act and NIST guidelines often impose conflicting requirements, creating compliance challenges for multinational organizations.

Translating abstract principles into concrete policies remains difficult, especially given the diversity of cultural contexts. AI innovation continues to outpace regulatory efforts, rendering many standards outdated. Projects like the Coalition for Content Provenance and Authenticity (C2PA) could serve as models for building greater consistency in RAI principles. In this fragmented environment, global organizations face significant hurdles in implementing uniform RAI practices. Risk assessment protocols and explainable AI techniques provide valuable tools but are not complete solutions. While global uniformity is distant, the emerging convergence of standards offers a foundation for RAI implementation.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree
“AI is advancing rapidly, making transparency and fair use crucial for fostering innovation and trust. Organizations should disclose AI and generative AI use in their products and services, especially in sectors like health care, workplace safety, and financial services. Laws such as California’s privacy act address consumer data privacy and AI use, aiming to protect rights and ensure responsible AI deployment. Companies must adhere to these regulations and adopt responsible AI principles. Guidelines for identifying AI-generated media are essential. Any AI-generated content that closely resembles human-created media, such as realistic photos and videos, should be disclosed and watermarked. While generative AI offers creative opportunities and democratizes content creation, it also raises concerns about intellectual property theft, deepfakes, and misinformation. However, universal disclosure may not always be practical, as AI is now integral to many workflows, and not all applications necessitate disclosure. Organizations must prioritize trust and ethics in AI disclosures, as these affect consumer behavior. Balancing transparency with business interests will help companies preserve public trust.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Strongly disagree
“The rapid expansion of AI, particularly generative AI, has exceeded the operational capabilities of most organizations. As a result, organizations fall short in formulating and implementing AI risk mitigation strategies. A significant obstacle lies in comprehending and quantifying the potential risks associated with AI, particularly within smaller organizations. Although they are hiring staff for risk mitigation, failing to integrate it at the design stage leaves notable AI risks. It is imperative to either adapt existing AI systems or build new ones with risk management in mind.

Creating a comprehensive inventory of AI-related risks is essential. The integration of risk management into existing management cycles is pivotal. A well-defined road map must be developed to incorporate risk management measures during the initial stages of building AI. There is an increasing acknowledgment of the necessity for responsible AI and risk mitigation. RAI seeks to instill trust and standardize practices across organizations. Embracing RAI principles will assist businesses utilizing AI in understanding these risks, not just for compliance but also for strategic success.”
Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Agree
“First, an organization should aim to balance centralized management of RAI with flexible adaptation across multiple functions and the needs of different business units. Second, negotiations between leadership and business units are bound to take place and are needed at times, but with the right functional design, organizations can align the needs of smaller business units with the vision of the leadership. Finally, leadership should assess, analyze, and create an integrated organizational structure to manage RAI across multiple functions and business units. Hence, the organization should build a centralized repository to better operate and manage RAI.”
Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Strongly agree
“New generative AI tools, like ChatGPT, Bing Chat, Bard, and GPT-4, create original images, sounds, and text using machine learning algorithms trained on large amounts of data. Since existing RAI frameworks were not written to deal with the sudden, unimaginable number of risks that generative AI tools are introducing into society, the companies developing these tools need to take responsibility by adopting new AI ethics principles. A new AI governance system would help manage the growth of these tools and mitigate potential risks right at the beginning of this new era of generative AI.

AI is the most powerful technology of this generation for uncovering complex data and processes across industrial sectors, and it will bring about a knowledge revolution that simplifies the functioning of existing operating systems. Responsible AI will, by design, help companies implement AI policies at the core, while they’re building the technologies, which in turn will help control the spread of stereotypes. RAI will also help them make a mindful effort to correctly anticipate the potential repercussions of technological innovations. Building AI ethically and responsibly should be the priority of every company developing, adapting, and using these new generative AI tools.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Strongly agree
“Third-party AI tools are frameworks, offered by companies or open-source communities, that abstract the core inner workings of an AI model and provide APIs for developers to build AI applications aimed at end users.

One of the main focuses of RAI programs is mitigating the risks of integrating third-party AI tools. Since the majority of end users rely on products, services, or applications that have AI at their core, it is the primary responsibility of RAI programs to ensure that this AI is deployed responsibly. Building neural networks is a lengthy process, so integrating third-party AI tools speeds development and helps optimize networks. RAI principles are built systematically to remove AI bias, to make products more inclusive, ethical, and accountable, and to maintain trust so that the final AI product works well for all end users.”
Statement: Executives usually think of RAI as a technology issue.
Response: Agree
“Responsible AI is an ongoing process, not a one-time technical issue. Executives have an obligation to own the creation of trustworthy AI for their companies. When top-level management implements RAI principles, executives can set the right tone from the start, and engineers will ethically embed RAI principles as they design and build AI technologies. Many company executives do consider RAI a strategic priority, but it has to be owned by all executives across industries. Leaders should be given a solid working knowledge of AI’s development and use so they can prevent potential ethical issues. Executives should understand that, along with big data, knowledge of RAI principles can help brands avoid reputational harm and prevent damage to society, especially when AI makes ethical judgments in high-stakes applications. It’s important to go beyond written RAI principles and implement them in the real world. Executives need to recognize that, moving forward, implementing RAI principles will create a competitive advantage for companies with strong AI ethics.”
Statement: Mature RAI programs minimize AI system failures.
Response: Agree
“Companies are investing in and relying on AI to increase efficiency, so they need to understand that AI failures can affect not just individuals but millions of people. Mature responsible AI (RAI) programs have advanced, reliable policies for the efficient use of AI systems, which gives users trust and confidence in these systems. However, it is also important to understand that AI algorithms, and the programmers who write them, have a bigger role to play in mitigating the risk of system failures. Many AI-related problems are also unpredictable, since the neural networks crucial to higher-level pattern recognition and decision-making can themselves break down. In accounting for such failures, mature RAI programs and AI explainability provide a path to detecting and preventing these issues in current and future systems.”
Statement: RAI constrains AI-related innovation.
Response: Strongly disagree
“Responsible AI provides guidelines to mitigate AI risks. It pushes innovation toward a paradigm shift in AI algorithms that reduces the effects of replicating human decision-making at the core of machine learning. It drives growth and brings social responsibility to organizations’ AI-related innovations. In my experience, conventional AI methods overlook the need for complete information to deal with complexity, a need that responsible AI accounts for in its guidelines. Because it plays this constructive role, responsible AI frameworks and systematic approaches will help machine learning reach its full potential in future innovations. Backed by empirical studies, research organizations have identified and described key areas of responsible AI as a set of fundamental principles that are important when developing AI innovations. I strongly believe that when adopted universally, responsible AI can open up new opportunities, leading to fairer and more sustainable AI.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“Responsible AI is an economic and social imperative. It’s vital that AI is able to explain the decisions it makes, so it is important for companies to include responsible AI as part of the top management agenda. A regular agenda item at leadership meetings will help ensure fair AI throughout the supply chain. It will also make it easier for businesses creating or dealing with AI technologies to tackle the unique governance challenges that AI creates, such as privacy issues and the bias and discrimination that can arise from the data and complexity of ML and computational systems.”