Responsible AI / Panelist

Simon Chesterman

National University of Singapore

Singapore

Simon Chesterman is the David Marshall Professor and vice provost at the National University of Singapore and senior director of AI governance at AI Singapore. He is also editor of the Asian Journal of International Law and copresident of the Law Schools Global League. Previously, he was global professor and director of the New York University School of Law’s Singapore program, a senior associate at the International Peace Academy, and director of U.N. relations at the International Crisis Group. Chesterman has taught at several universities and is the author or editor of 21 books.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Strongly disagree

“There are now well over a hundred standards that have been adopted by, among others, the International Telecommunication Union (ITU), the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE).

These are worthy initiatives, but there is no common language across the bodies, and many terms routinely used with respect to AI — fairness, safety, transparency, to pick some obvious ones — lack agreed-upon definitions. Some standards have been adopted for narrow technical or internal validation purposes; others aim to incorporate broader ethical principles.

In such an environment, there is a real risk that organizations will simply pick the standards they like and follow them.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Strongly agree

“Transparency is a means, not an end. Its purpose is to enable informed decision-making and risk assessment. In many situations, users want to know whether they are dealing with a person or a bot, or whether a picture was drawn by a human hand or generated by an algorithm. Transparency also builds trust, the lack of which is routinely acknowledged to be one of the major barriers to adoption and acceptance of new technologies in general and AI in particular.

At present this is, for the most part, a simple yes-or-no question. Increasingly, however, AI-assisted decision-making will blend human and machine. Some chatbots start with automated responses (human out of the loop) for basic queries, move through suggested responses that are vetted by a human (over the loop), and escalate to direct contact with a person (in the loop) for unusual or more complex interactions. Though decisions based solely on automated processing are going to increase, machine-assisted decisions will skyrocket. Much as passengers in autonomous vehicles need clarity as to who is meant to be holding the wheel, humans interacting with AI systems should be aware of whom or what they are dealing with.”
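The tiered escalation described above can be made concrete with a short sketch. The Python below is illustrative only: the Query fields, the route function, and the thresholds are hypothetical and not drawn from any system referenced here; it simply shows how routine queries might be answered automatically (out of the loop), borderline ones drafted for human vetting (over the loop), and unusual or complex ones handed to a person (in the loop).

# Illustrative sketch of the three oversight tiers; all names and cutoffs are hypothetical.
from dataclasses import dataclass
from enum import Enum

class OversightTier(Enum):
    OUT_OF_THE_LOOP = "automated reply, no human review"
    OVER_THE_LOOP = "AI-drafted reply, vetted by a human"
    IN_THE_LOOP = "handled directly by a person"

@dataclass
class Query:
    text: str
    complexity: float      # 0.0 (routine) to 1.0 (novel), however the system scores it
    flagged_unusual: bool  # e.g., tripped an anomaly or policy check

def route(query: Query) -> OversightTier:
    """Pick an oversight tier for a query; the cutoffs here are purely illustrative."""
    if query.flagged_unusual or query.complexity > 0.8:
        return OversightTier.IN_THE_LOOP
    if query.complexity > 0.4:
        return OversightTier.OVER_THE_LOOP
    return OversightTier.OUT_OF_THE_LOOP

# Disclosing route(q).value to the user is one way to make the division of labor transparent.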
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree

“Nothing focuses the mind so much as the prospect of being hanged in the morning. Facing significant penalties, most organizations are gearing up for the entry into force of the EU’s grand experiment in regulating AI. The problem is that there remains some uncertainty about precisely how wide and how deep these requirements might be. In particular, many organizations might not think of themselves as “AI companies” and yet are using technology that will be covered by the act. Established players with compliance teams should be fine, but European small and medium-size enterprises and smaller international firms doing business in the EU may be in for a nasty surprise if they discover they are covered. It will be significantly messier than the entry into force of the GDPR, which at least related to a reasonably well-defined set of activities.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

“The proliferation of discussion, legislation, and new institutes focused on AI safety speaks to the seriousness of purpose that has arrived in balancing desire for AI’s benefits against mitigating or minimizing its risks. Yet there is an inverse relationship between those with leverage and those with interest. The most enthusiastic proponents of serious measures to address AI risk are those furthest from its economic advantages. For companies, the fear of missing out often dominates. Until there is a major incident, they will continue to find it hard to price in the costs of AI fallibility.”
Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Disagree

“The gold rush around generative AI has led to a downsizing of safety and security teams in tech companies, and a shortened path to market for new products. This is driven primarily by the perceived benefits of AI, but risks are not hard to see. In the absence of certainty about who will bear the costs of those risks, fear of missing out is triumphing — in many organizations, if not all — over risk management. A key question for AI governance and ethics in the coming years is going to be structural: Where in the organization is AI risk assessed? If it is left to IT or, worse, marketing, it will be hard to justify investments in RAI. I suspect it will take a few major scandals to drive a realignment, analogous to some of the big data breaches that elevated data protection from “nice to have” to “need to have.””
Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Neither agree nor disagree

“Of course, it depends. AI is increasingly going to be deployed across entire business ecosystems. Rather than being confined to an IT department, it will be more like finance: Though many organizations have chief financial officers, responsibility for financial accountability isn’t limited to that office. Strategic direction and leadership may reside in the C-suite, but operationalizing RAI will depend on those deploying AI solutions to ensure appropriate levels of human control and transparency so that true responsibility is even possible.”
Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Strongly disagree

“Any RAI program that is unable to adapt to changing technologies wasn’t fit for purpose to begin with. The ethics and laws that underpin responsible AI should be, as far as possible, future-proof — able to accommodate changing tech and use cases. Moreover, generative AI itself isn’t the problem; it’s the purposes for which it is deployed that might cross those ethical or legal lines.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Strongly disagree

“We’re still at the early stages of AI adoption, but one of the biggest problems is that we don’t know what we don’t know. The opacity of machine learning systems in particular makes governance of those black boxes challenging for anyone. That can be exacerbated by the plug-and-play attitude adopted with respect to many third-party tools.”
Statement: Executives usually think of RAI as a technology issue.
Response: Agree

“Responsible AI is presently seen by many as “nice to have.” Yet, like corporate social responsibility, sustainability, and respect for privacy, RAI is on track to move from being something for IT departments or communications to worry about to being a bottom-line consideration — a “need to have.””
Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree

“RAI focuses more on what AI should do than on what it can do. But if an organization is intentional about its use of AI systems, its adoption of human-centered design principles, and its testing to ensure that those systems do what they are supposed to, its overall use of AI is going to be more effective as well as more legitimate.”
Statement: RAI constrains AI-related innovation.
Response: Disagree

“AI is such a broad term that requirements that it be used “responsibly” will have minimal impact on how the fundamental technology is developed. The purpose of RAI is to reap the benefits of AI while minimizing or mitigating the risk — designing, developing, and deploying AI in a manner that helps rather than harms humans. Arguments that this constrains innovation are analogous to saying that bans on cloning humans or editing their DNA constrain genetics.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly disagree

“One of the longstanding concerns about corporate social responsibility was that it would locate questions of accountability in the marketing department rather than the legal or compliance department. Over the years, CSR has become a more serious enterprise, with meaningful reporting and targets. We now see larger ESG obligations and “triple bottom line” reporting. But all this is distinct from responsible AI. There may be overlaps, but responsible AI involves narrower targets to develop and deploy AI in a manner that benefits humanity. A particular challenge is the many unknown unknowns in AI, meaning that acting responsibly may sometimes require self-restraint under conditions of uncertainty rather than adherence to externally set metrics.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Agree

“Not every industry will be transformed by AI. But most will be. Ensuring that the benefits of AI outweigh the costs requires a mix of formal and informal regulation, top-down as well as bottom-up. Governments will be a source of regulations with teeth. As industries have discovered in the context of data protection, however, the market can also punish failures to manage technology appropriately.”