MIT Sloan Management Review and Boston Consulting Group assembled an international panel of more than 20 industry practitioners, academics, researchers, and policy makers to share their views on core issues pertaining to responsible AI. Over the course of five months, we will ask the panelists to answer a question about responsible AI and briefly explain their response. Readers can see all panelist responses and comments in the panel at the bottom of each article and continue the discussion in AI for Leaders, a LinkedIn community designed to foster conversation among like-minded technology experts and leaders.
About the Research
In the spring of 2023, MIT Sloan Management Review and Boston Consulting Group fielded a global executive survey to learn the degree to which organizations are addressing responsible AI.
We focused our analysis on 1,240 respondents representing organizations reporting at least $100 million in annual revenues. These respondents represented companies in 59 industries and 87 countries. The total includes responses from two separately fielded surveys: one in Africa, which yielded 77 responses, and a localized version in China, which yielded 201.
We defined responsible AI as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”
To quantify what it means to be a responsible AI Leader, the research team conducted a cluster analysis on three numerically encoded survey questions: “What does your organization consider part of its responsible AI program? (Select all that apply.)”; “To what extent are the policies, processes, and/or approaches indicated in the previous question implemented and adopted across your organization?”; and “Which of the following considerations do you personally regard as part of responsible AI? (Select all that apply.)” The first and third questions were recategorized into six options each to ensure equal weighting of the organizational and personal perspectives. The team then used an unsupervised machine learning algorithm, K-means clustering, to identify naturally occurring clusters based on the scale and scope of each organization’s RAI implementation. K-means requires the number of clusters (K) to be specified in advance; this choice was verified through exploratory analysis of the survey data and direct visualization of the clusters via UMAP. We defined an RAI Leader as an organization in the most mature of the three maturity clusters identified through this analysis. Scale is the degree to which RAI efforts are deployed across the enterprise (such as ad hoc, partial, or enterprisewide). Scope comprises both the elements of the RAI program (such as principles, policies, or governance) and the dimensions the program covers (such as fairness, safety, and environmental impact). Leaders were the most mature on both scale and scope.
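To make the clustering step concrete, here is a minimal sketch of K-means applied to numerically encoded survey data. This is not the research team's actual pipeline: the data is synthetic, the two-dimensional "scale" and "scope" encoding, the blob locations, and the explicit starting centroids are all illustrative assumptions, and the real analysis used six recategorized options per question plus UMAP for verification.

```python
import numpy as np

def kmeans(X, k, init=None, n_iter=100, seed=0):
    """Plain K-means: assign each point to its nearest centroid, then
    recompute centroids as cluster means, until assignments stabilize."""
    rng = np.random.default_rng(seed)
    if init is None:
        # Default: initialize from k randomly chosen data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
    else:
        centroids = init.astype(float)
    for _ in range(n_iter):
        # Distance from every point to every centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical encoded respondents: each row is one organization, with a
# "scale" score (implementation depth) and a "scope" score (program breadth).
rng = np.random.default_rng(42)
low = rng.normal([1, 1], 0.3, size=(40, 2))    # ad hoc efforts
mid = rng.normal([3, 3], 0.3, size=(40, 2))    # partial implementation
high = rng.normal([5, 5], 0.3, size=(40, 2))   # enterprisewide programs
X = np.vstack([low, mid, high])

# Illustrative fixed starting centroids keep the demo deterministic.
labels, centroids = kmeans(X, k=3, init=np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 6.0]]))

# The "Leader" cluster is the one with the highest combined scale + scope.
leader_cluster = centroids.sum(axis=1).argmax()
print("Leader-cluster size:", int((labels == leader_cluster).sum()))  # → 40
```

Because K must be chosen up front, an analysis like this is typically checked against the data, for example by comparing within-cluster variance across candidate values of K or, as in this research, by visualizing the clusters with a dimensionality-reduction technique such as UMAP.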
Additionally, the team conducted three qualitative interviews with industry thought leaders and assembled a panel of 22 RAI thought leaders from industry, policy development, and academia, who were polled on key questions multiple times over the course of the research to inform it.
About the Authors
David Kiron is an editorial director at MIT Sloan Management Review and coauthor of the book Workforce Ecosystems: Reaching Strategic Goals With People, Partners, and Technology (MIT Press, 2023). Steven Mills is a managing director and partner at Boston Consulting Group, where he serves as the chief AI ethics officer.