Responsible AI / Panelist

Sanjay Sarma

Massachusetts Institute of Technology

United States

Sanjay Sarma is president and CEO of the Asia School of Business in Kuala Lumpur, Malaysia. He is also a professor of mechanical engineering at MIT, where he was previously vice president for open learning, overseeing initiatives such as OpenCourseWare, MITx, and the Jameel World Education Lab. He cofounded the Auto-ID Center at MIT, where he developed several key technologies behind the Electronic Product Code RFID standards, and was the founder and CTO of OATSystems. He serves on the boards of Rekor, Aclara Resources, GS1 US, and several startups. Sarma has a bachelor’s degree from the Indian Institute of Technology, a master’s degree from Carnegie Mellon, and a doctorate from the University of California, Berkeley.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree
"The standards are all over the place. The EU now has an enacted law. The U.S. has guidelines. India, the world's most populous country, does not have a law. Precisely how to implement AI responsibly is definitely up in the air."

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Disagree
"There are so many ways AI can be used: internally, in the product, or to deliver the product, for example. If a product uses AI directly, such as in self-driving, then yes. Or if a medical diagnostic relies on AI rather than a definitive measure, then perhaps yes. But if an online tool uses AI to suggest other products, then perhaps not. I am generally hesitant to mandate declarations when the specific harms they are intended to address are not explicitly listed."

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree
"The regulations in Europe are different from those in the rest of the world. It's a bit of a Wild West. Uptake is spotty. So companies will likely work elsewhere rather than comply until the situation becomes clearer."

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Strongly disagree
"I am finding that most organizations are confused about AI. Is it an opportunity? Is it a risk? If it is a risk, is it a competitive risk? Is it an existential risk? Is there a cybersecurity risk from AI? Is there a risk from our employees using it? (Should we let them?) Is there a risk from our vendors using it? For example, if they generate code using AI, is it bulletproof? What about the risk from our own experiments with AI, in a chatbot, say? Does the potential upside create a downside risk of the chatbot going rogue? See, for example, the article "Air Canada Ordered to Pay Refund Its AI Chatbot Mistakenly Offered a Customer." The massive range of risks seems to be leading to analysis paralysis. Where to start? How to start? Where is the low-hanging fruit?

All this means that AI risk has not been properly assessed. A good measure of whether a risk has been assessed is whether it shows up in a risk-and-audit heat map (impact versus likelihood). I am not seeing that happen. Companies have not yet captured the risk landscape well enough to put it into such a heat map."