Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a professor of marketing at The Wharton School of the University of Pennsylvania. His research focuses on the digital economy and the impact of analytics and algorithms on consumers and society. Hosanagar is a 10-time recipient of MBA and undergraduate teaching excellence awards at The Wharton School. He is a serial entrepreneur who most recently founded Jumpcut Media, a startup using data to democratize opportunities in film and TV. Hosanagar has served as a department editor at the journal Management Science and, previously, as a senior editor at the journals Information Systems Research and MIS Quarterly.
Learn more about Hosanagar’s approach to AI via the Me, Myself, and AI podcast.
Voting History
Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Strongly disagree
“I don’t think we have national consensus, let alone international consensus. Consider, for example, SB 1047 in California versus the few RAI standards that exist in other states. Internationally, the EU, China, and the U.S. all have different requirements today. At the federal level, the U.S. has felt it’s too soon to set stringent AI standards, so the U.S. has taken a wait-and-watch approach relative to the EU, which has been quicker to regulate AI.

Some core RAI principles are emerging across different stakeholders, including regulators and think tanks, such as the need for explanations or bias testing for AI use in high-stakes settings. But a lot of it is still murky. The debates around SB 1047 illustrate that we have not converged yet.”

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree
“Given the potential for misuse of AI (such as using customer data to train AI without permission) and the potential for biases in AI, disclosure is important. In fact, I believe regulation should require disclosure in some settings, such as loan approvals, recruiting, and policing. In other settings, while disclosure need not be required by regulation, it is important from a consumer trust standpoint to disclose AI use, covering the training data for models, the typical output of these models, and their purpose.

A good case study on this is Zoom’s use of AI trained on customer conversations, the subsequent pushback from customers, and the company’s reevaluation of its AI disclosure policy.”

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Neither agree nor disagree

“Organizations are treading cautiously and expanding risk management capabilities, especially with regard to risks tied to data privacy and confidentiality. This is due to prior waves of regulation, such as GDPR, that established privacy best practices. However, a range of risks remains, such as model collapse due to the explosion of synthetic data or prompt injection attacks. These risks and their implications are not well understood, never mind covered by well-established best practices.”