Responsible AI / Panelist

Jeff Easley

Responsible AI Institute

United States

Jeff Easley is general manager of the Responsible AI Institute, where he helps organizations harness artificial intelligence in a safe, trustworthy, and sustainable way. He was previously with Goldman Sachs, where he led the development of a SaaS-based retail brokerage platform designed to be embedded within one of the world’s largest technology companies. Before that, he drove transformative governance, risk, and compliance programs and created digital products at USAA. He also cofounded USAA Labs, an enterprisewide product innovation organization, and led the RegTech Initiative, spearheading the use of machine learning and natural language processing to solve risk and compliance challenges.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree

“Several factors contribute to this lack of alignment:

Differing national priorities and regulations: Different countries have different priorities and approaches to AI regulation. Some are more focused on privacy (like GDPR in the EU), while others prioritize innovation or national security. This makes it challenging for global companies to implement a uniform RAI strategy.

Varying definitions of RAI: There isn’t a universally agreed-upon definition of what constitutes responsible AI. Different organizations and countries have their own interpretations, which can lead to confusion and inconsistency.

The evolving nature of AI: AI is a rapidly evolving field, and new ethical and responsibility challenges emerge constantly. Regulations and standards often struggle to keep up with these changes.

The lack of enforceable global standards: While there are international guidelines like the OECD AI Principles or the EU’s Ethics Guidelines for Trustworthy AI, these are not legally binding. Without enforceable global standards, it’s difficult to ensure consistent implementation.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree

“Customers have a right to know when AI is being used in ways that affect their privacy, data security, or personal experiences. Informed consent is crucial, particularly in sectors like health care, finance, and hiring, where AI can have significant impacts on individuals’ lives. Further, requiring disclosures in specific scenarios will serve to hold companies accountable for the ethical use of AI. This can encourage responsible AI development and deployment, mitigating risks such as bias, discrimination, and unintended consequences.”