
Douglas Hamilton

Nasdaq

United States

Douglas Hamilton heads AI research and engineering at Nasdaq. He joined Nasdaq in 2017 as a data scientist and has developed solutions that apply rapid adaptation, reinforcement learning, and efficient-market principles to predictive control problems. Hamilton's notable achievements at Nasdaq include establishing the first AI-powered strike listing system, bringing generative AI to board governance, publishing more than a dozen patents, and launching the first AI-powered order type approved by the Securities and Exchange Commission. Previously, Hamilton was the lead data scientist for an advanced manufacturing analytics group at Boeing Commercial Airplanes and built customer relationship management systems at Fast Enterprises. He is a U.S. Air Force veteran and sits on the advisory board of The Data Science Conference. Hamilton holds a Master of Science in systems engineering from MIT and a Bachelor of Arts in mathematics from the University of Illinois at Springfield.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Neither agree nor disagree

"In industry and among practitioners, a statistical approach to responsible AI has emerged and matured considerably over the past five years or so. This approach, which covers everything from bias mitigation to risk amelioration, is largely reflected in the voluntary NIST guidance on RAI. The NIST guidance helps harmonize requirements and expectations across multiple stakeholders (clients, corporate governance, technologists, and regulators) while leaving room for variation, both to innovate and to address the idiosyncratic needs of different AI systems.

Alignment gets thornier in two areas. At the subnational level, states and provinces are pursuing varied approaches that often exceed their typical jurisdictional reach, impeding innovation in the space. Likewise, many international bodies, the EU especially, are attempting to front-run best practices and the technology itself with nonsensical rules that reach well beyond their geographic borders. This behavior is especially deleterious to innovation and policy harmonization, as well as to generally accepted principles of sovereignty."

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Neither agree nor disagree

"Public markets today are required to disclose changes to their market functionality, and they do disclose when AI is rolled out. While transparency is generally desirable, and companies likely ought to disclose when AI is in their products, mandating such disclosure poses challenges:

1. No good definition exists today, or has ever really existed, that differentiates AI from software or other decision-making systems, so clarifying when disclosure is required may be difficult.

2. While consumers may, at times, want or need to know how products work, it’s not clear what, if anything, a lay consumer would do with that information, as investigating AI systems remains highly technical.

3. More sophisticated users can quickly figure out whether AI is being employed, and business users have enterprise governance standards to understand their exposure.

There may be a need to mandate disclosure in some critical industries, like medicine, but mandates vis-à-vis AI should be taken up carefully, lest we march down the road to technological irrelevancy that the EU has taken.”

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Agree

"While the EU's AI Act is allegedly sweeping, its actual impact is likely to be less so. I'm frequently reminded of both the hype and the concern in the run-up to GDPR. Of course, after its implementation, data privacy was solved and AI in Europe flourished ... except none of that is true. GDPR became little more than a checkbox for firms and a banner on web pages, while providing opportunities for noncontinental AI and data firms. I look forward to seeing what new commercial opportunities will flow from the EU to the U.S. as a result of the AI Act."

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Neither agree nor disagree

"Corporations certainly are taking seriously the fact that AI adoption poses risks. Companies are putting together governance and ethics boards, engaging with regulators, and reacting to news about various AI mishaps. But while they are expanding their risk functions and their purview to engage with AI, they are not doing so in a way that considers the unique and novel types of risks and benefits AI presents. Risk organizations and advocacy groups are still dominated by attorneys, with very few technical experts in the room who can weigh in on the appropriate value-variation calculus businesses need to consider."