Responsible AI Panelist

Ryan Carrier

ForHumanity

United States

Ryan Carrier founded ForHumanity after a 25-year career in finance. He launched the nonprofit organization with the goal of using the independent audit of corporate AI systems as a means of mitigating the risk associated with artificial intelligence. As ForHumanity’s executive director and board chair, he is responsible for the organization’s day-to-day operations and the overall process of independent auditing. Previously, he owned and operated Nautical Capital, a quantitative hedge fund that used AI algorithms. Before that, he was director of Macquarie Bank’s Investor Products business and vice president of Standard & Poor’s Index Services business. He has a bachelor’s degree in economics from the University of Michigan.

Voting History

Statement Response
There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.

Disagree

“An organization would have to read between the lines of European regulation and the light-touch approach currently being applied in the United States, Canada, the U.K., Australia, and India to see a baseline of regulatory infrastructure that is common; however, it can be inferred. Especially in light of the FTC’s recent publication on unfair, deceptive, and abusive practices, there is commonality that AI:

1. Should not be false, misleading, or exaggerated.
2. Should have robust and meaningful data management techniques using relevant, representative data.
3. Should not be allowed to deviate from its intended purpose with model, data, or concept drift.
4. Should be transparent in its usage and data practices.
5. Should not be allowed to forgo intentional efforts to mitigate bias in data, architectural inputs, and outcomes.
6. Should not be using detrimental nudges and deceptive design.

Seen through that lens, there is a common core of responsible AI infrastructure, including expert and robust governance/oversight/accountability, risk management, data governance, human oversight, monitoring, and ethical oversight, with transparency and disclosure requirements.”

Companies should be required to make disclosures about the use of AI in their products and offerings to customers.

Strongly agree

“AI, algorithmic, and autonomous systems are sociotechnical and often present risk to all AI subjects through their use or impactful decisions/determinations. AI subjects, like persons who take approved drugs, have a right to know the risks associated with use of the tool (just like the side effects of a drug). Informed users are better users. They can take action to mitigate risks from a tool, such as with an LLM, where an informed user, aware of potential hallucinations and made-up sources, can check the outputs before using them. Transparency and disclosure need not cover intellectual property and/or trade secrets; they should simply provide the user with the information necessary to be an informed user. Disclosure of interaction with an AI agent should also be required. Data sets and data management and governance practices should be disclosed (through data transparency documents/model cards), and a user’s guide should be provided. And, of course, the most important part is disclosure of all residual risks.”

Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.

Strongly disagree

“The requirements of the EU AI Act are extremely dense. For existing high-risk AIs, it requires a substantial amount of backfilling on the Article 10 data management and governance front. The risk management process requires substantial expertise. The issues around ethics are enormous, and there are simply not enough qualified individuals in the world to execute Fundamental Rights Impact Assessments and the remaining 40-plus assessments that ForHumanity has identified as required under the act. This doesn’t even include the integration of quality management, technical documentation, and robust compliance and monitoring programs. The challenge of integration and coordination with top management and oversight bodies is likely to be haphazard and disjointed.

A good comparison might be GDPR, where we are nearly six years in and many organizations remain insufficiently compliant, especially in regard to technical and organizational controls. The same will occur with the EU AI Act: Organizations will be insufficiently compliant and allowed to operate, treading the fine line between enforcement and undiscovered noncompliance. CEN-CENELEC standards under JTC 21 won’t be available before 2025 at the earliest.”

Organizations are sufficiently expanding risk management capabilities to address AI-related risks.

Disagree

“The tech industry has not established risk management by design and has a 40-plus-year history to overcome. It is the only major economic sector that remains largely unregulated. As a result, we see little governance, oversight, and accountability in the sector. Individual players pay lip service to the idea of risk management and safety, and they actively operate to subvert measures such as policy and standards (areas where they can do so quietly). Overtly, large tech players have fired entire teams associated with risk management and responsible AI. Lastly, there is a complete unwillingness to engage with civil society and external programs of governance, oversight, and accountability. These organizations determine their own ‘responsible AI practice,’ eschewing global best practices. There is a failure to include diverse input and multi-stakeholder feedback in the risk management process, and the result is limited perspectives on risk identification and a failure to disclose residual risk. Until society sees public disclosures of residual risk associated with AI tools, we will know that companies are not taking risk seriously, because they are making limited efforts to protect against liability.”