
Andrew Strait

Ada Lovelace Institute

United Kingdom

Andrew Strait is an associate director at the Ada Lovelace Institute, responsible for its work addressing emerging technologies and industry practices. He has spent the past decade working at the intersection of technology, law, and society. Before joining Ada, he was an ethics and policy researcher at Google DeepMind, where he managed internal AI ethics initiatives and oversaw the company’s network of external partnerships. Previously, he worked as a legal operations specialist at Google, where he developed and implemented platform moderation policies for areas such as data protection, hate speech, terrorist content, and child safety.

Voting History

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Agree

“Major organizations are already undertaking compliance studies and initial efforts to meet these criteria. Smaller organizations may experience the most pain initially, but the European AI Office has been clear that it will take an approach akin to GDPR enforcement, where regulators provided a grace period before levying fines and taking enforcement action.

The challenge will be how much guidance the nascent AI Office, which is only just starting to staff up, can produce in the next 12 months. It is already at work developing guidelines for GPAI providers.

Another factor is that many of the act's requirements will match what companies following JTC 21 and ISO SC42 standards are already being asked to do. A standards-compliance service industry is already developing, with vendors selling tools and services to companies seeking compliance.

It will take time for organizations to comply and to treat compliance as a normal part of developing or deploying AI systems, but with the right guidance from the AI Office it is plausible that all organizations, big and small, will be compliant within the next one to two years.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Strongly disagree

“I disagree with this statement for a few reasons.

1. There are no clear and consistent AI risk management practices. Recent research into AI auditing tools, for example, shows that the proliferation of these tools still has not led to organizations using them effectively to meet accountability goals; the tools fail to enable meaningful action once an audit has been completed. Other research has shown that auditors experience a range of issues that prevent effective accountability. We are still in an era of testing and trialing different methods, and those methods are not yet proven to be effective.

2. Not all organizations are adopting these practices to address AI-related risks. Adoption is still ad hoc and concentrated in organizations that are well resourced to do this work. The lack of regulatory requirements to adopt risk management practices creates a perverse incentive: without a mandate, these practices are treated as a nice-to-have cost.

3. Even for organizations that are adopting these methods, it is unclear whether the methods achieve meaningful risk reduction. More transparency and research are needed to determine this.”