Stefaan G. Verhulst, Ph.D., focuses on using advances in science and technology, including data and AI, to improve decision-making and problem-solving. He has cofounded several research organizations, including the Governance Laboratory (GovLab) at New York University and The Data Tank in Brussels. He is also editor of the open-access journal Data & Policy and has served as a member of several expert groups on data and technology, including the European Commission’s expert group on business-to-government data sharing and its high-level expert group on using private-sector data for official statistics. He is the author of several books and has been invited to speak at international conferences, including TED and the United Nations World Data Forum.
Voting History
Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree
“GovLab recently conducted a global benchmarking study on data governance, and, given that data governance forms the foundation of AI governance, we can extend our conclusion to RAI requirements: There remains significant fragmentation in the principles, standards, and governance mechanisms related to both data and AI. This fragmentation exists not only within countries and regions but also across sectors and organizations, resulting in a patchwork of policies and frameworks. This disjointed landscape presents challenges to the development of a more unified approach to AI governance.
The dynamic nature of the ecosystem further exacerbates this fragmentation as new, often technical solutions are frequently introduced to address evolving challenges. These newer technologies often necessitate ad hoc extensions to existing governance frameworks, many of which were not designed to accommodate the complexities of modern data realities.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree

“As a best practice, companies should not only disclose the use of AI in their operations but also detail how they will manage and protect the data generated and collected by these AI applications. To avoid the pitfalls of vague privacy statements, disclosures should go beyond merely being included in privacy policies. They should be user-friendly and visually accessible to ensure comprehension. Effective disclosures should clearly communicate the purpose behind the use of AI and how AI systems function, including aspects of data collection, processing, and third-party involvement, to set accurate consumer expectations. Moreover, meaningful disclosures must be paired with genuine user engagement, securing a social license for the use of AI involving personal data.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree
“Most companies within the European Union face significant challenges in preparing to effectively apply artificial intelligence in a manner that adds true value to their organization and to society and is fit for purpose. While sectors such as finance and health care have made more progress, many other sectors lag due to varying capabilities and resource availability. Small and medium-size enterprises, in particular, struggle with the computational and data demands and the high costs associated with implementing AI technologies. This uneven readiness is exacerbated by a lack of in-house expertise and difficulties in attracting skilled data and AI talent, which are crucial for developing and managing AI-driven projects. Consequently, these companies may find themselves unprepared not just in adopting AI but also in complying with the AI Act.
Those companies that are already behind in their AI journeys may find it daunting to navigate these new regulations. Understanding and complying with such a complex framework requires not only legal and ethical expertise but also the ability to integrate these considerations into the AI systems themselves.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree
“To mitigate AI risks, organizations have started to enhance cybersecurity, update AI systems, and adopt adversarial training. Some are now also conducting bias audits, utilizing diverse data sets, and engaging multidisciplinary teams for more ethical AI. Additionally, the development of explainable AI and the integration of human oversight into AI systems are increasingly being considered to ensure transparency and dependability.
However, despite these efforts, a critical AI risk that is mostly overlooked or neglected is the need to secure a social license for AI usage. This involves aligning AI applications with societal values and expectations, particularly when repurposing data initially collected for different AI uses. The absence of such consideration can lead to public discontent and erode trust, undermining an organization’s broader social license to operate. Increasing participatory engagement with various stakeholders, including employees, customers, and experts, is essential to navigating these challenges effectively.”