
Elizabeth Anne Watkins

Intel Labs

U.S.

Elizabeth Anne Watkins is a research scientist at Intel Labs and a member of Intel’s Responsible AI Advisory Council, where she applies social science methods to amplify human potential in human-AI collaboration. Her research on the design, deployment, and governance of AI tools has been published in leading academic journals and featured in Wired, MIT Technology Review, and Harvard Business Review. She has worked, consulted, and collaborated with research centers across academia and industry, including Harvard Business School and Google, and is an affiliate of the Data & Society Research Institute. Watkins was previously a postdoctoral fellow at Princeton and has a doctorate from Columbia University and a master’s degree from MIT.

Voting History

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Agree
“At Intel, we’ve long looked toward the horizon to both anticipate and shape the future of AI. Part of that work is to navigate the shifting terrain of changing regulation, so our preparations for the EU AI Act have been well underway. We have found that the EU AI Act’s goal of protecting fundamental human rights resonates well with our long-standing responsible AI principles and global human rights principles, while the goal of environmental sustainability aligns well with our value of protecting the environment. Overall, the EU AI Act provides lenses through which to operationalize our values, while the act’s transparency requirement for AI models facilitates better risk assessment and mitigation from an AI supply chain perspective. We welcome the opportunity for the industry as a whole to move forward in advancing AI technologies.”

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Neither agree nor disagree
“Given the broad nature of this question, it’s difficult to either agree or disagree, though there are elements of progress as well as gaps across the ecosystem when it comes to AI risk management capabilities. At Intel, we continue to evolve our responsible AI analysis process to assess potential risks related to product misuse, algorithmic bias, algorithmic transparency, privacy infringement, limits on freedom of expression, and health and safety risks so that our RAI Council can work with developers to build risk-mitigation strategies. We’ve been heartened to see other efforts, in both industry and regulatory spaces, such as the NIST AI Risk Management Framework and the EU AI Act, likewise utilize a risk-based approach to ensure that AI tools can enable human flourishing amid a rapidly changing technology landscape.

That being said, one area of risk that we have newly integrated into our responsible AI principles is sustainability. While we are developing mitigation methods, addressing environmental impacts is not yet part of the broader conversations taking place, and the tools are still nascent, so there is a long way to go.”

Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Disagree
“Although it’s difficult to generalize across all companies, there is always room for improvement in RAI practices, so I somewhat disagree that the business community is adequately invested. With the recent advent of generative AI, the potential benefits of these systems will grow, but it will take robust RAI programs to build systems that reduce their possible risks in order to truly amplify human potential. While we’ve seen a number of our industry peers also taking meaningful steps in the right direction, this is an effort that will take the industry as a whole to truly move the needle.”

Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Agree
“All organizations function slightly differently, and what works for one might not work for another. At Intel, our centralized, multidisciplinary Responsible AI Advisory Council is responsible for conducting a rigorous review throughout the life cycle of an AI project. The goal is to assess potential ethical risks within AI projects and mitigate those risks as early as possible. Members of our RAI Council provide training, feedback, and support to the development teams and business units to ensure consistency and compliance with our principles across Intel. To foster durable RAI cultures, it’s also helpful to complement this central team with a strong network of champions who can advocate for RAI principles within teams and business units.”

Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Agree
“While generative AI tools are exciting systems that can make us more productive, they also raise concerns about impacts on the workforce and professions, toxicity, and bias, as well as concerns about labor sourcing for data labeling and about resource demands. Transparency and explainability — that is, ensuring that stakeholders understand how a system has been built, how it generates outputs, and how their inputs lead to outputs — have been shown to be top concerns for generative AI systems. We cannot trust generative AI results without understanding the processes by which these systems work. As generative AI evolves, it is critical that humans remain at the center of this work and that organizations support the humans doing this work.

Responsible AI begins with the design and development of systems. Organizational leadership must build robust infrastructure for both anticipating and addressing the risks of AI tools; bringing together multiple perspectives, backgrounds, and areas of expertise into spaces of shared deliberation; and ensuring close collaboration with development teams throughout the AI system development life cycle. Only then will we be equipped to build systems that can truly support and amplify human potential.”