Linda Leopold is head of AI strategy at H&M Group and has been part of the global fashion retailer’s AI journey since 2018, spearheading its work in responsible AI and digital ethics. Before joining H&M Group, she spent many years working in the media industry. Leopold was previously editor in chief at the critically acclaimed fashion and culture magazine Bon and is the author of two nonfiction books. She has been a columnist for Scandinavia’s biggest financial newspaper and has worked as an innovation strategist at the intersection of fashion and tech.
Voting History
Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Neither agree nor disagree
“There is no shortage of AI principles and frameworks globally. These range from large international initiatives like the G7 Hiroshima AI Process and the Global Partnership on Artificial Intelligence to AI risk management frameworks like NIST’s, as well as AI governance standards from ISO and IEEE, and existing and emerging regulations. While the focus may differ between frameworks (with some primarily targeting developers of advanced AI systems), there are recurring overarching themes, such as fairness, accountability, transparency, privacy, safety, and robustness. Will these frameworks help with the effective implementation of responsible AI requirements across organizations? Yes. However, the use of AI can vary significantly between industries, and what seems to be lacking at the moment are industry-specific frameworks and governance efforts.
In my experience, principles and standards are just one factor in successfully implementing responsible AI within a company. There are many other important aspects, such as culture and communication, general governance structures, and, importantly, cross-functional collaboration. Responsible AI is a multidisciplinary area and truly a team effort.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree
“Transparency is a key aspect of responsible AI practices, meaning AI use should be explainable and communicated clearly. This is an ethical obligation toward customers, enabling informed decisions and enhancing trust. In many contexts, disclosing AI use is becoming common practice (for example, social media platforms labeling images as AI-generated). There are also legal obligations, such as the EU AI Act’s transparency requirements.
Some examples of when disclosure is warranted: Customers should know when they are interacting with an AI system rather than a human or when an AI system is making important decisions that affect them. They should be made aware when content they consume — texts, videos, and images — is entirely AI-generated. However, there are gray zones. Ultimately, it depends on the extent of AI usage and the level of human involvement, as well as the potential harm or erosion of trust that nondisclosure could cause. This is a rapidly evolving area, and we can expect new best practices and regulatory requirements to emerge as technology advances.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Agree
“The regulation has been in the making for quite some time, so organizations should be somewhat prepared — and if they approach compliance proactively, even more so (although the level of readiness most likely differs between organizations, depending on their industry, size, and AI application areas). This means starting now, if you haven’t already! A first, crucial step is to map and understand your internal landscape of AI systems and how they fit into the EU AI Act’s risk levels. You should also identify and start engaging with the right internal stakeholders — to raise awareness about what the act will require from the organization. Engagement and collaboration with stakeholders will be key to success.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree
“I believe many organizations are indeed expanding their risk management capabilities. But is it sufficient? And does it happen fast enough? Probably not. New risks keep emerging as technology and its areas of application evolve. (A concrete example is generative AI hallucinations, which weren’t a common problem a year or so ago. Now they are.) The usage of AI across organizations is also likely increasing steadily, with new groups of users. All this means that risk management capabilities need to keep pace both with the speed of tech development and the spread of usage.
There is also a broad spectrum of risks to address. Risks could be ethical, legal, or related to cybersecurity. Having full insight and a comprehensive approach to risk management requires maturity, which few organizations have. (For example, in a 2022 study, only 6% of surveyed companies had a robust responsible AI foundation in place.) And even for organizations that do have a solid responsible AI program in place, keeping up with the speed of development and continuously addressing new risks requires effort.”
Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Neither agree nor disagree
“In the past year, developments in generative AI have definitely put a new spotlight on AI — and the associated risks. If responsible AI previously was considered “important but not urgent” by companies’ top management, by now it should have moved to “important and urgent” on the prioritization list. AI-related risks are impossible to ignore.
Generative AI leads to new types of ethical, legal, and security risks. Due to the availability of generative AI tools and a large range of possible applications, it also significantly increases the size of the audience that responsible AI programs have to reach. The need for investments in responsible AI might look different between companies, depending on what the current setup looks like. But for most companies, further investments will most likely be needed. Whether all of this actually will lead to more investments in responsible AI is yet to be seen. I believe there might be a delay here. It will be interesting to revisit this question in a year.”
Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Agree
“The multidisciplinary nature of responsible AI requires involvement by, and collaboration between, many units across an organization. For example, areas as diverse as digital ethics, data privacy, data science, legal, corporate governance, cybersecurity, diversity and inclusion, and sustainability all contribute to both the definition and implementation of responsible AI.
That said, someone needs to be in charge of the RAI strategy and vision — providing topic expertise, leadership, and coordination. This should preferably be handled by a specific, centralized team. Operationalization of responsible AI will benefit from being as decentralized as possible — for example, by having RAI “champions” or similar across functions or business units. The message to emphasize is that it is everyone’s responsibility to be responsible.”
Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Neither agree nor disagree
“What we have seen lately is rapid technological development and new, powerful tools released with hardly any prior public debate about the risks, societal implications, and new ethical challenges that arise. We all need to figure this out as we go. In that sense, I believe most responsible AI programs are unprepared. Many new generative AI tools are also publicly available and have a large range of possible applications. This means RAI programs might need to reach a new and much broader audience.
With that said, if you have a strong foundation in your responsible AI program, you should be somewhat prepared. The same ethical principles would still be applicable, even if they have to be complemented by more detailed guidance. Also, if the responsible AI program already has a strong focus on culture and communication, it will be easier to reach these new groups of people.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Disagree
“Responsible AI programs should cover both internally built and third-party AI tools. The same ethical principles must apply, no matter where the AI system comes from. Ultimately, if something were to go wrong, it wouldn’t matter to the person being negatively affected whether the tool was built or bought. However, from my experience, responsible AI programs tend to focus primarily on AI tools developed by the organization itself.
Depending on what industry you are in, it could even be more important to address the risks from third-party tools, as they might be used in high-risk contexts, such as HR. Doing this requires a different set of methods than internally built AI systems do. It also means interacting with stakeholders in parts of the organization where the knowledge level about AI and the associated risks might be lower. How do I define third-party AI tools? AI systems developed by an external vendor, including AI components as part of a product bought from an external vendor.”
Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree
“My experience is that executives, as well as subject matter experts, often look at responsible AI through the lens of their own area of expertise (whether it is data science, human rights, sustainability, or something else), perhaps not seeing the full spectrum of it. The multidisciplinary nature of responsible AI is both the beauty and the complexity of the area. The wide range of topics it covers can be hard to grasp. But to fully embrace responsible AI, a multitude of perspectives is needed. Thinking of it as a technology issue that can be “fixed” only with technical tools is not sufficient.”
Statement: Mature RAI programs minimize AI system failures.
Response: Agree
“For a responsible AI program to be considered mature, it should, in my opinion, be both comprehensive and widely adopted across an organization. It has to cover several dimensions of responsibility, including fairness, transparency, accountability, security, privacy, robustness, and human agency. And it has to be implemented on both a strategic (policy) and operational level (processes and tools should be fully deployed). If it ticks all these boxes, it should prevent a wide range of potential AI system failures, from security vulnerabilities to inaccurate predictions and amplification of biases.”
Statement: RAI constrains AI-related innovation.
Response: Strongly disagree
“Quite the opposite. Firstly, as we define responsible AI at H&M Group, it encompasses using AI both as a tool for good and as a tool for preventing harm. This means innovation is an equally important part of responsible AI practices along with risk mitigation, as we see it. In our context, “doing good” mainly means exploring innovative AI solutions to tackle sustainability challenges, such as decreasing CO2 emissions and contributing to circular business models. Secondly, risk mitigation — “doing it right” — shouldn’t constrain innovation either. Ethically and responsibly designed AI solutions are also better AI solutions in the sense that they are more reliable, transparent, and created with the end user’s best interests in mind. And thirdly, having responsible AI policies and practices in place creates a competitive advantage, as it reduces risk, increases trust, and builds stronger relationships with customers.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Neither agree nor disagree
“It depends on the industry and how you define and work with CSR in your organization. There is a close connection between responsible AI and efforts to promote social and environmental sustainability. The overall vision should be the same, but responsible AI also needs to be treated as a separate topic with its specific challenges and goals.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“How to embrace digitalization and new technology in line with company values has to be a priority at the top management and board levels. And commitment to responsible AI should be clearly expressed by management, sending a strong message to the organization. Importantly, responsible AI must be seen as an integrated part of the AI strategy, not as an add-on or an afterthought.
With that said, being part of the top management agenda is not sufficient. Responsible AI practices and engagement also have to be built bottom-up, throughout the organization. In my opinion, the combination of these two approaches is key to succeeding in creating a culture of responsible AI, making it a priority and top of mind across the company.”