Responsible AI / Panelist

Teddy Bekele

Land O’Lakes

United States

Teddy Bekele is CTO at Land O’Lakes, a Fortune 500 farmer-owned cooperative, where he spearheads digital transformation, IT, and cybersecurity initiatives. His insights and leadership in technology have garnered recognition from publications and media outlets such as “60 Minutes,” the BBC, Fortune magazine, and MIT Sloan Management Review. He received the 2022 CIO of the Year Orbie Leadership Award from MinnesotaCIO and was named to Forbes’s CIO Next List in 2023. Bekele is chair of the Minnesota Broadband Task Force and the FCC-USDA Task Force for Precision Agriculture Connectivity and Adoption. He holds a bachelor’s degree in mechanical engineering from North Carolina State University and an MBA from Indiana University.

Learn more about Bekele’s approach to AI via the Me, Myself, and AI podcast.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree

"While there is growing consensus on the importance of ethical AI, true international alignment on emerging codes of conduct and standards remains incomplete. Different countries continue to grapple with diverse legal frameworks, cultural values, and enforcement mechanisms, which complicates efforts for global companies to implement RAI requirements uniformly. The landscape of AI regulations is fragmented, forcing organizations to develop robust, adaptive compliance strategies that align with the strictest standards.

Enforcing these standards globally will also be challenging due to inconsistent regulatory maturity and approaches. Striking the right balance between regulation and innovation is a key concern. Over-regulation risks stifling innovation and slowing AI advancements, while under-regulation could lead to ethical violations and a loss of public trust. The process is still evolving. Ongoing collaboration between governments, industry leaders, and international organizations is crucial to ensuring that AI development remains both innovative and ethically sound. Achieving this alignment is essential for fostering global trust in AI and maximizing its positive contributions to society."
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Neither agree nor disagree

"Organizations currently exhibit varying levels of maturity not only in their technological capabilities but also in their employees' understanding of AI, which will influence their readiness to meet the requirements of the EU AI Act. While companies will likely meet the minimum necessary standards, the broader implications of the AI technologies they deploy may not be fully comprehended. This is complicated by the fact that different industries may require different compliance thresholds, yet it remains unclear who should define these boundaries. Furthermore, embedding compliance into the core operational ethos of an organization, similar to cybersecurity and privacy norms, will be a gradual process.

Tools might be available to assess the impact of AI models, but their development is not yet sufficient to address most scenarios comprehensively. Consequently, while organizations might align with some of the short-term requirements, predicting and preparing for future complexities presents a greater challenge. The journey toward full compliance is likely to be iterative and complex, evolving as organizations better understand both the technology and the legal landscape.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

"Organizations are struggling to keep pace with the rapid evolution of AI, often lacking a deep understanding of its capabilities and risks. This gap in awareness results in inadequate risk management frameworks that are unable to effectively mitigate AI-related risks. Additionally, many companies adopt a reactive, fear-driven approach to AI risk management rather than establishing proactive, guideline-based strategies. This not only hampers the ability to balance AI's benefits against its potential dangers but also makes it challenging to address the uncontrollable proliferation of AI technologies. Despite some efforts to enhance risk management capabilities, the overall preparedness of organizations to tackle AI-related risks is insufficient, mainly due to the fast-paced advancements in AI that outstrip the development and implementation of effective risk management practices."