Yan Chow joined Automation Anywhere in 2019 to help launch the intelligent automation company’s health care vertical. He was previously medical director for digital medicine in the Translational Medicine group at biotech company Amgen and CIO at health care consultancy LongView International Technology Solutions. Chow has founded and advised a number of startups in the digital health, AI, storage, and database spaces, including a venture-funded NLP analytics company that achieved a successful exit. He has a bachelor’s degree from Harvard University; a medical degree from the University of California, San Diego; and an MBA from the Haas School of Business at the University of California, Berkeley.
Voting History
Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree
“The rapid pace of AI development has outpaced regulators’ ability to establish comprehensive guidelines, resulting in a patchwork of regional regulations, industry-specific practices, and voluntary frameworks that often conflict or leave gaps. Companies operating globally must navigate this complex landscape, balancing diverse cultural, ethical, and legal expectations while maintaining consistent internal practices. Recent attempts like the EU AI Act aim to address this, but enforcement remains unclear.
The complexity and opacity of AI systems also make it challenging to standardize “responsibility.” Issues like algorithmic bias, data privacy, transparency, and accountability are multifaceted and context dependent, often requiring case-by-case evaluation. The lack of agreed-upon metrics for fairness or explainability complicates efforts to create universal standards. As AI evolves, ethical implications and potential risks shift, necessitating ongoing reassessment of responsible AI. This dynamic environment leaves global companies uncertain about how to proceed; a body of precedents may be needed before understanding and consensus are established.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree
“Just as 98% of consumers want to know what’s in their packaged foods, users want to know what’s in their AI. Transparency can reveal shortcomings like biases, data privacy risks, and workforce impacts. It enables customers to anticipate risks and companies to signal their commitment to responsible AI. On the other hand, while transparency builds customer trust, it can spill competitive secrets. How will the “need to know” be defined?
Even if disclosed, AI can be hard to explain. Independent experts may need to certify AI on behalf of the public, just as Consumer Reports tests home appliances. AI’s ubiquity also makes a formulaic approach untenable. How would a layperson judge the importance of AI transparency for a toaster versus a car? Future regulations will likely focus on health and safety, data privacy, workforce impact, sustainability, and application-specific risk. These rules could burden companies and slow innovation. Some argue that transparency should evolve organically via market demand, with consumers choosing companies that voluntarily disclose AI usage. But AI’s opaqueness makes that problematic, especially when competitors tout opposing claims. It may take an AI to construct a level playing field.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree
“The EU AI Act is a good first step toward responsible AI. However, the six-month timeline for high-risk product compliance will pose challenges for developers. AI tools will also become ubiquitous. Unless guardrails are built in, many users may create risky AI products unintentionally; on the other hand, bad actors will simply not self-report. Moreover, risk assessments may run into unexplainable “black boxes” based on proprietary algorithms. Predicting when these systems might yield risky outcomes will be problematic. As with drugs, should regulators prescribe proper usage when an otherwise beneficial product carries an extremely rare but significant risk?
The need to revise AI risk assessments over time also imposes a heavy administrative burden. The swift pace of technology could cause AI products to jump between risk categories faster than regulators can conduct full validation. One example might be a new technology that improves the safety of AI systems, resulting in a broad reassessment of many products. Without a significant increase in investment, the EU may be implementing a well-meaning but potentially ineffective framework that struggles to keep up with AI.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree
“AI is becoming a playmaker for future technologies. But AI’s data security can be compromised, often by weaponized AI such as deepfakes and automated attacks. Fighting fire with fire, generative AI and automation empower infosec teams to respond quickly, accurately, and at scale. Unfortunately, fast-moving technologies require time and experience to be understood. This time lag is an asymmetric window of opportunity that hackers can exploit repeatedly.
AI can exhibit bias. Errors in analytics incorporated into AI algorithms have led to discriminatory outcomes in hiring, loan approvals, and health coverage. Watchdogs are now calling for closer scrutiny of data inputs. Yet many organizations lack the expertise, resources, and industry guidance to address this issue. AI also introduces complex legal and ethical questions, such as the rights of individuals and liability for harm caused by AI. Job displacement and workforce restructuring raise ethical and socioeconomic concerns. Worker reskilling and responsible AI inevitably come into focus. Organizations in the early stages of AI adoption have limited options. AI was created to understand and help us. It may take AI to understand its own risks.”