Get AI Audit Ready!
Key Takeaways:
- Learn how ISO 42001, the EU AI Act, and NIST AI RMF shape AI governance.
- Understand how to identify risks and prepare for AI audits.
- Discover best practices for building compliance and responsibility into your AI strategy.
Description
AI governance is rapidly evolving, and organizations are now expected to prove that their systems are safe, trustworthy, and compliant. The arrival of ISO 42001—the world’s first AI management system standard—alongside frameworks like the EU AI Act and NIST’s AI Risk Management Framework, marks a turning point for AI accountability. Understanding these standards is now essential for every company deploying AI responsibly.
In this panel interview, Avani Desai (CEO at Schellman), Adrián González Sánchez (Global AI Architect at Microsoft), Lee Bristow (Director of Cyber & AI Governance at Saros Consulting), and Yemi Akinrele (AI Governance & Privacy Counsel at Echolinks Solutions) will break down what ISO 42001 means for your organization and how it connects to other major frameworks. You’ll learn what to expect from an AI audit, how to assess and mitigate risks, and how to embed responsible AI principles into your company’s operations. This session is ideal for AI, IT, and cybersecurity managers preparing for the next wave of regulation and governance.
Presenter Bios

Avani leads Schellman, a compliance services firm that audits the AI capabilities of OpenAI, Meta, and Walmart. She brings two decades of executive experience in technology risk, cybersecurity, and compliance assessment.

As a Global AI Architect at Microsoft, Adrián provides AI architecture, security, and compliance support to startups and digital natives. He is also a member of the EU AI Act Expert Committee for the Spanish Government, a Responsible AI Lead at OdiseIA, and a certified lead auditor for the ISO 42001 standard.
He is an Academic Director and ML Professor at IE School, and a Lecturer in Executive Education at HEC Montréal. He has written five books on AI and cloud computing, including “Azure OpenAI Service for Cloud Native Applications” and “Managing AI Projects”, and is the author of several DeepLearning.ai and LinkedIn Learning courses, including one on “Generative AI Compliance and Regulations”.

Lee provides strategic guidance to clients on cybersecurity and AI governance, including gap analyses for ISO 42001 and EU AI Act compliance. He is also CEO of the Dawn Horizon AI strategy consultancy and of the security consultancy Risk Copilot, and the author of “Human AI Alliance”, an AI transformation guide for leaders. Previously, Lee was Chief Technology & Information Security Officer at Phinity Risk Solutions.

Yemi serves as legal counsel on AI governance, privacy, and business matters, with a focus on AI adoption and regulatory compliance. She helps clients implement AI governance frameworks and regulations such as ISO 42001, the EU AI Act, and NIST AI RMF. Previously, Yemi was a Partner at FA Legal Consultants, where she advised clients on data privacy and ethical AI adoption.