
Data Literacy for Responsible AI

December 2021

70% of customers expect organizations to provide AI interactions and products that are transparent and fair (Capgemini). Now more than ever, organizations need to govern deployed AI systems to minimize harm for end-users and organizational risk.

In this webinar, Ted Kwartler, VP of Trusted AI at DataRobot; Haniyeh Mahmoudian, DataRobot’s Global AI Ethicist; and DataCamp’s Adel Nehme will outline:

  • The importance of developing responsible AI and what it means for organizations today

  • Practical solutions data teams and organizations can adopt to mitigate risk in AI systems

  • The crucial role data literacy plays when scaling responsible AI and aligning stakeholders on AI Governance frameworks

Summary

As AI technologies continue to grow rapidly, the need for responsible and ethical AI use has become increasingly urgent. Organizations and society alike have recognized that AI systems, if not carefully designed and managed, can reinforce biases and propagate discrimination. Representatives from DataRobot and DataCamp, including Ted Kwartler, Haniyeh Mahmoudian, and Adel Nehme, articulated the importance of AI ethics, algorithmic bias, and data literacy in addressing these issues. They examined how AI systems can unintentionally lead to "algorithmic victimization," where even well-designed models amplify existing societal problems, such as racial bias in credit scoring or facial recognition technologies. The webinar also explored the necessity of strong governance frameworks, which require interdisciplinary cooperation and standardized evaluation processes to minimize the risks of AI deployment. Haniyeh Mahmoudian highlighted the subtleties of AI fairness, differentiating between fairness by representation and fairness by error, and outlined bias mitigation techniques applicable at different stages of the AI model lifecycle. Adel Nehme emphasized the importance of data literacy in promoting responsible AI, underlining its role in creating a common language among stakeholders to ensure ethical AI practices. The discussion stressed that comprehensive AI governance, ongoing education, and awareness of emerging regulation are vital steps toward achieving responsible AI.

Key Takeaways:

  • Responsible AI involves addressing both technical and ethical challenges, with a focus on reducing algorithmic bias.
  • Strong governance frameworks involving interdisciplinary cooperation are essential for ethical AI deployment.
  • Data literacy has a significant role in promoting ethical AI practices and creating a common understanding among stakeholders.
  • Fairness in AI can be defined in terms of representation or error, and different techniques can be used to reduce bias.
  • Understanding emerging regulatory scenarios is vital for organizations deploying AI technologies.
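The distinction between fairness by representation and fairness by error, mentioned above, can be made concrete with two simple metrics: demographic parity (do groups receive positive predictions at similar rates?) and per-group error rates (do groups experience mistakes at similar rates?). The sketch below uses invented toy data purely for illustration; it is not code from the webinar.

```python
# Hedged sketch: two common fairness notions on invented toy predictions.

def selection_rate(preds, groups, g):
    """Fraction of group g that received a positive prediction."""
    members = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(members) / len(members)

def false_positive_rate(preds, labels, groups, g):
    """FPR within group g: positive predictions among true negatives."""
    pairs = [(p, y) for p, y, grp in zip(preds, labels, groups) if grp == g]
    negatives = [p for p, y in pairs if y == 0]
    return sum(negatives) / len(negatives)

groups = ["a", "a", "a", "b", "b", "b"]   # hypothetical protected attribute
labels = [1, 0, 0, 1, 0, 0]               # hypothetical true outcomes
preds  = [1, 1, 0, 1, 0, 0]               # hypothetical model decisions

# Fairness by representation (demographic parity): compare selection rates.
gap_representation = abs(selection_rate(preds, groups, "a")
                         - selection_rate(preds, groups, "b"))

# Fairness by error (toward equalized odds): compare per-group error rates.
gap_error = abs(false_positive_rate(preds, labels, groups, "a")
                - false_positive_rate(preds, labels, groups, "b"))
```

A model can satisfy one notion while violating the other, which is why choosing the right fairness definition for the context matters.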

Deep Dives

Algorithmic Bias and Its Societal Impacts

As AI technologies become more integrated into everyday life, algorithmic bias remains a significant concern. Speakers emphasized the importance of recognizing how AI models can unintentionally propagate existing biases, leading to what they called "algorithmic victimization." Examples discussed included AI systems in healthcare that may reinforce racial biases and financial algorithms that exhibit gender disparities in credit scoring. Ted Kwartler noted, "AI has great benefits, but we must be aware of systemic and misbehaving outputs." The societal implications of these biases are profound, affecting everything from job opportunities to access to essential services. The speakers urged organizations to address these biases proactively by implementing strong governance frameworks and promoting a culture of ethical AI development.

Governance Frameworks for Ethical AI

The development and deployment of ethical AI systems require comprehensive governance frameworks. The webinar highlighted the need for an interdisciplinary approach, combining expertise from data scientists, legal teams, and business stakeholders. Governance frameworks should include standardized evaluation processes, risk assessments, and compliance documentation. Ted Kwartler emphasized that "proper governance involves understanding the trade-off between value and risk and planning accordingly." The speakers also stressed the importance of aligning AI development with emerging regulatory requirements, such as the EU's Artificial Intelligence Act, to ensure compliance and minimize potential legal challenges.

Data Literacy as a Fundamental Aspect of Responsible AI

Data literacy emerged as a significant theme in the discussion on responsible AI. Adel Nehme described data literacy as "the ability to understand data science applications and drive data-driven decisions at scale." He argued that data literacy creates a common language among stakeholders, facilitating cooperation and ensuring that everyone involved in AI projects shares an understanding of AI's potential impacts. By promoting data literacy, organizations can equip their workforce to engage in ethical AI practices, identify biases, and make informed decisions. The speakers also highlighted the role of upskilling initiatives in narrowing the data literacy gap, with significant investments being made in AI and data education across industries.

Bias Mitigation Techniques in AI Models

Haniyeh Mahmoudian provided insights into various techniques for reducing bias in AI models. She explained that bias can be addressed at different stages of the AI model lifecycle: pre-processing, in-processing, and post-processing. Each stage offers distinct opportunities to reduce bias, whether through data sampling, fairness constraints, or adjusting prediction thresholds. Mahmoudian emphasized the importance of selecting techniques based on the specific context and available data, noting that "in-processing techniques often preserve accuracy while promoting fairness." The discussion highlighted the complexity of bias mitigation and the need for ongoing evaluation and refinement of AI models to achieve equitable outcomes.
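Of the three lifecycle stages above, post-processing is the easiest to illustrate without retraining a model. The sketch below, with invented scores and group labels (not material from the webinar), shows one form of threshold adjustment: choosing a per-group decision cutoff so that selection rates converge toward a common target.

```python
# Hedged sketch of a post-processing mitigation: per-group decision thresholds.
# Scores and group labels are invented for illustration only.

def equalize_selection_rates(scores, groups, target_rate):
    """For each group, pick the score cutoff whose selection rate is
    closest to target_rate. Brute-force over observed score values."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        best_t, best_gap = None, float("inf")
        # Candidate cutoffs: each observed score, plus one above the max
        # (which selects nobody in the group).
        for t in g_scores + [g_scores[-1] + 1]:
            rate = sum(s >= t for s in g_scores) / len(g_scores)
            if abs(rate - target_rate) < best_gap:
                best_t, best_gap = t, abs(rate - target_rate)
        thresholds[g] = best_t
    return thresholds

scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]   # hypothetical model scores
groups = ["a", "a", "a", "b", "b", "b"]   # hypothetical protected attribute

th = equalize_selection_rates(scores, groups, target_rate=1/3)
preds = [int(s >= th[g]) for s, g in zip(scores, groups)]
```

The trade-off is that different groups now face different cutoffs, which is exactly the kind of value-versus-risk judgment the governance discussion above says must be made deliberately, not by default.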


Related

infographic

Data Literacy for Responsible AI

Learn how data literacy fuels responsible AI

white paper

Data Literacy for Responsible AI

Learn how data literacy is the currency that powers responsible use of AI

white paper

The Learning Leader's Guide to AI Literacy

Find out how learning leaders should be approaching AI literacy within their organization, focusing on the what, why, and how of fostering organization-wide AI literacy.

webinar

Spreading Data & AI Literacy Across Your Organization

Learn how to devise a data and AI strategy that aligns with your business strategy, and how to combine technology and training to increase the data and AI literacy across your company for business success.
