

US AI Regulations vs. The EU AI Act

January 2025
Webinar Preview

Summary

The European Union's AI Act is a significant piece of legislation that addresses the development and use of AI across the EU. This leading law has influenced AI regulations globally, including in the United States, where states like Colorado and Tennessee have introduced their own AI-related laws. For companies operating in both regions, understanding these regulations is essential. The EU AI Act distinguishes between high-risk and low-risk AI applications, imposing stricter requirements on high-risk activities to safeguard individuals' rights. Developers (those who bring AI systems to market) and deployers (those who put them to use) have specific roles and obligations under this framework. The U.S., while lacking a comprehensive federal AI law, relies on existing laws that can address AI-related issues, particularly consumer protection and privacy laws. This complex regulatory environment requires companies to conduct rigorous risk assessments, maintain transparency, and ensure compliance with both EU and U.S. standards. Expert guests Odia Kagan and Julie Honor emphasize the importance of understanding the implications of AI regulation and suggest practical steps organizations can take to navigate this landscape effectively.

Key Takeaways:

  • The EU AI Act classifies AI applications by risk level, imposing stricter regulations on high-risk uses.
  • The U.S. currently addresses AI issues through existing consumer protection and privacy laws.
  • Companies must conduct risk assessments to determine their AI's impact and comply with regulations.
  • Transparency in AI usage is essential for compliance and maintaining consumer trust.
  • Stakeholders across organizations should be involved in understanding and implementing AI regulations.

In-Depth Analysis

EU AI Act Overview

The European Union AI Act represents a comprehensive regulatory approach to AI, distinguishing itself by being a standalone law dedicated to AI governance. Crafted over several years, it categorizes AI applications based on risk, with high-risk applications facing stringent obligations. These include risk assessments, transparency requirements, and compliance measures akin to those seen in the General Data Protection Regulation (GDPR). Developers and deployers, the main actors within the AI ecosystem, have specific duties, with developers bearing more responsibilities. The Act's goal is to ensure safety in AI innovation, avoiding potential adverse impacts on fundamental rights. "The point of legislation is to make sure that you can innovate, but do it safely," noted Julie Honor. However, while the Act is in effect, its enforcement is phased, allowing organizations time to adapt. The EU AI Act is a leading model, influencing global AI regulation, yet its comprehensive nature poses challenges, particularly for smaller enterprises.

U.S. AI Regulatory Framework

In contrast to the EU, the U.S. lacks a comprehensive federal AI regulation, instead leveraging existing laws to address AI-related issues. This practical approach sees consumer protection and privacy laws as key tools in AI governance. Odia Kagan highlighted that in the U.S., "normal laws apply" to AI, meaning actions illegal for humans remain illegal when performed by AI. This includes privacy, competition, and employment laws, among others. The Federal Trade Commission (FTC) plays an important role, enforcing regulations against unfair or deceptive practices involving AI. State-level laws, such as the Colorado AI Act, provide additional governance, with focus areas including discrimination and consumer rights. This decentralized approach allows for flexibility but can result in a mix of regulations that organizations must manage.

Risk Assessment and Transparency

Conducting thorough risk assessments is essential for organizations deploying AI technologies. This involves understanding the AI's functions, the data it processes, and its potential impact on individuals' rights. Odia Kagan emphasized the importance of transparency, stating, "You need to understand what it is that you are deploying." Organizations must ensure AI systems do what they claim and assess whether they can achieve objectives with less invasive means. Transparency involves informing users about AI's role and impact, particularly in high-risk areas such as employment and credit decisions. This transparency is integral to compliance and consumer trust, forming a fundamental part of both EU and U.S. regulatory frameworks.

Organizational Governance and Compliance

Successfully managing AI compliance requires cross-departmental collaboration within organizations. From HR to finance, each department may use AI differently, necessitating a broad-based governance approach. Establishing AI literacy and training programs is essential, particularly in the EU, where literacy requirements become enforceable. Julie Honor pointed out that "compliance deadlines are coming in 2026," urging organizations to act now. Governance structures, such as risk committees or AI-specific subcommittees, can help manage compliance efforts. These bodies must balance innovation with regulatory obligations, ensuring AI technologies align with company values and legal requirements. This focus on governance highlights the importance of embedding regulatory compliance into the organizational culture.

