AI for Business

US AI Regulations vs. The EU AI Act

January 2025
Webinar Preview

Your Presenter(s)


Odia Kagan

Partner and Chair of Data Privacy Compliance and International Privacy at Fox Rothschild LLP

Odia is a legal partner with a focus on privacy and data security. She advises clients on compliance with privacy and data security laws and regulations, as well as on privacy aspects of acquisition transactions, joint ventures, and engagements of third-party vendors. She is an Advisory Board Member for the International Association of Privacy Professionals and a Chapter Chair of OneTrust PrivacyConnect.


Julie Honor

Counsel at Thompson Hine

Julie is a former general counsel with a focus on technology, commercial transactions, and change management. Her work centers on advising clients on the use of artificial intelligence (AI) and generative AI within their businesses, including drafting corporate policies, developing internal training programs, and advising development teams on compliance considerations when building AI technology. Julie is also an advisory board member of movement analytics company Hx Innovations. Previously, Julie was General Counsel at 3Q.

Session Resources

Summary

The European Union's AI Act is a significant piece of legislation that addresses the development and use of AI across the EU. This leading law has influenced AI regulations globally, including in the United States, where states like Colorado and Tennessee have introduced their own AI-related laws. For companies operating in both regions, understanding these regulations is essential. The EU AI Act distinguishes between high-risk and low-risk AI applications, imposing stricter requirements on high-risk activities to safeguard individuals' rights. Developers (those who bring AI to market) and deployers (those who put it to use) have specific roles and obligations under this framework. The U.S., while lacking a comprehensive federal AI law, relies on existing laws that can address AI-related issues, notably consumer protection and privacy statutes. This complex regulatory environment requires companies to conduct rigorous risk assessments, maintain transparency, and ensure compliance with both EU and U.S. standards. Expert guests Odia Kagan and Julie Honor emphasize the importance of understanding the implications of AI regulation and suggest practical steps organizations can take to navigate this intricate landscape.

Key Takeaways:

  • The EU AI Act classifies AI applications by risk level, imposing stricter regulations on high-risk uses.
  • The U.S. currently addresses AI issues through existing consumer protection and privacy laws.
  • Companies must conduct risk assessments to determine their AI's impact and comply with regulations.
  • Transparency in AI usage is essential for compliance and maintaining consumer trust.
  • Stakeholders across organizations should be involved in understanding and implementing AI regulations.

In-Depth Analysis

EU AI Act Overview

The European Union AI Act represents a comprehensive regulatory approach to AI, distinguishing itself by being a standalone law dedicated to AI governance. Crafted over several years, it categorizes AI applications based on risk, with high-risk applications facing stringent obligations. These include risk assessments, transparency requirements, and compliance measures akin to those seen in the General Data Protection Regulation (GDPR). Developers and deployers, the main actors within the AI ecosystem, have specific duties, with developers bearing more responsibilities. The Act's goal is to ensure safety in AI innovation, avoiding potential adverse impacts on fundamental rights. "The point of legislation is to make sure that you can innovate, but do it safely," noted Julie Honor. However, while the Act is in effect, its enforcement is phased, allowing organizations time to adapt. The EU AI Act is a leading model, influencing global AI regulation, yet its comprehensive nature poses challenges, particularly for smaller enterprises.

U.S. AI Regulatory Framework

In contrast to the EU, the U.S. lacks a comprehensive federal AI regulation, instead leveraging existing laws to address AI-related issues. This practical approach sees consumer protection and privacy laws as key tools in AI governance. Odia Kagan highlighted that in the U.S., "normal laws apply" to AI, meaning actions illegal for humans remain illegal when performed by AI. This includes privacy, competition, and employment laws, among others. The Federal Trade Commission (FTC) plays an important role, enforcing regulations against unfair or deceptive practices involving AI. State-level laws, such as the Colorado AI Act, provide additional governance, with focus areas including discrimination and consumer rights. This decentralized approach allows for flexibility but can result in a mix of regulations that organizations must manage.

Risk Assessment and Transparency

Conducting thorough risk assessments is essential for organizations deploying AI technologies. This involves understanding the AI's functions, the data it processes, and its potential impact on individuals' rights. Odia Kagan emphasized the importance of transparency, stating, "You need to understand what it is that you are deploying." Organizations must ensure AI systems do what they claim and assess whether they can achieve objectives with less invasive means. Transparency involves informing users about AI's role and impact, particularly in high-risk areas such as employment and credit decisions. This transparency is integral to compliance and consumer trust, forming a fundamental part of both EU and U.S. regulatory frameworks.

Organizational Governance and Compliance

Successful management of AI regulations requires cross-departmental collaboration within organizations. From HR to finance, each department may use AI differently, necessitating a broad-based governance approach. Establishing AI literacy and training programs is essential, particularly in the EU, where literacy requirements become enforceable. Julie Honor pointed out that "compliance deadlines are coming in 2026," urging organizations to act now. Governance structures, such as risk committees or AI-specific subcommittees, can help manage compliance efforts. These bodies must balance innovation with regulatory obligations, ensuring AI technologies align with company values and legal requirements. The focus on governance highlights the importance of embedding regulatory compliance into the organizational culture.


Related

webinar

Understanding Regulations for AI in the USA, the EU, and Around the World

In this session, two experts on AI governance explain which AI policies you need to be aware of, how governments are treating AI regulation, and how you need to deal with them.

webinar

The EU AI Act: How Will It Affect Your Business?

Dan Nechita, EU Director for the Transatlantic Policy Network, and Lily Li, Founder and Lawyer at Metaverse Law, explain what the legislation involves, how it will affect your business, and how to comply with the legislation.

webinar

EU AI Act Readiness: Meeting Your Organization's AI Literacy Requirements

Anandhi, CRAIO at Esdha, and Will, a Senior Associate at Ashurst, teach you how you and your organization can comply with the AI literacy clause of the EU AI Act.

webinar

Best Practices for Developing Generative AI Products

In this webinar, you'll learn about the most important business use cases for AI assistants, how to adopt and manage AI assistants, and how to ensure data privacy and security while using AI assistants.

webinar

Scaling AI Adoption in Financial Services

Explore regulatory AI initiatives in financial services and how to navigate the challenges they pose.

webinar

Empowering Government with Data & AI Literacy

Richard Davis, CDO at Ofcom, discusses how government agencies can cultivate a culture that puts data-driven decision making and the responsible use of technology at the center.