
What You Need to Know About the EU AI Act with Dan Nechita, EU Director at Transatlantic Policy Network

Adel and Dan explore the EU AI Act's significance, risk classification frameworks, organizational compliance strategies, the intersection with existing regulations, AI literacy requirements, the future of AI legislation, and much more.
Nov 28, 2024

Guest
Dan Nechita

Dan Nechita led the technical negotiations for the EU Artificial Intelligence Act on behalf of the European Parliament. For the 2019-2024 mandate, besides artificial intelligence, he focused on digital regulation, security and defense, and the transatlantic partnership as Head of Cabinet for Dragos Tudorache, MEP. Previously, he was a State Counselor for the Romanian Prime Minister with a mandate on e-governance, digitalization, and cybersecurity. He worked at the World Security Institute (the Global Zero nuclear disarmament initiative); at the Brookings Institution Center of Executive Education; as a graduate teaching assistant at the George Washington University; at the ABC News Political Unit; and as a research assistant at the Arnold A. Saltzman Institute of War and Peace at Columbia. He is an expert project evaluator for the European Commission and a member of expert AI working groups with the World Economic Forum and the United Nations. Dan is a graduate of the George Washington University (M.A.) and Columbia University in the City of New York (B.A.).


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

We ended up with a very good piece of legislation. You can see that from the fact that it has started to be copied all over the world now, and I'm sure it's going to have a tremendous impact around the world as more and more states with different regulatory traditions look into what the rules are for AI.

There's a rational explanation for the EU AI Act: once you have safer artificial intelligence systems, then you have more trust in the technology. More trust means more adoption. More adoption then means digital transformation, economic growth, and so on.

Key Takeaways

1

Understand the EU AI Act's risk classification framework to determine how your AI systems are categorized and what regulatory obligations apply, ensuring compliance and mitigating potential risks.

2

If deploying AI systems, conduct a fundamental rights impact assessment to ensure the system performs as intended without discrimination, especially in sensitive contexts like law enforcement or education.

3

For organizations using general purpose AI models, establish a clear understanding of your responsibilities in the AI value chain, especially if customizing models for high-risk use cases.


Transcript

Adel Nehme: Dan Nechita, it's great to have you on the show.

Dan Nechita: Thanks, Adel. Good to be here. Happy to be on.

Adel Nehme: Yeah, very excited to be chatting with you. So you were the lead technical negotiator for the EU AI Act. You also served as the head of cabinet for European Parliament member Dragoș Tudorache. I really hope I pronounced that name correctly. So maybe to set the stage, with the EU AI Act officially coming into effect on August 1st, 2024.

Could you start by giving us a bit of an overview? What exactly is the EU AI Act? Why is it such a significant piece of legislation in the world of AI?

Dan Nechita: Well, I hope you have quite a lot of time for me to give you an overview of it, but in brief, it's of course very important because it's the first really hard law that sets rules for artificial intelligence. And it does that throughout the European Union, so the entire European market, which is a very significant market.

And to give you, you know, the gist of it, it's a 150-page document after the negotiations, but trying to capture that, I would say it aims to make artificial intelligence used in the European Union more human-centric and safer to use. That's the gist of it. Of course, there is a rational explanation for this.

Once you have safer artificial intelligence systems, then you have more trust in the technology. More trust means more adoption. More adoption then means, you know, digital transformation, economic growth, and so on. But, in one sentence, I would say that its main goal is to protect health, safety, and fundamental rights in the European Union when artificial intelligence systems are used.

Adel Nehme: Okay, that's really great. You know, you mentioned we do have some time, and you mentioned that it's a 150-page document, man. Walk us through some of the guiding principles or key foundations that shaped the AI Act. What are the EU's principles, essentially, towards regulating AI?

Dan Nechita: Well, this is a product safety type of regulation, meaning that before placing a product on the market, let's assume, you know, a rubber ducky or an elevator, that product needs to be safe enough for it to be used. And it follows the same logic. Now, for artificial intelligence, and this is one of the first misconceptions that I have to go against when discussing the AI Act,

this is only for those systems that pose risks to health, safety, and fundamental rights. So basically, it's systems that could impact the lives and the rights of natural persons, and most of the rules, and we'll get to that a little bit later, apply to those systems.

If you're thinking about an AI system used in agriculture to decide the quantity of water for tomatoes, that AI doesn't really impact people and doesn't have any obligations. It's really when you get into the systems that decide things for people, that could impact their fundamental rights, that the rules come into play.

Adel Nehme: Okay, and I think that segues really well to my next question, because how you classify risk in the EU AI Act really determines the type of regulatory protection put into place for a given use case. So maybe walk us through the risk classification framework set out by the EU AI Act.

How are AI systems categorized, and what are the implications here for both developers and users of these types of technologies, depending on the use case?

Dan Nechita: Okay, so here I'll get a little bit more concrete, going from the broad picture to a more specific picture. The AI Act addresses, you know, mainly high-risk AI systems, and we call these high risk because that's a denomination; we could also call them class one or class two. It's a special category of systems, but the Act addresses risk on a gradient, from unacceptable risk to no risk.

And there are four, I would say, broad categories in which AI systems are being classified. The first one is unacceptable risk. For this, we use the most blunt instrument possible, which is prohibitions, and that unacceptable risk from using AI systems is really something that we don't want in the European Union.

It contravenes our values; it really has no use in the European Union. For example, mass surveillance or social scoring, like we see in authoritarian states, that discriminates against everybody based on a social score. These types of systems are outright prohibited.

It's a limited list, and it's not really the system, it's more the use in these scenarios that's prohibited. These are very few, of course, because prohibitions are very raw and blunt instruments. Then imagine, of course, the famous pyramid of risk: if this is the top, then the next layer would be the high-risk AI systems.

And these are really systems that could impact health, safety, or fundamental rights in a significant way. So we're talking about systems dealing with biometrics, systems used in critical infrastructure, in education, in employment, to make decisions about employment relationships, in law enforcement, in migration contexts, for example in asylum seeking, in the administration of justice, in democratic processes.

And these are rather discrete categories of use cases that were studied before and were identified as having potential risks to health, safety, and fundamental rights. Then, as you move down, you have limited-risk systems or use cases that are not necessarily directly posing significant risk, but they could have an impact.

For example, and these are fairly standard to explain, chatbots and deepfakes, where, of course, it really depends how these are used. It's not that they have an inherent risk. But for those limited-risk use cases, we found it appropriate to have some transparency requirements.

Now, if you're chatting with a chatbot as an end user, you need to be informed that this is an AI system, not a person on the other end of your conversation. And then, of course, the bulk of AI systems out there are no risk or minimal risk, like the example I gave before in agriculture, but you can think of many other industrial applications of AI

where there is no risk involved to health, safety, or fundamental rights, and those are pretty much unregulated, with a small exception that we'll probably talk about a little bit later. So that is the broad classification. But your question had multiple dimensions, so I'll go to the next part, which is this: of course, AI has evolved, and we as regulators have tried to keep up with the evolution of artificial intelligence, making sure that the regulation is future-proof and applies also into the future.

So when dealing with very powerful artificial intelligence, we looked rather at the models powering it. And here we're talking about the GPT-4s and the Claudes and the Llama models that are by now, you know, at the frontier of artificial intelligence. And for those, there is a possibility that a certain type of risk, a systemic risk, could materialize without it falling into any of the other categories before. And a systemic risk

really comes about when those models are then used and deployed in different systems downstream, all across an economy or across the European Union. For those, there is a different approach, because of course we're talking about the frontier of AI, which is a more co-regulatory, co-creative approach to regulation.

That is, we've established within the European Commission a body, an artificial intelligence office, it's called, that is meant to interact with those who build those very powerful systems and basically supervise them in a dynamic way: in terms of risk monitoring, in terms of reporting incidents when these incidents happen,

and so on. And, you know, we can get into this a little bit more later on. And then I do recall that you were asking about providers versus deployers. The providers are those who are building the systems. It's your, let's say, Microsoft, or it could be an SME who's building an AI system.

And many of the obligations to make those systems safe rest with them, because you're there when you build it, so you have some obligations to make the system safe. Then, on the other hand, the user is the entity deploying that system on the market. And here you really get very concrete about what I was saying earlier, you know, law enforcement, migration, and so on.

These would be public authorities; for education, it would be schools; it could be institutions; it could be basically anybody who uses those systems. They have fewer obligations. But still, because they are the ones using it in certain scenarios, they have a set of limited obligations to make sure, on their end, once the system is in their hands, that the system is safe.

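To make the risk tiers described above a bit more concrete, here is a minimal illustrative sketch in Python. The tier names, the example use cases, and the names RiskTier, EXAMPLE_USE_CASES, and classify are simplified assumptions drawn from the conversation, not an official mapping from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers as described in the episode (not legal categories)."""
    UNACCEPTABLE = "prohibited practices (e.g. social scoring, mass surveillance)"
    HIGH = "high-risk use cases (e.g. biometrics, education, law enforcement)"
    LIMITED = "limited risk, transparency duties (e.g. chatbots, deepfakes)"
    MINIMAL = "no or minimal risk (e.g. industrial or agricultural AI)"

# Hypothetical, highly simplified lookup keyed by intended use, for illustration only.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "exam_grading": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "crop_irrigation_optimizer": RiskTier.MINIMAL,
}

def classify(intended_use: str) -> RiskTier:
    """Return the sketch's risk tier for a given intended use, defaulting to minimal risk."""
    return EXAMPLE_USE_CASES.get(intended_use, RiskTier.MINIMAL)

if __name__ == "__main__":
    for use in ["exam_grading", "crop_irrigation_optimizer"]:
        print(use, "->", classify(use).name)
```

As Dan stresses, what matters in this sketch is the intended use rather than the algorithm itself: the same model could land in different tiers depending on the decision it supports.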

Adel Nehme: That's really great, and there's a lot to unpack here. So first, let's think about the risk levels, right? You mentioned unacceptable risk, high-risk systems, limited-risk systems, and essentially no-risk systems, like the ones that you find in agriculture, et cetera. So when you think about the different levels of risk here, what are the implications from a regulatory perspective if you're an organization rolling out use cases that could fit these different risk criteria?

I know with no risk there's no regulation here, but maybe walk me through the implications from a regulatory perspective if I'm an organization looking at a limited-risk system, for example, which I think is maybe where the majority of use cases sit if you're an average organization.

Dan Nechita: Recall, I was saying this is a very long and complex regulation, right? So I think it is case by case, but I'll try to generalize in terms of the implications, and I'll walk through all of them. I think that with prohibitions, it's fairly clear, right? If you engage in one of those prohibited practices, well, you don't want to do that, for sure.

The fines and the consequences for that are really, really high and are meant to be very dissuasive. If you're building a high-risk AI system, now we're talking about those who build, then you have a certain set of obligations that are comprised in a conformity assessment that you have to fulfill before placing that system on the market.

And that deals with having a risk management system in place, having good data quality, good data governance, maintaining logs, maintaining documentation, ensuring that you have an adequate level of cybersecurity for the system, and so on. So those are obligations if you build it. If you use it, you have a number of obligations depending on the type of entity that we're talking about.

So public authorities and those who are providing essential services, those would, of course, have the highest impact on fundamental rights. They have the obligation to conduct a fundamental rights impact assessment before putting the system into use. And that fundamental rights impact assessment is, in a sense, a parallel to the data protection impact assessment that comes from the GDPR, but it is really focused on making sure that, within a specific context of use, the AI system really performs in accordance with what it was supposed to do.

So say, for example, you use it in a neighborhood that is, you know, mostly immigrants. Is the system really going to perform the same as if you were to use it in your average neighborhood? And if not, what are the corrective measures you're taking to ensure that it doesn't lead to discrimination?

Then the limited risk, in a sense, is very simple on the provider side. Those who build those chatbots, those systems that can generate deepfakes, and AI that generates content need to make sure that the content is labeled. That's on the provider side. On the deployer side, of course, there is also a best-effort obligation. This happens in the AI value chain, where there are a lot more actors, so there are different targeted obligations for different actors. Say, for example, you take an off-the-shelf system and then you customize it to fit your company's needs, and whether by intent or just by the way you deploy it, that becomes a high-risk AI system.

Then you take on the responsibilities from the previous provider, because of course, once you've modified it, you've changed the intended purpose of the system. To go broader than the technical explanation, I think the general implication for those who deploy AI would be to check their systems against the use cases in the regulation that are considered to be high risk.

If a system is high risk, then of course you need to follow all the applicable rules in there. But also, I would say, if it's in a gray area that might or might not be covered, look, the intention behind the AI Act is to make those systems safer. Safer means more trust. It means a competitive edge on the market.

None of these are considered bad systems; it's just that they're made safer. So in the gray areas around those use cases, I think the implication would be to try to get as close to conformity as possible. But that's, you know, a whole other story, how to do that.

Adel Nehme: We're definitely going to expand on that as well. I'm curious to get your point of view on what type of use cases classify under what type of risk. So I'll give you an example. Say you're an organization whose data science team built an internal model looking at customer churn. You're looking at customer data; it's meant to understand internally who is most likely to churn from your service and who we should target with ads, for example. Here you are using demographic data and usage data, but the downstream application is just targeted emails, for example.

So how would you think about it from a level of risk, or how would this fit from a level of risk in the EU AI Act framework?

Dan Nechita: It's a good example, and I'm sure there are multiple examples in this category. What I would say at first sight is that, by way of the AI Act, this is not a high-risk AI system. Now, it really depends. It really depends on the application.

Where exactly are you doing that, and what kind of decisions are you making with it? And the easiest way is to check, because there is a discrete list of use cases that are considered high risk, what exactly the use of that particular AI system is. So say, for example, your customers are students, students getting admitted into universities, right?

And you let the AI system decide which student goes to which university, and that's your customer, right? It's the same AI system that you presented, only now you're using it to make decisions about assignment to different educational institutions. Now, in that case, it's a high-risk AI system. However, if you use it, for example, to target your customers in your shoe store, and you're selling running shoes, and you're targeting different customers based on the data using AI, in this particular case it's not a high-risk AI system.

Adel Nehme: So the algorithm itself is not necessarily what determines risk; it's the impact that the predictions have at the end. Just to clarify how to conceptually think about it.

Dan Nechita: Yes, exactly. So the whole regulation is based on the concept of intended use. This is not a perfect basis, because some systems are off the shelf and you use them as they are. But most of the time in the real world, you'll have a provider working with a deployer to customize an AI system to a specific use. So if, again, I'm going into the world of public authorities, I'm sure that they don't buy off-the-shelf solutions; most of the time they work with the provider for customized solutions. So then they know the intended purpose. And that's when the logic of, okay, this is going to be used in a high-risk use case, comes into play and activates the obligations for those who build it and those who use it as well.

Adel Nehme: Well, about general models, I found that very, very interesting. I think you defined really well here that there's a systemic risk with these general models, things like the Claudes, the GPT-4os of the world, et cetera, because they're inherently very dynamic.

They're not mechanistic. They can create output that goes against the use case they're being used for, for example. And you mentioned that there's a kind of co-regulation approach there. But I'm curious as well: if a general model is used in a high-risk use case, would that fall under the high-risk AI system rules, for example, or does that go through a different kind of regulatory route?

Dan Nechita: We struggled with that quite a bit, because it's a hard question, right? Say, for example, all of a sudden you decide as a police officer to ask ChatGPT whether or not to arrest somebody, and you make decisions based on that. It would be really hard to put the fault on the makers of, you know, your chatbot, because you've decided to misuse it.

So the whole logic is addressed in the value chain, as I was explaining earlier. And we tried to cover every single possible scenario by thinking of substantial modifications, including in the intended use. So, for example, if one takes a general-purpose model and customizes it for a specific use case that is high risk, then the responsibility rests

with the person or the entity that customized it for the high-risk use case. Nevertheless, these go kind of hand in hand, because what if you are fulfilling all of your obligations on one hand, but then your original model is significantly biased towards women, for example?

You cannot control that while fulfilling your responsibilities. So there are obligations for those, now there are basically two providers, to also cooperate in making sure that they fulfill the obligations, based on who can do what in the set of obligations. When we get into discussions about models, and especially the general-purpose models, there is a set of obligations that applies to all of them, which is fairly light touch, but which is meant to help this division of responsibilities along the value chain.

So, for example, even if they're not a model with systemic risk, general-purpose AI models above a certain size need to maintain good documentation and pass it down to those who then further integrate them into their systems. They also need to comply with current copyright provisions, for example, when building their models.

And then, when you get to the really powerful models, the ones with systemic risk, and this is before anybody takes them and uses them, they need to do model evaluations, including adversarial testing. They need to have a system for incident reporting in place, and they need to work

with the Commission to report these serious incidents. And they need adequate cybersecurity, which makes sense in a way, right? Especially if you're talking about a very powerful model deployed across the whole economy. So we really tried hard to disentangle whose responsibility is what.

So depending on where exactly one is on the value chain, all the way to the final police officer who can make a decision one way or another, there is a series of responsibilities covering, I think, most potential configurations.

Adel Nehme: You know, I think we could definitely create an episode just focused on the AI value chain and how the EU AI Act thinks about it. But I want to switch gears and maybe discuss a bit the organizational requirements of the EU AI Act. I think a lot of organizations are now thinking, okay, the EU AI Act came into force in August.

We need to think about how to become compliant. We need to audit our own AI systems, our own machine learning systems, and think about our vendors and what we need to do. So, from a high level, I know it's a 150-page document, but what are the main requirements that organizations need to be aware of today?

And maybe how do they begin preparing to meet these obligations?

Dan Nechita: They should look through it as, you know, a decision tree, almost, because while it is a stuffy document, it does have a very sound logic as to who has what kind of obligations. So the first step is to see whether you enter into the scope of the regulation by looking at the definition of artificial intelligence,

which basically defines what is in scope and what is not in scope of the regulation. That definition, in a sense, excludes first-generation AI that you might encounter, for example, in video games, which is not really AI. So looking first at that definition and the scope of the regulation and understanding whether or not they are in scope, that's very broad.

You know, you can be no risk but still in scope because of the definition. Then the second step is to basically do an inventory of the AI systems, because it's very easy to think, okay, one company, one AI system. Usually companies using AI have multiple systems interacting with one another,

relying on one another or basically doing different tasks to achieve, you know, one objective. So an inventory of the AI systems in use and their intended purpose would be the next step. Then the third step would be to look at the discrete list of high-risk use cases, because that's where most of the obligations are,

And to check whether or not any of their AI systems falls into one of those use cases. And these are, while I say discrete, they are not discrete to the level of explaining every single potential use case, but they're rather discrete categories with discrete use cases, but that could encompass a wide range of potential, individual applications.

So looking at that list, then. this is a risk based regulation. So we tried to narrow down even those categories to make sure that we don't inadvertently burden companies with regulatory obligations if they do not pose a risk to health, safety, and fundamental rights. So there are a number of exemptions, even for those who fall in that category so for example, if the AI system is used in preparatory or, literally a small task in that particular category, but it doesn't, necessarily is not the main deciding factor, and there might be an exemption for that.

And then finally after determining, okay am I in the high risk category? I'm assuming nobody is, as of now, building privately uh, mass surveillance AI to discriminate against the entire population of the union. So I, I, you know, I'm not going to spend much time on the prohibitions but you know, based on where that system falls, whether it's a high risk or it's a You know, limited risk, and it's very, very clear, you know, it's deepfakes, it's chatbots, and it's AI generated content So, depending on that then that's fairly simple to go to the, very concrete obligations.

So, you know, it's a decision tree getting to exactly where each system falls Then, of course, the same kind of logic also applies for those who deploy those systems going through that logic and understanding is this AI system that I'm deploying on the market in the high risk category, then as a deployer, what are my obligations here as the market moves more and more especially the, startup world and so on are building models That are general purpose and for those it's also a decision tree.

The first question is: is my model above one billion parameters? That's like an entry point to approximate what would qualify as a general-purpose AI model. If not, then no obligations here probably apply, though of course there's always some interpretation, especially in the gray areas around that particular number.

Then, if so, moving on: is my model potentially a model with systemic risk? And that probably applies to the future models that the top five or ten companies building AI will develop, and I think it applies to a lot of those.

Adel Nehme: And maybe you could have a small model that also poses systemic risk if it's really good enough from an intelligence perspective.

Dan Nechita: Indeed. And this goes back to what I was saying about the complexity of this; we have foreseen that as well. So, recall that the AI office interacts with the market to supervise those models that could pose systemic risks. And indeed, the AI office has a certain discretion, based on data and on a number of parameters or conditions that we put in the regulation, to designate a model as posing a systemic risk, because the different outputs of the models can present a risk for smaller models as well.

Or maybe, you know, I'm theorizing here, but smaller models that might not be as good and as polished could pose even bigger systemic risk. But then, on the other hand, are they going to be all across the European market? So it's about going through that logic and understanding where your model fits. You know, the first checkpoint is the number of parameters, which kind of hints at whether or not this is a general-purpose model.

And then, at the very end, at the frontier, we're talking about systemic risk. But that's of concern to only a few.
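To summarize the compliance walkthrough Dan just described, here is a minimal decision-tree sketch in Python. The class, field, and function names (AISystemProfile, rough_obligations) and the return strings are illustrative assumptions based on the conversation, not an authoritative reading of the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative description of one AI system in an organization's inventory."""
    name: str
    in_scope: bool               # meets the Act's definition of an AI system
    prohibited_practice: bool    # e.g. social scoring, mass surveillance
    high_risk_use_case: bool     # appears on the high-risk list (education, law enforcement, ...)
    merely_preparatory: bool     # only a narrow or preparatory task, possible exemption
    transparency_use_case: bool  # chatbot, deepfake, or AI-generated content

def rough_obligations(system: AISystemProfile) -> str:
    """Walk the simplified decision tree described in the episode (sketch only)."""
    if not system.in_scope:
        return "outside the Act's scope"
    if system.prohibited_practice:
        return "prohibited practice: do not deploy"
    if system.high_risk_use_case and not system.merely_preparatory:
        return "high risk: conformity assessment, risk management, data governance, logging, documentation"
    if system.transparency_use_case:
        return "limited risk: transparency and labeling obligations"
    return "minimal risk: no specific obligations (good practice still encouraged)"

# Example: an internal churn model used only for marketing emails.
churn_model = AISystemProfile(
    name="churn-predictor",
    in_scope=True,
    prohibited_practice=False,
    high_risk_use_case=False,
    merely_preparatory=False,
    transparency_use_case=False,
)
print(rough_obligations(churn_model))  # -> minimal risk: ...
```

A real classification would of course require legal review against the actual text; the sketch just mirrors the ordering Dan lays out: scope first, then prohibitions, then the high-risk list and its exemptions, then the transparency cases.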

Adel Nehme: There are quite a few different points across the decision tree that you mentioned, right? So let's switch seats here, Dan. You're leading AI or IT or information security at an organization today that operates in the EU. What would be the first thing that you would look at?

Dan Nechita: So I'm leading it, I'm building something, I'm building an AI?

Adel Nehme: Let's say you're a large pharmaceutical company or something along those lines, using AI within your operations, right? Like, what's the first thing that you would look at?

Dan Nechita: I think it's the same logic as before, but I will take the opportunity to go into, you know, medical devices and other products that are regulated separately by European law but also intersect with AI. So I would follow the same logic: see if I'm using AI by the definition, see if it's high risk,

see if it's really a key component in those use cases, and then I would follow the rules there. Or, if I'm deploying it, again, seeing what the obligations are for deployers. I will take the opportunity to talk a little bit about the intersection with other products that have AI embedded in them, because we also took that into account. You have products such as medical devices, for example, that have AI in them, for which you have two distinct universes.

One, the AI as a standalone system that is embedded somehow into another product; the rules for standalone systems apply, everything that we've discussed so far. On the other hand, if the AI comes as part of that product as a safety component, that is also classified as high risk, so then the rules here also apply. But, and there is a big but, because we're trying not to overburden the market with regulation.

Most of those products that are already regulated are regulated because they could pose threats to health or safety, and therefore they have rules of their own to prevent that. They have conformity assessments of their own. Sometimes these conformity assessments are even more stringent than the ones in the AI Act.

So we have tried throughout to make sure that these work hand in hand. They're complementary, not duplicating the type of responsibilities and obligations one has. So let's take medical devices: the AI Act will apply insofar as the obligations herein are not already covered by the current rules that apply to medical devices.

What that means is it could be that most of them are already covered, because they were foreseen. What I think the AI Act brings a little bit more, outside the product safety regulation, is this component of fundamental rights. While health and safety have long been a priority in European regulation, the question of technology and fundamental rights, until we dealt with AI, was an emergent one, because before, with a linear algorithm, you know exactly what it's going to output.

You can't expect it to discriminate; you can look at the code and see exactly what it does. So I do expect that even those who are already regulated will have an additional set of obligations derived from the AI Act. We had foreseen that, but we have tried to simplify the process as much as possible so that it's not duplicating the obligations, but rather making sure that they're comprehensive.

Adel Nehme: Yeah, I love how there's complementarity between different regulations depending on the area. Because, you mentioned, for example, with limited-risk systems, something like deepfakes could fit in there. But if you're using deepfakes to defame someone or libel someone, libel law will kick in here.

Dan Nechita: Yeah, exactly. Yeah, yeah, yeah.

Adel Nehme: Perfect. So maybe one last question when it comes to requirements, and this is still a bit loosely defined: can you explain the AI literacy requirements that organizations have as part of the EU AI Act? To give a bit of background here, the EU AI Act emphasizes that organizations need to have AI literacy.

I'd love if you could expand on what that means, and what that looks like, maybe, in practical terms.

Dan Nechita: Definitely. So that provision is, I think, one of the provisions that is broader; it goes a little bit outside of the risk-based approach that we've been discussing so far. And it is a soft obligation, not a hard obligation. It's a best-effort type of obligation for organizations using AI to make sure that the people operating that AI and deploying it are fairly well prepared to understand it,

you know, to understand what the impact of using that AI system is. So it is less about understanding, let's say, machine learning, and more about having the literacy to understand that it could go wrong, that it could make biased decisions, that it could discriminate, and that there is a natural tendency to be over-reliant on machine outputs. So that kind of literacy. Of course, it's not really a pyramid;

it's a gradient that goes from zero to fully prohibited, right? There's a whole continuum of applications. And I would say, the closer you get to a gray area, where your lawyers guide you through the regulation and say this does not apply to you, but you feel that you're very close,

the more important this requirement becomes, to say, okay, look, at least I'm preparing for any risks that my deploying this will have. In very concrete terms, I think it will be something that the Commission, together with the AI Board, will provide guidance on in the near future.

And there are many parts where the Commission still needs to provide guidance to make it a lot more specific. But look, if I were to put it in one sentence: make sure that you're not posing threats to health, safety, and fundamental rights. And the way you do that is by training the people who operate the system to be aware of what could go wrong,

even if, technically and by the letter of the law, you don't fall into one of those categories that we have there.

Adel Nehme: Yeah, and you mentioned that the Commission is still providing guidance on a few elements of the EU AI Act to make it a bit more concrete. And I think this segues into my next question, which is: what does the future of the EU AI Act look like? We've covered the present implications of the EU AI Act,

but it's also important to look at what's next. So how do you see the EU AI Act evolving? What are the next steps in terms of regulation? What are the timelines? I'd love to get from you how you see the regulation evolving.

Dan Nechita: I'll split this into two. One is the evolution of the regulation, which I'll leave for later, and the second is what's actually in the regulation, that is, the entry into application. The regulation as of now has a very phased and, I think, logical entry into application, to give the market time to prepare and the member states time to prepare their national competent authorities and so on.

But the first term is within the next six months. So, starting from August 1st, six months after the entry into force, the prohibitions apply. And that, again, is very simple: you can't do this in the European Union and you should not do it. There is no question about why this had to come first.

Then, within a year after, taking August 1st as the starting point, the rules on general-purpose models apply, and on general-purpose models with systemic risk. And also the artificial intelligence office needs to be fully functional, because, as I said, it's a dynamic interaction between those who build general-purpose models and the supervisory tasks of the office.

Within two years, all of the rules, basically the bulk of the regulation, apply; compliance for high-risk AI systems applies within two years. And then within three years, recall, we were discussing overlaps: you have these safety components in products that are already regulated, and disentangling that and adapting the compliance requirements to also fit with the AI Act will take a little bit more time.

So those come into effect within three years. That's really what's in there now. In terms of the other part of your question, on what I would call the future-proofness, how the regulation will evolve: there are mechanisms through which the Commission can amend it through a simplified procedure called comitology.

You don't want to get into that, but through it the Commission is empowered to change certain parts of the text depending on new evolutions. So we have empowered the Commission to modify the list of high-risk use cases based on very specific criteria. They can't just wake up all of a sudden and say, you know, Dan's tomato-growing AI is now high risk.

They have to follow a risk check and see whether exactly a new category belongs in that list, but they can add certain categories there. Then, over the next year, the Commission needs to issue a lot of very concrete guidance, going from the law to the very specifics, on marginal cases of what is considered high risk and what isn't, to provide clarity on the definition, and to provide clarity on a lot of other parts of the text.

Member states also need to set up institutions that apply the AI Act and enforce the rules. So there is a distinction: most of the rules will be enforced by member states. Everything dealing with high-risk systems will be enforced at the member state level, so member states need to have a national supervisory authority in charge of this.

And then, of course, as we were discussing, governing the more powerful AI will be done at the European level. So I think there are still a few moving parts, but there is enough guidance in the regulation itself to understand where this is going. The difference will be,

you know, between two companies, one falling under it, one not falling under it, at the margin. But even so, being compliant with many of the requirements here is, first, good practice. And second, it gives everybody a competitive advantage, even globally, to say, okay, my product is compliant with this regulation.
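To keep the phased timeline Dan outlines easy to reference, here is a small illustrative lookup in Python. The dates are computed from the August 1st, 2024 entry into force as described in the conversation; the milestone labels and the helper add_months are paraphrases and assumptions for illustration, not official terms.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # as mentioned in the episode

def add_months(start: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month kept; safe here since day=1)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)

# Paraphrased milestones from the conversation (offsets in months from entry into force).
MILESTONES = {
    "prohibitions apply": 6,
    "general-purpose model rules apply; AI Office fully functional": 12,
    "bulk of the regulation / high-risk AI system compliance applies": 24,
    "rules for AI in already-regulated products (e.g. medical devices) apply": 36,
}

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()}: {label}")
```

This mirrors the order in the answer above: prohibitions first, then the general-purpose model rules, then the bulk of the high-risk obligations, and finally the product-law overlaps.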

Adel Nehme: No, I appreciate the education here. I feel much more empowered to talk EU law with my EU bubble friends in Brussels. But maybe a couple of last questions while we still have you here, Dan. I want to understand the process of creating the EU AI Act. This was quite an interesting experience, to say the least.

It's been years in the making, and you were leading the effort, negotiating with member states, with organizations, technology providers, regulatory bodies. Can you share some insights on the challenges and successes you've encountered along the way? What's the biggest lesson you've learned, maybe, from building one of the biggest pieces of legislation in technology history? So yeah, I'd love to understand.

Dan Nechita: Well, what a long answer I have, but maybe I'll spend a minute explaining how this comes into being, because I think it's useful, and then I can share some of the challenges as well. So at the European level, the European Commission, sort of imagine it like a government of the European Union.

Don't quote me directly on that, because there's a distinction. But the European Commission is the one that proposes regulation. However, to sign on to the regulation, it is really the Parliament and the Council who are basically the co-legislators. The European Commission proposes a regulation;

it goes to Parliament and to Council. Parliament is representatives elected democratically, directly, and sent to the European Parliament, and Council is where the member states are directly represented as member states. Parliament, separately from Council, negotiates internally

the changes it wants to bring to a certain piece of regulation. Council does the same. So now you have three versions: the one that was initially proposed, the Parliament's, and the Council's. When this process is done, the three institutions meet together, and the co-legislators, that is, Parliament and Council, try to come to an agreement on the final text.

Of course, the Commission acts as an honest broker, a very vested honest broker, since they proposed the regulation in the first place. Because of the political weight and visibility and the impact, this was a very challenging negotiation, I would say, very politically charged and very important as well.

So, in Parliament, there were over, I think, 30 members of parliament working on this negotiation, and in Council you had a longer timeline. I'm not going to get into the rotating presidency, but over that longer time the Council position also changed a few times before it became final.

Then the whole negotiation process was, I would say, complicated, but in a good way, by the advent of generative AI, which was not foreseen in the original proposal, and the need to come up with some, as I was saying, flexible rules for those as well. And Parliament here, you know, we're very proud that we took the lead and actually crafted rules that make sense and are future-proof.

We put them into the mix and negotiated those as well. So that's the process. In terms of the challenges, one was the politics, dealing with very sensitive topics like remote biometric identification in public spaces, where you have very strong political ideologies on the left and on the right

on how exactly that technology should or should not be used and deployed. Another was the definition, which really determines the scope of the regulation, and also the compatibility of the regulation internationally. So we've done a lot of work with organizations like the OECD, who already had a definition of AI, to align both our definition and theirs.

So in the end, we ended up very close in terms of the definition, also working with our partners, for example, where they have a fairly similar definition that they're using at NIST, the National Institute of Standards and Technology. So that was a big part of defining, politically, what it is that we're talking about.

Then there was the whole discussion on copyright, because copyright is treated in a different part of EU law. But on the other hand, it was something that we needed to address in this text as well. And we landed on a compromise solution, which I think is fair: to add some transparency requirements.

That is, transparency is an obligation. However, it's not something that alters your costs; it alters your incentives to break copyright law or not, because if you're more transparent, then you can also be challenged on the way you've used copyrighted material. So negotiating these from different viewpoints, one in the Parliament where you have very ideological viewpoints, in the Council where you have member states really thinking about, you know, the competitiveness of their own member state in a sense,

and the Commission trying to keep up and to sort of also accept an update of their initial proposal, was quite a challenge. But it was fun. It was a very good experience, very challenging, but I think we eventually ended up with a

very good piece of legislation. You can see that from the fact that it has started to be copied all over the world now, and I'm sure it's going to have a tremendous impact around the world as more and more states with different regulatory traditions look into, okay, what are the rules for AI?

And we have a fairly good set of starting rules that I think make sense. There are only so many ways in which you can prevent risks from AI, and there are only so many ways in which you can categorize AI: is this risky or not? I mean, eventually, you know, you don't want to over-regulate and regulate my tomato AI, but you don't want to leave out, you know, the ones that do pose risks. So it will have, and it does already have, a big impact around the globe.

Adel Nehme: I think the juice was definitely worth the squeeze in this context.

Dan Nechita: Exactly.

Adel Nehme: Yeah, yeah. Dan, it was great to have you on DataFramed. Really appreciate you joining us.
