
ChatGPT and The Future of AI Regulations

Governments around the world are considering new AI regulations to tackle the potential dangers of next-generation AI tools like ChatGPT
Apr 2023  · 8 min read


The release of ChatGPT in late 2022 marked an unprecedented milestone in the development of AI. Developed by OpenAI, ChatGPT is a next-generation chatbot that can produce all kinds of human-like text in a matter of seconds. While it is the most popular, ChatGPT is only one of many generative AI tools. Generative AI is a subfield of AI that focuses on the creation of content, such as text, images, video, and sound.

Generative AI tools like ChatGPT are set to disrupt nearly every industry and sector. However, the ongoing AI revolution comes not only with benefits, but also with societal, economic, and environmental risks that have to be assessed carefully.

Businesses and governments are thinking of ways to regulate AI. For example, to buy time to set safety standards and mitigate potential dangers, tech leaders recently called for a six-month pause on the development of powerful AI tools like ChatGPT. On the public side, governments around the world are proposing measures and AI regulations to ensure that AI tools reach society in a safe, transparent, and accountable manner.

In this post, we will assess the importance of having legal standards to regulate AI, the likely impact of proposed AI legislation on popular AI tools like ChatGPT, and how AI regulations will shape the future development of AI.

Why Are AI Regulations Important?

Cutting-edge technologies like AI have the potential to deeply transform the way we live. The AI revolution is in its infancy, and it’s too early to know what the society of tomorrow will look like. But to ensure that AI only delivers positive outcomes, society needs the right tools to control and steer its development. Here is where AI regulation plays an important role.

Below you can find a list of some compelling reasons to regulate AI:

  • AI can significantly affect fundamental rights. Governments and corporations use AI to make decisions that can have a significant impact on our lives, with serious implications for fundamental rights, such as the right to life, honor, privacy, and data protection.
  • AI can exacerbate discrimination. When AI tools present bias, they often result in decisions that discriminate against minority groups.
  • AI can increase social disruption. AI has been used to create fake news and spread misinformation, leading to social unrest and polarization.
  • AI needs to be accountable. Actors involved in the development and use of AI should be accountable for its proper use. A set of rules and standards is required to ensure AI safety and trust.
  • AI needs to be transparent. Algorithmic opacity is one of the main concerns associated with AI. AI regulation could provide for transparency measures that would allow regulators to audit AI systems and better understand the costs and impacts of AI tools like ChatGPT.

The Current State of AI Regulation

Following the release of ChatGPT and other powerful generative AI tools, governments are starting to take action to regulate AI. 

Here is a list of the most recent developments in AI regulation across the globe:

  • US. The government recently announced a technical inquiry to assess whether regulatory measures should be placed on AI tools like ChatGPT. It remains to be seen how this potential regulation will relate to the so-called Algorithmic Accountability Act, proposed in 2022 and currently in the draft stage. In the meantime, the US has already taken partial steps toward regulating AI with the approval of the AI Bill of Rights and the Initiative on AI and Algorithmic Fairness.
  • European Union. Proposed in 2021 and currently under negotiation, the EU AI Act is the first attempt to regulate AI by a major player. It proposes a risk-based approach, under which certain AI applications are banned outright, while systems deemed high-risk must meet certain requirements before entering the market. Following the recent boom in generative AI, new revisions to the draft proposal are expected.
  • Canada. The country is currently debating its own AI act, called the Artificial Intelligence and Data Act, which follows a similar risk-based approach as its European counterpart. New revisions are expected to address the impact of generative AI.
  • China. In a recent move, China has proposed rules to control ChatGPT-like AI tools developed in the country. The proposed rules would make companies responsible for the content their tools generate, which would have to comply with certain standards before being rolled out.

Implications of AI Regulation for ChatGPT

Despite the good reasons to regulate AI and the recent initiatives outlined in the previous section, regulating cutting-edge technologies like AI is a challenging task: governments need to find a way to protect society from unexpected risks and pitfalls while preserving innovation and access to the benefits these technologies offer. This is why AI remains largely unregulated for now.

Since there is currently no dedicated AI legislation in place, major attempts to regulate AI have been made through existing laws, such as data protection regulations. For example, Italy has temporarily banned ChatGPT over potential violations of the EU General Data Protection Regulation (GDPR). Other European countries, such as France, Spain, and Germany, are waiting to take similar action until the issue is discussed at the European level.

Incoming AI regulation will likely address some of the concerns that have arisen following the release of ChatGPT, namely:

  • Processing of confidential information. ChatGPT uses data provided by users, including sensitive data, to train its models. Privacy experts have warned that this policy can lead to leaks of personal data or confidential information. For example, Samsung workers recently discovered that they had accidentally leaked confidential code to ChatGPT. Incoming AI regulation could ensure that the processing of sensitive data doesn't breach data protection laws.
  • Unmoderated content. While ChatGPT is equipped with some content filters, they can be bypassed with the right prompts. Also, there are concerns that vulnerable users, like children, may be exposed to inappropriate language and adult topics. AI regulation could establish clear safeguards to tackle these issues.
  • Source of ChatGPT outputs. Currently, ChatGPT doesn’t provide the sources used to generate answers. This may make it difficult for users to know if the information provided is true and accurate, thereby increasing the risk of spreading misinformation.
  • Plagiarism and copyright. There are various legal issues concerning the attribution of the data used to train ChatGPT and the ownership of the content it generates. There is no precedent in the fields of copyright and intellectual property. Future AI regulation is needed to clarify these points.
  • Where to use ChatGPT. The possibilities of ChatGPT are endless, but its use in certain scenarios can lead to negative outcomes. For example, there is a heated conversation about the implications of ChatGPT for education. What if students use it to answer exams or write assignments? Should teachers use ChatGPT in class? These remain open questions, but it's likely that future AI laws will regulate some of these issues.

Conclusion

The tech companies behind tools like ChatGPT have taken the world by surprise. They are the forerunners of the AI revolution, but it's up to governments to steer it and ensure that it delivers positive outcomes. Legislators around the world are starting to take action with ambitious AI regulations. It's important to keep an eye on these legal developments, as the way we approach AI today will define the world we live in tomorrow.

In the meantime, we highly recommend staying up to date with ChatGPT and the technology behind it through DataCamp's learning materials.
