
ChatGPT and The Future of AI Regulations

Governments around the world are considering new AI regulations to tackle the potential dangers of next-generation AI tools like ChatGPT
Apr 2023 · 8 min read

[Image: An AI stands in a court of law]

The release of ChatGPT in late 2022 marked an unprecedented milestone in the development of AI. Developed by OpenAI, ChatGPT is a next-generation chatbot that can produce all kinds of human-like text in a matter of seconds. While the most popular, ChatGPT is only one of many generative AI tools, a subfield of AI focused on the creation of content such as text, images, video, and sound.

Generative AI tools like ChatGPT are set to disrupt nearly every industry and sector. However, the ongoing AI revolution comes not only with benefits but also with societal, economic, and environmental risks that must be assessed carefully.

Businesses and governments are thinking of ways to regulate AI. For example, to buy time to set safety standards and mitigate potential dangers, tech leaders recently called for a six-month pause on the development of powerful AI tools like ChatGPT. On the public side, governments around the world are proposing measures and AI regulations to ensure that AI tools reach society in a safe, transparent, and accountable manner.

In this post, we will assess the importance of having legal standards to regulate AI, the likely impact of proposed AI legislation on popular AI tools like ChatGPT, and how AI regulations will shape the future development of AI.

Why Are AI Regulations Important?

Cutting-edge technologies like AI have the potential to deeply transform the way we live. The AI revolution is in its infancy, and it’s too early to know what the society of tomorrow will look like. But to ensure that AI only delivers positive outcomes, society needs the right tools to control and steer its development. Here is where AI regulation plays an important role.

Below you can find a list of some compelling reasons to regulate AI:

  • AI can significantly affect fundamental rights. Governments and corporations use AI to make decisions that can have a significant impact on our lives, with serious implications for fundamental rights such as the right to life, honor, privacy, and data protection.
  • AI can exacerbate discrimination. When AI tools present bias, they often result in decisions that discriminate against minority groups.
  • AI can increase social disruption. AI has been used to create fake news and spread misinformation, leading to social unrest and polarization.
  • AI needs to be accountable. Actors involved in the development and use of AI should be held accountable for its proper use. A set of rules and standards is required to ensure AI safety and trust.
  • AI needs to be transparent. Algorithmic opacity is one of the main concerns associated with AI. AI regulation could mandate the transparency measures required to audit AI systems and better understand the costs and impacts of tools like ChatGPT.

The Current State of AI Regulation

Following the release of ChatGPT and other powerful generative AI tools, governments are starting to take action to regulate AI. 

Here is a list of the most recent developments in AI regulation across the globe:

  • US. The government recently announced a technical inquiry to assess whether regulatory measures should be placed on AI tools like ChatGPT. It remains to be seen how this potential regulation relates to the Algorithmic Accountability Act, proposed in 2022 and currently in the draft stage. In the meantime, the US has already taken partial steps toward regulating AI with the approval of the AI Bill of Rights and the Initiative on AI and Algorithmic Fairness.
  • European Union. Proposed in 2021 and currently under negotiation, the EU AI Act is the first attempt to regulate AI by a major player. It proposes a risk-based approach, where certain AI applications are banned outright, while those deemed high-risk will need to meet certain requirements to enter the market. Following the recent boom in generative AI, new revisions to the draft proposal are expected.
  • Canada. The country is currently debating its own AI act, called the Artificial Intelligence and Data Act, which follows a similar risk-based approach as its European counterpart. New revisions are expected to address the impact of generative AI.
  • China. In a recent move, China has proposed rules to control ChatGPT-like AI tools developed in the country. The proposed rules would make companies responsible for the content their tools generate, which would have to comply with certain standards before being rolled out.

Implications of AI Regulations on ChatGPT

Despite the good reasons to regulate AI and the recent initiatives outlined in the previous section, regulating a cutting-edge technology like AI is a challenging task: governments need to find a way to protect society from unexpected risks and pitfalls while preserving innovation and access to the benefits these technologies offer. This is why AI remains largely unregulated for now.

Since there is currently no dedicated AI legislation in place, major attempts to regulate it have relied on existing laws, such as data protection regulations. For example, Italy has temporarily banned ChatGPT over potential violations of the EU General Data Protection Regulation (GDPR). Other European countries, such as France, Spain, and Germany, are holding off on similar actions until the issue is discussed at the European level.

Incoming AI regulation will likely address some of the concerns that have arisen following the release of ChatGPT, namely:

  • Processing of confidential information. ChatGPT uses data provided by users, including sensitive data, to train its models. Privacy experts have warned that this policy can lead to leaks of personal data or confidential information. For example, Samsung workers recently discovered that they had accidentally leaked confidential code to ChatGPT. Incoming AI regulation could ensure that the processing of sensitive data doesn't breach data protection laws.
  • Unmoderated content. While ChatGPT is equipped with some content filters, they can be bypassed with the right prompts. Also, there are concerns that vulnerable users, like children, may be exposed to inappropriate language and adult topics. AI regulation could establish clear safeguards to tackle these issues.
  • Source of ChatGPT outputs. Currently, ChatGPT doesn't provide the sources used to generate its answers. This can make it difficult for users to verify whether the information provided is accurate, thereby increasing the risk of spreading misinformation.
  • Plagiarism and copyright. There are various legal issues concerning the attribution of the data used to train ChatGPT and the ownership of the content it generates. There is little established precedent in the fields of copyright and intellectual property, and future AI regulation is needed to clarify these points.
  • Where to use ChatGPT. The possibilities of ChatGPT are endless, but its use in certain scenarios can lead to negative outcomes. For example, there is a heated conversation about the implications of ChatGPT for education. What if students use it to answer exams or write assignments? Should teachers use ChatGPT in class? These remain open questions, but it's likely that future AI laws will regulate some of these issues.

Conclusion

The tech companies behind tools like ChatGPT have taken the world by surprise. They are the forerunners of the AI revolution, but it's up to governments to steer it and ensure that it delivers positive outcomes. Legislators around the world are starting to take action with ambitious AI regulations. It's important to keep an eye on these legal developments, as the way we approach AI today will define the world we live in tomorrow.

In the meantime, we highly recommend you stay up to date with ChatGPT and the magic behind it with DataCamp's materials on the topic.
