AI Regulations Around the World: A Comprehensive Overview of Global Policies

AI applications are being deployed across industries, from healthcare, insurance, financial services, and banking to transportation and media. These applications are changing the way we live and work, but they have also brought significant challenges and risks that call for AI regulations.

The need for strong, principled, and adaptable regulatory systems is more pressing than ever as countries everywhere work out how best to govern AI technology. This post discusses how various countries around the world have set up their AI regulations.

Understanding the Importance of AI Regulations

AI regulations help ensure that people can trust what AI has to offer. AI can solve and simplify complex tasks, but it can also produce undesirable outcomes. For example, AI has simplified hiring by taking over candidate shortlisting, yet it is not always possible to find out why it rejected certain candidates. Because an AI system works solely on the data it was trained on, the entire process can be biased, and you could miss out on outstanding talent. This is one reason why AI regulations are important today.

Impersonation through deepfakes is another challenge most of us are starting to experience: videos of celebrities or politicians delivering fabricated speeches circulate widely on social media. Social surveillance, in which AI is used to observe and track your activity across social media platforms, is another form of intrusion. AI-enabled fraud, which leverages sophisticated algorithms to deceive and manipulate, further underscores the need for robust AI regulations that protect individuals and maintain trust in technology. In the wrong hands, AI can be a potent weapon that puts the entire world at risk.

Implementing AI regulations will help in -

  • Addressing bias, compliance, and other risks that AI applications create.
  • Setting clear requirements and instructions for AI systems.
  • Prohibiting AI practices that could create problems or risks in the future.
  • Identifying high-risk applications and taking the necessary actions.
  • Establishing a governance structure at national and international levels.

That being said, let us take a look at the bigger picture by understanding how various countries around the world are striking a balance between AI regulations and technology innovation.

AI Regulations Across Different Countries


1. The European Union (EU)

To ensure the thoughtful and safe use of AI in the competitive European market, the European Commission proposed the EU AI Act, which lays down rules for AI systems. The act sets stringent guidelines for collecting, using, and storing personal data, offers a standardized definition of an AI system, and contains provisions that protect individuals' rights with respect to the use of AI systems.

This AI Act complements the GDPR and offers the EU significant control over AI development and usage, promoting innovation while protecting EU citizens. The act is expected to be published this year. Organizations dealing with AI technology will then have a transitional period of about 24 months to implement its requirements; failure to do so may lead to significant fines and liability risks.

AI systems will be sorted into four categories based on the risks they pose - unacceptable risk, high risk, limited risk, and minimal risk. Systems posing an unacceptable risk will be completely prohibited, while minimal-risk systems will be allowed to operate largely without restrictions. High-risk AI systems, for instance, will be required to meet comprehensive documentation, system monitoring, and other quality requirements to maintain transparency. Each category carries its own legal provisions, obligations, and penalties.
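To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of how an organization might encode the Act's four risk tiers in an internal compliance checklist. The tier names follow the article; the obligation strings and the obligations_for helper are hypothetical placeholders, not quotations from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict documentation and monitoring duties
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # largely unrestricted

# Hypothetical checklist per tier; the entries are illustrative
# placeholders, not the Act's legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["comprehensive documentation", "system monitoring", "quality management"],
    RiskTier.LIMITED: ["disclose AI use to end users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative compliance checklist for a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```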

2. The United States of America

In October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, covering the development, deployment, and use of automated systems. Unlike the EU's AI Act, the blueprint is non-binding. It lists five principles intended to minimize the potential harm AI systems pose to people's rights and opportunities, and to protect US citizens as AI grows in capability and functionality -

  • Data privacy - To protect consumers from abusive data practices.
  • Notice - To help consumers understand when an automated system is being used and how it impacts them.
  • Algorithmic discrimination protection - To design and use systems so that consumers do not face discrimination from algorithms.
  • Safety - To protect consumers from unsafe and ineffective systems.
  • Alternatives and fallback options - To ensure that consumers can opt out when needed and have access to a human support person for query resolution.

Another purpose of this blueprint was to provide a framework for relevant federal agencies to regulate emerging AI technologies and to address the ethical and legal issues surrounding them.

Besides this blueprint, the US National Institute of Standards and Technology (NIST) has published a second iteration of its AI Risk Management Framework, which helps companies that develop or deploy AI systems assess and manage the risks associated with those systems. The framework offers voluntary guidelines and recommendations that promote the responsible development and use of AI systems.

3. United Kingdom

The UK government has developed a principles-based framework that existing regulators are expected to interpret and apply within their specific domains. The AI Regulation White Paper (the White Paper) was released to announce this proposed regulatory framework, which rests on five principles, namely -

  • Safety, security, and robustness
  • Fairness
  • Transparency and explainability
  • Accountability and governance
  • Contestability and redress

The Department for Science, Innovation and Technology also introduced the Data Protection and Digital Information (No 2) Bill, which addresses the risks associated with AI-powered systems and sets out how data protection applies to them. The main purpose of the principles-based framework is to balance innovation and safety while assessing the challenges, risks, and regulatory gaps associated with AI.

The UK government has already set up a central function for monitoring and assessing AI risks across the country. Its role is to conduct targeted consultations on AI risks with the help of regulator representatives and to periodically reassess the regulatory framework. The government has also launched a digital hub in partnership with the Digital Regulation Cooperation Forum, bringing together the Competition and Markets Authority (CMA), the Information Commissioner's Office (ICO), Ofcom, and the Financial Conduct Authority (FCA) for informed decision-making.

4. India

India currently has no specific laws or regulations for AI systems. However, the country is taking initial steps toward AI regulation through the Ministry of Electronics and Information Technology (MeitY). MeitY has issued advisories instructing organizations to obtain its permission before deploying risky AI models, such as generative AI software, large language model algorithms, and other under-tested AI tools.

Although India does not have dedicated AI regulations, it has introduced a series of initiatives and guidelines that promote responsible AI development and deployment. For example, AI systems must not facilitate bias or discrimination of any kind, and all AI-generated media and text must be labeled with unique identifiers to ensure easy identification.

Below are a few of the Indian government's initiatives:

  • In 2018, NITI Aayog launched #AIforAll, the country's first national AI strategy, which identified critical areas for AI innovation such as healthcare, agriculture, education, transportation, and smart cities.
  • In 2021, NITI Aayog, building on the National AI Strategy, drafted a document outlining seven principles for the responsible use of AI systems: safety and reliability; equality; inclusivity and non-discrimination; security and privacy; accountability; transparency; and the protection and reinforcement of positive human values.
  • In 2023, the Digital Personal Data Protection Act (DPDP Act) received the President's assent; it governs the processing of digital personal data and addresses the privacy issues raised by AI platforms.

5. Australia

The Australian Government has announced its decision to implement a list of mandatory safeguards for the development and deployment of high-risk AI solutions. These include the following requirements -

  1. Auditing and testing the AI system to ensure product safety and data security.
  2. Establishing accountability for organizational roles and responsibilities.
  3. Maintaining transparency and disclosure regarding AI model design, data usage, and the labeling of AI-generated content.

It is also worth mentioning Australia's AI Ethics Framework. This framework lists eight ethical principles that guide the AI development and implementation process. They are -

  • Human, societal, and environmental well-being
  • Human-centered values
  • Fairness
  • Reliability and safety
  • Privacy protection and security
  • Contestability
  • Transparency and explainability
  • Accountability

6. Canada

Canada is expected to regulate AI usage through its Artificial Intelligence and Data Act (AIDA), which was introduced in June 2022 as part of Bill C-27 and is Canada's first AI act. The act aims to mitigate the harms of high-impact AI systems by subjecting them to stringent measures that reduce the risk of harm and promote transparency. This applies in the following scenarios -

  • Law enforcement
  • Emergency services and healthcare
  • Recruitment and hiring
  • Cost analysis
  • Identity verification
  • Online communication

Besides AIDA, a number of existing laws affect AI regulation in Canada, including the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA), Quebec labor law, and the Canadian Human Rights Act. These laws significantly impact several aspects of AI development and use.

With AIDA, Canada aims to offer a balanced approach to AI regulation, one that supports responsible innovation and global exposure for Canadian businesses.

7. Singapore

Currently, Singapore does not have any regulations that directly govern AI. However, it is one of the few countries with a dedicated government advisory council that presides over its digitization and technological innovation journey. The council was formed to handle two aspects of AI -

  • Identify and address all ethics-related questions that may occur due to AI
  • Advise the government on the governance, ethical, and policy issues related to AI usage

This council has helped the government address the emerging risks of AI technologies before they grow beyond control.

Beyond this council, the Singapore government has developed a number of frameworks and tools that enable responsible AI innovation while safeguarding public interests and privacy. For instance, AI Verify is a governance testing framework and toolkit designed to help organizations test their AI systems; it lets companies validate the performance of their AI systems through standardized tests. Similarly, the Model AI Governance Framework (commonly known as the 2020 framework) offers detailed guidance to organizations on addressing ethical and governance issues when deploying AI systems.

The Singapore government has also amended existing regulations to accommodate the development of AI technologies. For instance, the Road Traffic Act regulates the trial of AI-driven autonomous vehicles, and the Cybersecurity Act requires organizations to reveal all AI security methodologies and mechanisms to the system users.

8. United Arab Emirates (UAE)

The UAE does not yet have dedicated AI regulations, but the government has announced that AI will be a regulated activity: businesses leveraging AI will have to obtain approvals from the relevant regulatory authorities to conduct AI-related activities in the country. The government has also established an AI ministry to provide resources and impart knowledge for informed decision-making. This ministry will oversee the rapidly growing sector and promote a secure balance between AI capabilities and responsible usage.

Here are a few more initiatives and strategies introduced by the UAE government to regulate the development and deployment of AI within the country.

  • National Program for Artificial Intelligence - This program offers a vast library of resources covering the latest advances in AI and robotics.
  • National Artificial Intelligence Strategy 2031 - This strategy lays down a framework for the general adoption of AI across various industries. It shares detailed policies, initiatives, and investments made by the UAE government for AI development.
  • AI Coding License - This license is a special license introduced for programmers who wish to develop intuitive AI tools and programs for UAE-based companies.
  • CEO for Artificial Intelligence - The UAE government has also created a new role, Chief Executive Officer for Artificial Intelligence, to preside over and guide AI adoption nationwide.

9. South Korea

South Korea does not yet have a dedicated AI regulation, but it is putting together comprehensive legislation to support the development and use of AI in the country. Named the AI Act, this legislation aims to streamline AI technology development and lists a strict set of standards that organizations need to follow when developing and deploying new AI systems. Rather than relying on prior government approval, the act allows organizations to adopt a proactive approach toward AI system development. By helping companies develop AI systems in a compliant manner, South Korea is actively shaping its AI regulatory landscape.

There are various laws and policies in South Korea for regulating AI-related matters. A few of them are listed below.

  • The Personal Information Protection Act (PIPA) grants individuals the right to refuse automated decisions made by AI and also request explanations for such decisions.
  • The Korean New Deal focuses on sustainable AI research to create career opportunities for future generations.
  • The AI R&D Strategy aims to create an innovative AI ecosystem by analyzing the current state of AI technology and infrastructure to build outstanding AI projects.

10. Japan

Japan has made large investments in research and development for many years, and AI research and development is no different. Although the country does not have dedicated AI regulations in place, it takes a careful approach to supporting AI innovation while minimizing the harm such innovations might cause.

Japan's Ministry of Economy, Trade and Industry (METI) published an AI regulatory policy containing guidelines to support companies in developing and deploying AI solutions. The Japanese government also formed the Integrated Innovation Strategy Promotion Council, which published a document listing principles of human-centric AI that companies need to keep in mind for AI to be accepted and used by society. These principles were -

  • Human-centricity
  • Data protection
  • Safety
  • Fair competition
  • Fairness, transparency, and accountability
  • Education
  • Innovation

Japan has also published the Hiroshima International Guiding Principles for Organizations Developing Advanced AI Systems to establish guidelines for secure and trustworthy AI, and it has amended several existing policies and regulations, such as the Road Traffic Act and the Digital Platform Transparency Act.

What Does the Future of AI Regulation Look Like?

The future of AI regulation is likely to be highly dynamic and adaptive, responding quickly to technological advancements. Regulatory frameworks are expected to focus on ethical standards, transparency, and accountability to ensure that AI systems are fair and non-discriminatory. International collaboration will be crucial for providing consistency across borders while upholding public interests and facilitating innovation.

To create better and more informed policies, policymakers will increasingly work with multidisciplinary experts. These regulations will tackle fundamental issues, including privacy, security, and liability, with the aim of distributing AI's benefits widely while mitigating the associated risks. Balanced AI regulation supports sustainable technology development by weighing innovation against protection.

Deekshith Marla

Making AI adaptable