Artificial Intelligence (AI)

Regulatory content

Artificial Intelligence

Artificial Intelligence refers to software that can imitate human cognition and learning, enabling devices and machines to perform tasks that would otherwise require human input.

195 Countries Covered

157 Sources in C2P

Content Overview

Artificial Intelligence (AI) is increasingly prevalent in our everyday lives and continues to grow in popularity and demand, with AI applications ranging from smart appliances and robotic manufacturing to surveillance systems and speech, image, and language processing. With innovations in AI on the rise globally, manufacturers of products that use AI software will need to ensure compliance with emerging AI regulations.

Regulators typically take a risk-management approach to AI. The swift evolution and use of AI brings enormous benefits, but also the potential for harm in the spheres of safety, privacy, equality and more. This has prompted countries to begin regulating AI.

The EU is leading the way with the first comprehensive legal framework for AI globally, the AI Act (Regulation (EU) 2024/1689), which seeks to strike a balance between excellence in AI and trust in AI. It is a broad regulation covering the use of AI in a wide range of applications, and it will have a significant impact on how companies develop, market and use smart digital technologies.

In the U.S., the National AI Initiative Act of 2020 (Division E, Sec. 5001) became law on 1 January 2021, providing for a coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security.

Manufacturers of AI products need to be aware of developing obligations including:

  • Ensuring they are not deploying unacceptable-risk systems that use exploitative or manipulative practices, e.g. social scoring systems
  • Assessing the level of risk their AI system will operate at. High-risk systems, such as those that affect critical infrastructure or software that affects workers’ performance reviews, will be regulated at a higher level
  • Determining the level of autonomy the AI system will have. Some systems will require human input to prevent bias or unethical practices, some may require partial human oversight, and some will not require any human input
  • Ensuring explainability and the use of good data sets. Opaque AI systems are unlikely to be accepted in Western markets, so a high level of explainability will be required to reduce bias and ensure fairness
  • Ongoing monitoring of AI systems with reporting protocols to ensure proper compliance management
  • Adhering to labeling, conformity assessment and product liability requirements

Coverage Included

Our regulatory content in C2P is comprehensive, including historical coverage, with a robust QA process to ensure quality, consistency and accuracy. Below is a high-level summary of our coverage for this topic:
  • EU: Harmonised Rules on Artificial Intelligence, Regulation (EU) 2024/1689
  • EU: Standardisation Request to CEN and CENELEC Regarding High-risk AI Systems in Support of Regulation (EU) 2024/1689, Implementing Decision C(2025)3871
  • USA: Artificial Intelligence National Policy Framework, Executive Order 14365, December 2025
  • South Korea: Framework Act on the Advancement of Artificial Intelligence and the Establishment of Trust-Based Systems, Law No. 20676, 2025
  • Council of Europe: Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, CETS 225, September 2024
  • Kyrgyz Republic: Digital Data, Draft Code, August 2023
  • China: Measures for Identifying AI-Generated Content, Announcement No. 2, 2025
  • Canada: The Artificial Intelligence and Data Act (AIDA), Companion Document, March 2023
  • Canada: Responsible Development and Management of Advanced Generative AI Systems, Voluntary Code of Conduct, 2023
  • South Korea: Establishment of Korean Industrial Standards (KS) for Certain Artificial Intelligence (AI), Notice No. 2023-612
  • India: Artificial Intelligence (AI) Strategy, Expert Group Report, October 2023
  • UK: National AI Strategy, September 2021

We also cover the use of AI algorithms in relation to our core products. These are typically defined as “a set of instructions or rules that enable machines to learn, analyze data and make decisions based on that knowledge, and can therefore perform tasks that would typically require human intelligence, such as recognizing patterns, understanding natural language, problem-solving and decision-making”. Given the evolving landscape of AI regulation, these requirements can be high level, highly technical in nature, and often do not yet specify particular products.

Learn more about our Regulatory Coverage

Speak to one of our team today for more information on our regulatory content.

Other Regulatory Content

Related Coverage