Regulatory content
Artificial Intelligence
Artificial Intelligence refers to software that can imitate human cognition and learning, enabling devices and machines to perform tasks that would otherwise require human input.
195 Countries Covered
157 Sources in C2P
Content Overview
Artificial Intelligence (AI) is increasingly prevalent in our everyday lives and continues to grow in popularity and demand, with applications ranging from smart appliances and robotic manufacturing to surveillance systems and speech, image, and language processing. With innovations in AI on the rise globally, manufacturers of products that use AI software will need to ensure compliance with emerging AI regulations.
Regulators typically take a risk management approach to AI. The swift evolution and use of AI bring enormous benefits, but also the potential for harm in the spheres of safety, privacy, equality and more. This has prompted countries to begin regulating AI.
The EU is leading the way by proposing the first-ever legal framework for AI globally, one that seeks to strike a balance between excellence in AI and trust in AI. It is a broad regulation, covering the use of AI in a wide range of applications, and it will have a significant impact on how companies develop, market and use smart digital technologies.
In the U.S., the National AI Initiative Act of 2020 (Division E, Sec. 5001) became law on 1 January 2021, providing for a coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security.
Manufacturers of AI products need to be aware of developing obligations including:
- Ensuring they are not engaging in unacceptable-risk systems that use exploitative or manipulative processes, e.g. a social scoring system
- Assessing the level of risk at which their AI system will operate. High-risk systems, such as those affecting critical infrastructure or software that influences workers' performance reviews, will be subject to stricter regulation
- Determining the level of autonomy the AI system will have. Some systems will require human input to prevent bias or unethical practices, some may require only partial human oversight, and others will need no human input at all
- Ensuring explainability and the use of good data sets. Opaque AI systems are unlikely to be acceptable in the Western world, so a high level of explainability will be required to reduce bias and ensure fairness
- Ongoing monitoring of AI systems with reporting protocols to ensure proper compliance management
- Adhering to labeling and conformity assessment requirements
Coverage Included
Our regulatory content in C2P is historically comprehensive, with a robust QA process to ensure quality, consistency and accuracy. Below is a high-level summary of our coverage for this topic:
- EU: Harmonised Rules on Artificial Intelligence, Draft Regulation, April 2021
- USA: Artificial Intelligence and Machine Learning In Consumer Products, CPSC Report, May 2021
- EU: Initial Appraisal of EU Commission Impact Assessment on New Proposed Artificial Intelligence Act, Briefing, July 2021
- China: Principles for the Classification of Artificial Intelligence-based Medical Software, Notice No. 47, 2021
- Canada: Proposal for Ensuring Appropriate Regulation of Artificial Intelligence, Consultation Document, February 2020
- ISO/IEC TR 24028:2020 Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence, 2020
Learn more about our Regulatory Coverage
Speak to one of our team today for more information on our regulatory content.