Artificial Intelligence (AI) refers to software that can imitate human cognition and learning, enabling devices and machines to perform tasks that would otherwise require human input.
AI is increasingly prevalent in everyday life and continues to grow in popularity and demand, with applications ranging from smart appliances and robotic manufacturing to surveillance systems and speech, image, and language processing. With innovations in AI on the rise globally, manufacturers of products that use AI software will need to ensure compliance with emerging AI regulations.
Regulators typically take a risk-based approach to AI. The swift evolution and adoption of AI bring enormous benefits, but also the potential for harm in the spheres of safety, privacy, equality, and more. This has prompted countries to begin regulating AI.
The EU is leading the way, having proposed the first legal framework for AI worldwide, one that seeks to strike a balance between excellence in AI and trust in AI. It is a broad regulation, covering the use of AI in a wide range of applications, and it will have a significant impact on how companies develop, market, and use smart digital technologies.
In the U.S., the National AI Initiative Act of 2020 (DIVISION E, SEC. 5001) became law on 1 January 2021, providing for a coordinated program across the entire Federal government to accelerate AI research and application for the Nation's economic prosperity and national security.
Manufacturers of AI products need to be aware of developing obligations including:
- Ensuring they do not deploy unacceptable-risk systems that use exploitative or manipulative processes, e.g., a social scoring system
- Assessing the level of risk at which their AI system will operate. High-risk systems, such as those affecting critical infrastructure or software that influences workers' performance reviews, will be regulated more strictly
- Determining the level of autonomy the AI system will have. Some systems will require human oversight to prevent bias or unethical practices, some may require partial human interaction, and some will not require any human input
- Ensuring explainability and the use of good data sets. Opaque AI systems are unlikely to be acceptable in the Western world, so a high level of explainability will be required to reduce bias and ensure fairness
- Ongoing monitoring of AI systems with reporting protocols to ensure proper compliance management
- Adhering to labeling and conformity assessment requirements
Learn more about our Regulatory Coverage
Speak to one of our team today for more information on our regulatory content.