AI regulation in Europe: the first comprehensive framework (Regulation (EU) 2024/1689)

What is the purpose of this new AI regulation?
To set clear, consistent rules for how AI systems are developed, used, and sold in the EU. The goal is to encourage human-centred, trustworthy AI while protecting people's health, safety, rights, democracy, and the environment, and to support innovation.

Who does this regulation apply to?
It applies to anyone who develops, sells, or uses AI systems or general-purpose AI models in the EU market.

What counts as an AI system?
An AI system is any machine-based system that uses data to make predictions, create content, give recommendations, or make decisions, on its own or with limited human input, and that can adapt.

What types of AI are banned under this regulation?
The following AI uses are not allowed in the EU:
What is a high-risk AI system (subject to restrictions)?
High-risk AI systems are systems that are used as a safety component of products regulated in the EU and that must undergo a third-party conformity assessment, or that are themselves regulated products required to undergo a third-party conformity assessment.

What kinds of AI are likely to be considered high-risk (subject to restrictions or other requirements)?
These AI systems could be high-risk if they can significantly harm people or unfairly influence decisions:
What do I need to do if I consider an AI system (that may be high-risk) to be not high-risk?
You'll need to document your assessment before placing the system on the market or putting it into service. You'll also be subject to a registration obligation.

What are the requirements for a high-risk AI system?
What else does a provider need to do?
When do these rules apply?
The regulation's main obligations apply from 2 August 2026. However, prohibited AI practices must cease from 2 February 2025.