
Regulatory and Product Safety Updates

Stay informed with updates on global regulatory changes and product safety trends


Are you familiar with the EU's new harmonised rules on Artificial Intelligence (the AI Regulation) and how they might impact your products?

10/4/2025

 

AI regulation in Europe: the first comprehensive framework

Regulation (EU) 2024/1689

What is the purpose of this new AI regulation?
To set clear, consistent rules for how AI systems are developed, used, and sold in the EU. The goal is to encourage human-centred, trustworthy AI while protecting people’s health, safety, rights, democracy, and the environment—and supporting innovation.

Who does this regulation apply to?
It applies to anyone who develops, imports, distributes, sells, or uses AI systems or general-purpose AI models on the EU market, including providers established outside the EU whose systems are placed on the EU market or whose output is used in the EU.

What counts as an AI system?
An AI system is any machine-based system that operates with some degree of autonomy, infers from the data it receives how to produce predictions, content, recommendations, or decisions, and may adapt after deployment.

What types of AI are banned under this regulation?
The following AI uses are not allowed in the EU:
  • Manipulative AI: AI that uses hidden or deceptive techniques to influence decisions in harmful ways.
  • Exploiting vulnerabilities: AI that takes advantage of a person’s age, disability, or social/economic status to influence decisions in harmful ways.
  • Social scoring: AI that ranks people based on behaviour or traits, leading to unfair treatment.
  • Predicting crime: AI that judges whether someone might commit a crime based only on profiling or personality traits (except when supporting decisions based on verified facts).
  • Mass facial recognition: AI that builds facial recognition databases by scraping the internet or CCTV footage.
  • Emotion detection at work or school: AI that reads emotions in workplaces or education settings, unless it’s for medical or safety reasons.
  • Biometric categorisation: AI that categorises people using sensitive data (like race or beliefs), unless it’s part of a lawful, specific dataset for law enforcement.
  • Real-time biometric ID in public spaces by law enforcement, unless strictly necessary for things like searching for a missing child or preventing a terrorist threat.

What is a high-risk AI system (subject to restrictions)?
High-risk AI systems are:
AI systems that are either used as a safety component of a product regulated under EU harmonisation legislation and required to undergo a third-party conformity assessment, or are themselves such regulated products subject to a third-party conformity assessment.

What kinds of AI are likely to be considered high-risk (subject to restrictions or other requirements)?
These AI systems could be high-risk if they can significantly harm people or unfairly influence decisions (a rough triage sketch follows the list):
  • Biometric tech: Systems for identifying people remotely or categorising them based on sensitive traits.
  • Emotion recognition: AI that detects emotions in people.
  • Critical infrastructure: AI used to manage roads, electricity, water, gas, or digital systems.
  • Education: AI used for admissions, evaluating students, guiding learning, or monitoring test conduct.
  • Employment: AI used for hiring, promotions, firing, monitoring workers, or assigning tasks.
  • Essential services: AI that decides access to public services like healthcare or financial support.
  • Credit and insurance: AI used to assess credit scores or determine life/health insurance risk.
  • Emergency response: AI used to assess emergency calls, triage patients, or dispatch services.
  • Law enforcement: AI used to assess crime risk, detect lies, evaluate evidence, or create profiles.
  • Migration and border control: AI used to assess risks, review visa/asylum applications, or identify people crossing borders.
  • Justice system: AI used by judges to apply laws, assess evidence, or help resolve disputes.
  • Elections: AI used to influence voting decisions or election outcomes.
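
None of the code below comes from the regulation itself; it is a minimal, hypothetical sketch (all names are my own invention) of how a provider might encode an internal triage checklist for these categories, to flag systems that deserve a full legal assessment:

    from dataclasses import dataclass

    # Hypothetical internal triage checklist. The area names paraphrase the
    # categories above and are NOT the legal text; a match means "do a full
    # high-risk assessment", not "the system is legally high-risk".
    HIGH_RISK_STYLE_AREAS = {
        "biometric identification or categorisation",
        "emotion recognition",
        "critical infrastructure management",
        "education and vocational training",
        "employment and worker management",
        "access to essential services",
        "credit scoring or life/health insurance pricing",
        "emergency call evaluation and dispatch",
        "law enforcement",
        "migration, asylum, and border control",
        "administration of justice",
        "elections and voting behaviour",
    }

    @dataclass
    class AISystemProfile:
        name: str
        intended_purpose: str
        areas: set[str]  # which of the areas above the system touches

    def needs_high_risk_review(profile: AISystemProfile) -> bool:
        """Flag a system for a detailed high-risk assessment if its
        intended purpose falls in any of the listed areas."""
        return bool(profile.areas & HIGH_RISK_STYLE_AREAS)

    cv_screener = AISystemProfile(
        name="CVRanker",
        intended_purpose="rank job applicants for interview shortlisting",
        areas={"employment and worker management"},
    )
    print(needs_high_risk_review(cv_screener))  # True -> document a full assessment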
 
What do I need to do if I consider an AI system (that may be high-risk) to be not high-risk?
You’ll need to document your assessment before placing the system on the market or putting it into service. You'll also be subject to a registration obligation.
 
What are the requirements for a high-risk AI system?
  1. A risk management system
  2. Data governance and management practices
  3. Technical documentation
  4. Specific record-keeping (automatic event logs; see the sketch after this list)
  5. Transparency and provision of information to users
  6. Human oversight
  7. Accuracy, robustness, and cybersecurity throughout the system’s lifecycle
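
To make the record-keeping point (item 4) concrete: high-risk systems must automatically record events over their lifetime so that their operation can be traced and reviewed. Purely as an illustrative sketch, with a made-up schema that the regulation does not prescribe, automatic event logging might look like this:

    import json
    import logging
    from datetime import datetime, timezone

    # Illustrative audit logging for a high-risk AI system. The field names
    # are hypothetical; the regulation requires traceable event logs but
    # does not mandate this particular format.
    logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)
    logger = logging.getLogger("ai_system.audit")

    def log_decision_event(system_id: str, input_ref: str, output: str,
                           reviewer: str) -> None:
        """Record one decision event with a UTC timestamp for later review."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "input_ref": input_ref,      # a reference to the input, not the raw data
            "output": output,
            "human_reviewer": reviewer,  # ties in with the human-oversight requirement
        }
        logger.info(json.dumps(event))

    log_decision_event("CVRanker-v2", "application-8841", "shortlisted", "hr_reviewer_03")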
 
What else does a provider need to do? (A sketch grouping these administrative items follows the list.)
  • Indicate their trade name and address on the product, packaging, or accompanying documentation
  • Have a quality management system in place
  • Keep logs
  • Undergo a conformity assessment procedure
  • Draw up an EU declaration of conformity
  • Affix the CE mark
  • Register the product and authorised representative
  • Appoint an authorised representative in the EU (if not established there already)
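
Several of these items (trade name and address, the quality management system reference, the declaration of conformity, CE marking, registration) are in practice structured product metadata. As a purely hypothetical sketch of keeping them in one machine-readable record, with field names of my own invention:

    # Hypothetical compliance record grouping the administrative obligations
    # above; none of these field names are prescribed by the regulation.
    compliance_record = {
        "provider": {
            "trade_name": "Example AI Ltd",    # on product, packaging, or docs
            "address": "1 Example Street, Dublin, Ireland",
            "authorised_representative": "EU RepCo BV, Amsterdam, Netherlands",
        },
        "system": {
            "name": "CVRanker",
            "version": "2.1.0",
            "quality_management_system": "QMS-2025-001",
        },
        "conformity": {
            "assessment_completed": True,
            "eu_declaration_of_conformity": "DoC-CVRanker-2025.pdf",
            "ce_mark_affixed": True,
            "eu_database_registration_id": None,  # filled in once registered
        },
    }

    # Simple completeness check before placing the system on the market.
    outstanding = [key for key, value in compliance_record["conformity"].items()
                   if value in (None, False)]
    print("Outstanding conformity steps:", outstanding)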
 
When do these rules apply?
The prohibitions on banned AI practices apply from 2 February 2025, and obligations for general-purpose AI models apply from 2 August 2025. The regulation's main obligations, including most high-risk requirements, apply from 2 August 2026, with the rules for high-risk AI embedded in regulated products (Annex I) following on 2 August 2027.
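
As a trivial worked example of this timeline (dates as above; the function itself is only an illustration), you can map any date to the obligations already in force:

    from datetime import date

    # Key application dates from the regulation's transition provisions.
    MILESTONES = [
        (date(2025, 2, 2), "prohibited AI practices must have ceased"),
        (date(2025, 8, 2), "general-purpose AI model obligations apply"),
        (date(2026, 8, 2), "main obligations, incl. most high-risk rules, apply"),
        (date(2027, 8, 2), "rules for high-risk AI in Annex I products apply"),
    ]

    def obligations_in_force(on: date) -> list[str]:
        """Return the milestones already applicable on a given date."""
        return [label for start, label in MILESTONES if on >= start]

    print(obligations_in_force(date(2026, 9, 1)))  # first three milestones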




