The Regulation uses a risk-based classification system to determine the level of regulatory oversight required:
- Unacceptable risk: AI systems deemed a clear threat to safety, fundamental rights or democratic values are banned outright.
- High risk: Systems used in sensitive areas such as healthcare, law enforcement or employment are subject to strict requirements, including risk assessments, documentation and human supervision.
- Limited risk: These systems must meet transparency requirements, such as informing users that they are interacting with AI.
- Minimal risk: Most AI systems fall into this category and are subject to minimal regulatory intervention.
This tiered approach balances innovation with the need to protect individuals and uphold public trust. Of these categories, high-risk AI systems attract the most stringent compliance obligations because of their potential impact on health, safety and fundamental rights.
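To make the tiered structure concrete, an organisation's internal compliance tooling might record each system's tier and the broad obligations attached to it along the lines of the sketch below. This is purely illustrative: the RiskTier labels, the obligation strings and the obligations_for helper are hypothetical names chosen for this example, not terms defined in the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical internal labels for the Regulation's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # little or no intervention

# Illustrative mapping from tier to the broad obligations described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk assessment", "documentation", "human supervision"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
# ['risk assessment', 'documentation', 'human supervision']
```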
Examples of high-risk applications include:
- Financial services: AI used in credit scoring and loan approvals is considered high risk because it shapes individuals' access to credit and relies on sensitive personal data.
- Healthcare: Systems that assess patient eligibility or support clinical decision-making are classified as high risk because they can affect access to vital medical services.
- Transportation: AI in autonomous vehicles and traffic control systems must meet high safety and reliability standards to prevent harm.
- Critical infrastructure: AI managing utilities or public services must be designed with robust safeguards to ensure resilience and public safety.
Organisations must adopt rigorous governance practices to comply with the Act, ensuring transparency, fairness and accountability in all AI-related activities.
Requirements for high-risk AI systems
Key obligations include:
- Risk assessments: Providers must establish robust risk management frameworks to identify, evaluate and address risks during development, deployment and use.
- Human supervision: Systems must include mechanisms that enable meaningful human intervention, allowing operators to monitor and override AI decisions where necessary.
- Transparency measures: Clear documentation must explain how the system functions, enabling users and regulators to understand its decision-making processes.
- Data governance: High-quality, representative datasets are required to reduce bias and ensure data accuracy and relevance.
- Mandatory documentation: Comprehensive technical records and logging systems must be maintained to support traceability and demonstrate regulatory compliance.
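Of these obligations, logging is the most directly mechanisable. The following is a minimal sketch, assuming a Python-based system, of how each automated decision might be written to an audit trail; the field names and the example values are hypothetical, not prescribed by the Regulation.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured logger supporting traceability of automated decisions.
logger = logging.getLogger("ai_audit_trail")
logging.basicConfig(level=logging.INFO)

def log_decision(system_id: str, input_summary: dict, output: dict, operator: str) -> None:
    """Record one automated decision so it can be reviewed or audited later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,  # summarise rather than log raw personal data
        "output": output,
        "responsible_operator": operator,
    }
    logger.info(json.dumps(record))

# Example: logging a single credit-scoring decision.
log_decision(
    system_id="credit-scoring-v2",
    input_summary={"features_used": 14, "data_source": "application_form"},
    output={"score_band": "B", "decision": "refer_to_human"},
    operator="loans-team@example.org",
)
```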
Obligations for providers and users of AI systems
Providers are responsible for developing AI systems and placing them on the market. Their obligations include:
- Conducting risk assessments and establishing risk management systems;
- Preparing and maintaining technical documentation and records;
- Ensuring high-quality data governance to minimise bias and ensure accuracy (a simple check is sketched after this list); and
- Providing users with clear information about how to interact with the AI system and how to interpret its outputs.
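To illustrate the data-governance point, a provider might run simple checks on historical decisions to spot skewed outcomes, for example by comparing approval rates across groups. The sketch below is a deliberately simplified, hypothetical check; real data governance would involve richer fairness metrics, representative sampling and domain review.

```python
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Given records like {"group": "A", "approved": True}, return approval rate per group."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": True},
]
print(approval_rates(sample))
# {'A': 0.5, 'B': 1.0} -> a large gap like this would prompt further investigation
```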
Users are responsible for deploying and operating AI systems. Their duties include:
- Following the provider’s instructions for proper system use;
- Implementing human oversight and technical safeguards;
- Conducting assessments to identify potential impacts on fundamental rights; and
- Monitoring system performance and reporting serious incidents or malfunctions.
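As a sketch of the monitoring duty, a user might track live error rates and generate an internal record when performance crosses an agreed threshold. The IncidentReport structure, the 5% threshold and the severity label below are illustrative assumptions, not values taken from the Regulation, and any real report would follow the organisation's own escalation and notification procedures.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Hypothetical record a deployer might keep for a serious incident."""
    system_id: str
    description: str
    detected_at: str
    severity: str

def check_error_rate(system_id: str, errors: int, total: int,
                     threshold: float = 0.05) -> IncidentReport | None:
    """Flag an incident when the observed error rate exceeds an agreed threshold."""
    rate = errors / total if total else 0.0
    if rate > threshold:
        return IncidentReport(
            system_id=system_id,
            description=f"Error rate {rate:.1%} exceeded threshold {threshold:.1%}",
            detected_at=datetime.now(timezone.utc).isoformat(),
            severity="serious",
        )
    return None

report = check_error_rate("triage-assistant-v1", errors=12, total=150)
if report:
    print("Escalate internally and notify the relevant parties:", report)
```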
Clearly defining and understanding these roles helps organisations allocate responsibilities effectively, avoid compliance errors and support the safe, transparent and ethical use of AI technologies.