The EU AI Act

18 November 2025

What it is and why it matters for your organisation

What is the EU AI Act?

The EU AI Act is a pioneering regulation designed to ensure that AI systems used in the European Union are “safe, transparent, traceable, non-discriminatory and environmentally friendly”. The law sets out risk-based rules for AI providers and users, and its extraterritorial scope means it applies to any AI system offered in the EU, regardless of where the provider is based, making it a global benchmark for AI regulation.

Why was the EU AI Act created?

The EU AI Act fills a regulatory gap by establishing clear rules to protect fundamental rights, promote transparency and ensure the ethical use of AI technologies across all sectors.

Key risks addressed by the Act include:

  • Algorithmic bias: AI systems that produce discriminatory outcomes, particularly in areas like hiring or law enforcement.
  • Privacy violations: Risks posed by AI-driven surveillance and intrusive data processing.
  • Unethical applications: Using AI for social scoring, behavioural manipulation or other practices that undermine individual autonomy.

Key requirements of the EU AI Act

The Regulation uses a risk-based classification system to determine the level of regulatory oversight required:

  • Unacceptable risk: AI systems deemed a clear threat to safety, fundamental rights or democratic values are banned outright.
  • High risk: Systems used in sensitive areas such as healthcare, law enforcement or employment are subject to strict requirements, including risk assessments, documentation and human supervision.
  • Limited risk: These systems must meet transparency requirements, such as informing users that they are interacting with AI.
  • Minimal risk: Most AI systems fall into this category and are subject to minimal regulatory intervention.

This tiered approach balances innovation with the need to protect individuals and uphold public trust. Of the four categories, high-risk AI systems carry the most stringent compliance obligations because of their potential impact on health, safety and fundamental rights.
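
To make the tiers concrete, the sketch below shows how an organisation might record an internal AI inventory against the Act's four categories. It is a minimal illustration in Python: the RiskTier and AISystem names are assumptions for the example, not terminology defined by the Regulation, and real classification decisions require a documented legal assessment.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk-based classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # little or no intervention


@dataclass
class AISystem:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    tier: RiskTier


# Illustrative entries; the tier assigned in practice depends on a
# documented assessment, not on the system's name or purpose alone.
inventory = [
    AISystem("cv-screener", "Ranks job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "Answers customer queries", RiskTier.LIMITED),
    AISystem("spam-filter", "Filters inbound email", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value} risk")
```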

Examples of high-risk applications include:

  • Financial services: AI used in credit scoring and loan approvals is considered high risk due to its influence on financial access and data protection.
  • Healthcare: Systems that assess patient eligibility or support clinical decision-making are classified as high risk because they can affect access to vital medical services.
  • Transportation: AI in autonomous vehicles and traffic control systems must meet high safety and reliability standards to prevent harm.
  • Critical infrastructure: AI managing utilities or public services must be designed with robust safeguards to ensure resilience and public safety.

Organisations must adopt rigorous governance practices to comply with the Act, ensuring transparency, fairness and accountability in all AI-related activities.

Requirements for high-risk AI systems

High-risk AI systems must meet strict requirements before they can be placed on the EU market or put into use.

Key obligations include:

  • Risk assessments: Providers must establish robust risk management frameworks to identify, evaluate and address risks during development, deployment and use.
  • Human supervision: Systems must include mechanisms that enable meaningful human intervention, allowing operators to monitor and override AI decisions where necessary.
  • Transparency measures: Clear documentation must explain how the system functions, enabling users and regulators to understand its decision-making processes.
  • Data governance: High-quality, representative datasets are required to reduce bias and ensure data accuracy and relevance.
  • Mandatory documentation: Comprehensive technical records and logging systems must be maintained to support traceability and demonstrate regulatory compliance (see the sketch below).
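
As a rough sketch of the traceability and logging obligations above, the following Python snippet records each automated decision together with any human override. The DecisionLogEntry fields are illustrative assumptions; the Act requires logging and human oversight for high-risk systems but does not prescribe a particular schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionLogEntry:
    """A single traceability record for a high-risk AI decision."""
    system_name: str
    input_summary: str    # what the system was asked to decide
    output: str           # the decision the system produced
    model_version: str    # supports reproducibility of the decision
    overridden_by: Optional[str] = None  # operator who intervened, if any
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


log: list[DecisionLogEntry] = []

entry = DecisionLogEntry(
    system_name="credit-scorer",
    input_summary="loan application #1042",
    output="declined",
    model_version="2.3.1",
)

# A reviewer exercises human oversight and reverses the decision;
# both the original output and the override are preserved in the log.
entry.overridden_by = "j.smith"
entry.output = "approved after manual review"
log.append(entry)
```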

Obligations for providers and users of AI systems

Providers are responsible for developing AI systems and placing them on the market. Their obligations include:

  • Conducting risk assessments and establishing risk management systems;
  • Preparing and maintaining technical documentation and records;
  • Ensuring high-quality data governance to minimise bias and ensure accuracy; and
  • Providing transparency about how users interact with the AI system.

Users are responsible for deploying and operating AI systems. Their duties include:

  • Following the provider’s instructions for proper system use;
  • Implementing human oversight and technical safeguards;
  • Conducting assessments to identify potential impacts on fundamental rights; and
  • Monitoring system performance and reporting serious incidents or malfunctions.

Clearly defining and understanding these roles helps organisations allocate responsibilities effectively, avoid compliance errors and support the safe, transparent and ethical use of AI technologies.
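
To illustrate the user-side duty to monitor performance and report serious incidents, here is a minimal Python sketch. The Incident record and the report_serious_incident helper are hypothetical; the Act mandates reporting of serious incidents but leaves the internal mechanism to the organisation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Incident:
    """A hypothetical record of an AI system malfunction or harm event."""
    system_name: str
    description: str
    serious: bool  # e.g. harm to health, safety or fundamental rights
    occurred_at: datetime


def report_serious_incident(incident: Incident) -> None:
    """Placeholder for escalation to the provider and the competent authority."""
    print(f"ESCALATE: {incident.system_name} - {incident.description}")


def monitor(incidents: list[Incident]) -> None:
    # Serious incidents are escalated; minor issues would instead be
    # logged for routine performance review.
    for incident in incidents:
        if incident.serious:
            report_serious_incident(incident)


monitor([
    Incident(
        system_name="triage-assistant",
        description="Incorrectly deprioritised an urgent case",
        serious=True,
        occurred_at=datetime.now(timezone.utc),
    ),
])
```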

Who does the EU AI Act apply to?

Geographical scope and global applicability

The EU AI Act applies to all organisations whose AI systems are used or have an impact within the EU. This extraterritorial reach ensures that all AI systems that affect people within the EU meet the same regulatory standards, regardless of origin.

Non-EU organisations must comply if they:

  • Place AI systems on the EU market; or
  • Use AI systems whose output affects people within the EU.

Consequences of non-compliance

Financial and legal penalties

The EU AI Act introduces a tiered penalty structure designed to enforce compliance and deter irresponsible AI use. Fines are proportionate to the severity of the breach:

  • Up to €35 million (about £30 million) or 7% of annual global turnover for engaging in prohibited practices, such as social scoring or manipulative AI systems that pose serious risks to individual rights (see the worked example after this list).
  • Up to €15 million (about £13 million) or 3% of annual global turnover for failing to meet key obligations, including risk assessments, transparency measures and technical documentation.
  • Up to €7.5 million (about £6.5 million) or 1% of annual global turnover for supplying incorrect, incomplete or misleading information to regulators.
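
For large undertakings, the turnover-based cap is usually the operative one, because the higher of the two amounts applies. The short Python sketch below works through the arithmetic; the turnover figure is illustrative.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
    """Applicable maximum: the higher of the fixed cap and the
    percentage of annual global turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)


# Prohibited-practice tier for a company with EUR 2 billion global turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million fixed cap.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```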

Operational and reputational damage

Beyond the risk of fines, organisations that fail to comply with the Act also face:

  • Reputational damage: Public exposure of unethical AI practices, such as biased hiring algorithms, can erode brand trust and deter both customers and prospective employees.
  • Loss of customer and partner trust: Non-compliance can strain business relationships, reduce investor confidence and limit opportunities for collaboration.
  • Operational disruption: AI systems that fail to meet regulatory standards may be withdrawn from use, causing costly delays and impacting service delivery.

Why complying creates competitive advantage

Operational and strategic benefits of proactive compliance

Proactively aligning your AI usage with the EU AI Act provides a range of strategic and operational benefits, including:

  • Streamlined audit readiness: Early preparation ensures that documentation, risk assessments and supervision processes are in place, reducing the likelihood of fines and reputational harm.
  • Improved risk management: Identifying and addressing AI-related risks early minimises the chance of operational disruption and supports safer system deployment.
  • Competitive advantage: Organisations that adopt the Act’s requirements ahead of time can demonstrate leadership in ethical AI, building trust with customers, investors and partners.

How organisations can prepare for compliance

Begin by evaluating your current AI practices and identifying areas for improvement. A structured approach helps ensure alignment with regulatory requirements and supports responsible AI deployment.

Recommended steps include:

  • Conduct a gap analysis: Review existing AI policies, procedures and systems to identify where current practices fall short of the Act’s requirements, particularly for high-risk applications.
  • Classify AI systems: Assess each AI system to determine its risk category – unacceptable, high, limited or minimal – and apply the corresponding compliance measures.
  • Develop or update an AI policy: Include restrictions on prohibited uses, outline ethical guidelines and ensure policies reflect your organisation’s commitment to transparency and accountability.
  • Establish AI governance structures: Appoint a dedicated team or officer to oversee AI compliance, manage risk and maintain communication with stakeholders.

These steps lay the foundation for long-term compliance, help mitigate regulatory and reputational risks, and demonstrate leadership in ethical AI use.
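
As a starting point for the gap analysis and classification steps above, a simple requirements-versus-status map can surface the biggest gaps. The requirement names and statuses in this Python sketch are illustrative assumptions, not an official checklist.

```python
# A minimal gap-analysis sketch: map headline requirements for a
# high-risk system to the organisation's current status.
gap_analysis = {
    "Risk management framework": "in place",
    "Human oversight mechanism": "partial",
    "Technical documentation": "missing",
    "Data governance controls": "partial",
    "Decision logging": "missing",
}

# Anything short of "in place" is flagged for remediation.
for requirement, status in gap_analysis.items():
    if status != "in place":
        print(f"Gap: {requirement} ({status})")
```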

How ISO 42001 supports EU AI Act compliance

ISO 42001 provides a structured framework that aligns closely with the requirements of the EU AI Act, particularly around risk management and ethical AI deployment. By adopting this standard, organisations can establish the processes needed to manage AI responsibly and demonstrate regulatory compliance.

Key areas of alignment include:

  • AI governance structures: ISO 42001 helps define roles, responsibilities and supervision mechanisms that support accountability and transparency, echoing the governance expectations of the EU AI Act.
  • Risk assessments: The Standard requires organisations to identify, evaluate and mitigate AI-related risks – an essential step for managing high-risk AI systems under the Act.
  • Continual improvement: Regular system reviews and updates ensure ongoing compliance as regulatory requirements and technologies evolve.

Adopting ISO 42001 therefore gives organisations a certifiable framework for demonstrating that their AI governance meets the expectations of the EU AI Act.

How we can help

We offer a tailored suite of ISO 42001 services to support you at every stage. Whether you're starting from scratch or preparing for certification, our consultants can help you build a management system that supports strong governance, regulatory alignment and long-term trust in your AI initiatives.