What businesses need to know about the EU AI Act: your questions answered


Every business leader remembers the scramble to comply with the EU General Data Protection Regulation (GDPR), the groundbreaking, world-first regulation protecting EU citizens’ personal data. We have also seen the heavy fines still being levied on businesses that fail to clear the bar.

Now, a similar shift is under way in artificial intelligence (AI), with the EU introducing another world-first regulation: the EU AI Act. Designed to steer the course of this groundbreaking technology and limit its potentially damaging impact on society, the Act sets out clear responsibilities for organisations that develop or use AI.

We’ve already created a high-level summary of the legislation, but below are the key practical questions organisations need to ask to get started with AI Act compliance. 

Does my system count as an “AI system” according to the EU AI Act? 

The AI Act defines an “AI system” as a machine-based system that: 

  • is designed to operate with varying levels of autonomy 
  • may exhibit adaptiveness after deployment 
  • infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions 

The inferences made by an AI system might be based on its explicit objectives – for example, an AI-enabled image recognition classifier that labels objects in the photos it receives. They might also be based on implicit objectives the system “learns” once it is in operation. ChatGPT illustrates the latter: by imitating human-generated content and incorporating human feedback, it continually acquires implicit objectives – for example, sounding more “human” in its responses.

Who is affected by the EU AI Act? 

The EU AI Act makes a distinction between different types of entities in relation to AI:

  • Providers. AI providers are any natural or legal person, public authority, agency, or other body that develops an AI system or general-purpose AI (GPAI) model and puts it on the market or into service under its own name or trademark, whether they do this for payment or free of charge. One example of an AI provider is Google, the developer of the AI app Gemini. 
  • Deployers. This refers to any entity that uses an AI system under its own authority, except in a personal, non-professional capacity. For example, an insurance company that uses an AI model for risk calculations would be a deployer. 
  • Distributors. These are any entities in the supply chain – other than providers and importers – that make an AI system available on the EU market.  
  • Importers. These are any entities established inside the EU that put an AI system on the market when that system has the name or trademark of an entity established outside the EU.  
  • Authorised representatives. These are intermediaries between AI providers outside the EU and the EU authorities. They have a mandate from the provider to perform the tasks associated with compliance – for example, if a US-based AI developer paid a consulting firm to oversee compliance with the AI Act. 
  • Product manufacturers. These are entities that put an AI system on the market or into service with their product and under their own name or trademark.  

What do you have to do to demonstrate compliance with the EU AI Act? 

First, organisations should conduct an inventory of their current AI practices and their AI strategies for the near future. Even if you’re not using any AI systems at the moment, this will likely change in the next few years – and when it does, you need to be ready to comply with the regulation.

Based on this initial assessment, classify the systems you use or plan to use against the risk levels defined in the EU AI Act. Each level requires different actions for compliance (see the illustrative sketch after this list):

  • Unacceptable risk systems are generally prohibited under the AI Act. These include technologies considered inherently dangerous, such as AI systems that manipulate people’s behaviour or exploit their vulnerabilities. 
  • High risk systems are subject to the bulk of the obligations. For instance, they must undergo a conformity assessment before release onto the market, be registered in an EU database, demonstrate an appropriate risk management system, and show proper data governance. An example of a high-risk system is a credit scoring system. 
  • Limited risk systems must demonstrate transparency – for example, a customer service chatbot making it clear to users that it is AI. 
  • Minimal risk systems face no mandatory obligations, though providers are encouraged to adopt voluntary codes of conduct around ethical AI. 
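
To make this triage concrete, here is a minimal sketch in Python of how an organisation might record its AI inventory against the Act’s four risk tiers. The system names and tier assignments below are illustrative assumptions for the sake of the example, not legal determinations – real classification requires assessment against the Act and its Annex III use cases.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited: may not be placed on the EU market"
    HIGH = "conformity assessment, EU database registration, risk management, data governance"
    LIMITED = "transparency obligations, e.g. disclosing that users are interacting with AI"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct encouraged"

# Illustrative inventory: these classifications are assumptions
# for the sake of the example, not legal advice.
ai_inventory = {
    "credit-scoring-model": RiskTier.HIGH,         # Annex III use case
    "customer-service-chatbot": RiskTier.LIMITED,  # must disclose it is AI
    "spam-filter": RiskTier.MINIMAL,               # routine, low-impact use
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```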

There are also separate requirements for all GPAI providers – for instance, compliance with the Copyright Directive and the provision of technical documentation.

When is the deadline for me to demonstrate compliance with the EU AI Act? 

The EU AI Act itself came into effect in August 2024, but its compliance deadlines are phased in from 2025. After each of the dates below, any new AI models in scope must be compliant at the time of launch. Here’s the basic timeline:

  • Prohibited systems must be offline by February 2025 
  • Rules for GPAI models start to apply from August 2025 
  • Rules for high-risk AI systems under Annex III start to apply from August 2026 
  • Rules for high-risk AI systems under Annex I and large-scale IT systems under Annex X start to apply from August 2027 

However, the law does give you a grace period. For instance, if you place a GPAI model on the market before 2 August 2025, you have until 2 August 2027 to comply. For more details, read our expert blog.

What are the penalties for non-compliance with the EU AI Act? 

The penalties for non-compliance with the EU AI Act are severe. Companies found to be deploying prohibited AI systems will face fines of up to €35 million or 7% of their total worldwide annual turnover for the previous year, whichever is higher.  

Breaches of most other obligations under the Act carry fines of up to €15 million or 3% of turnover, and supplying incomplete, incorrect, or misleading information to regulators can cost up to €7.5 million or 1% of turnover – again, whichever is higher.
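
As a worked example of the “whichever is higher” rule, the sketch below computes the maximum exposure at each penalty tier for a hypothetical company. The turnover figure is an assumption purely for illustration.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and the given percentage of
    total worldwide annual turnover, as the AI Act prescribes."""
    return max(cap_eur, turnover_eur * pct)

turnover = 600_000_000  # hypothetical worldwide annual turnover

# Deploying a prohibited system: up to EUR 35m or 7%, whichever is higher
print(f"Prohibited system: EUR {max_fine(turnover, 35_000_000, 0.07):,.0f}")  # EUR 42,000,000

# Most other obligations: up to EUR 15m or 3%
print(f"Other obligations: EUR {max_fine(turnover, 15_000_000, 0.03):,.0f}")  # EUR 18,000,000

# Misleading information to regulators: up to EUR 7.5m or 1%
print(f"Misleading info:   EUR {max_fine(turnover, 7_500_000, 0.01):,.0f}")   # EUR 7,500,000
```

Note that for this turnover, the percentage exceeds the fixed cap at the top two tiers, while the fixed cap dominates at the lowest tier.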

This legislation is only the beginning, as it is likely to set a precedent for AI regulation around the world. Being proactive about compliance and baking ethical practices into your AI strategies from the start is crucial to long-term business success with AI, because AI needs a human hand to steer its course.  

To find out how to do this for your organisation, read our report.

 

