Will the AI Act kill innovation in European banks?

by Marine Lecomte, Offers and Innovations Manager for Financial Services, and Vincent Lefèvre, Regulatory Tribe Director, Sopra Steria Next

Artificial Intelligence (AI) has the potential to transform the banking sector - for example, by revolutionising credit scoring. But could this wave of innovation be significantly slowed down? With the AI Act, the European Union becomes the first region in the world to regulate the emerging field of AI. This regulation undoubtedly serves a noble purpose: protecting European citizens from AI-related risks such as lack of transparency, algorithmic bias and more. However, by arriving so early in AI’s development cycle, will this regulation slow the adoption of this technology in Europe, particularly in the banking sector? 

The AI Act: a generalist framework with limited added value? 

The AI Act establishes a regulatory framework for the use of artificial intelligence. This framework is the result of long debates: the initial work began in 2017, leading to a draft text in April 2021, and finally coming into effect on August 1, 2024 - seven years later. This is an eternity in the AI world, where technology evolves at lightning speed. The rapid rise of generative AI, just as the text was being finalised in late 2022, illustrates how difficult it is to legislate in a constantly evolving field. 

Barely adopted, the AI Act is already a subject of division: some find it too general, while others criticise it for being too strict compared to more flexible approaches, such as those now favoured in the United States. It is striking that the first application deadline for the European text, set for February 2, 2025, almost coincides with the revocation of the White House Executive Order on AI, highlighting the United States' prioritisation of fostering innovation above all else. 

Setting politics aside, in practice, the AI Act appears to require a level of compliance that banks can easily meet. According to Denis Beau, First Deputy Governor of the Banque de France, the requirements for so-called “high-risk” AI systems align with well-established practices, such as risk management, data governance and cybersecurity. In essence, the regulation introduces nothing radically new for the financial sector. 

Banking sector: 4 key impacts of the AI Act to anticipate 

Should the AI Act’s impact be minimised and viewed as merely an addition to existing regulations? That would be a premature conclusion. Even if it is not revolutionary, the AI Act requires rigorous and operational monitoring of AI solutions, resulting in 4 key impacts: 

1. Ban on "dangerous uses" 

Starting in February 2025, the AI Act will prohibit AI applications deemed dangerous by legislators. For instance, any processing that could lead to discrimination or disadvantage an individual based on their AI-assigned classification will be banned. Likewise, the regulation prohibits AI from “predicting the likelihood of a person committing a criminal offence solely based on profiling or the evaluation of personality traits or characteristics.” 

In practice, banks are not involved in these prohibited uses. 

2. Regulation of "high-risk uses" 

However, by August 2026, banks will need to comply with the AI Act’s requirements for “high-risk” use cases. 

For the financial sector, these use cases specifically involve creditworthiness assessments for granting loans to individuals and pricing life and health insurance. Whether you're the AI system provider developing the models or a “deployer” using them, you will be responsible for ensuring security and transparency. That means risk management processes, technical documentation, data quality monitoring, and continuous oversight. AI systems must meet strict controls to prevent discrimination or bias, be traceable, and remain under human supervision. 

This framework does not constitute a disruption but rather builds on existing regulations, such as the 2022 recommendations around credit granting and monitoring, which already established the right to a reassessment of AI-driven decisions. However, the AI Act goes further by standardising and strengthening these obligations at the European level. 

3. Supervision and oversight rules 

Supervising banks in this evolving context is also a major challenge. In France, the Autorité de Contrôle Prudentiel et de Résolution (ACPR) appears poised to play a central role. Denis Beau recently stated that “the financial supervisor itself is developing expertise and adapting its tools and methods. We will likely need to gradually establish a doctrine on new topics.” In this respect, the AI Act’s framework could facilitate interactions between banks and regulators. 

4. Increased sanctions 

The enforcement aspect reinforces the importance of proactively addressing the regulation’s requirements. The AI Act’s penalties are particularly severe: for the most serious breaches, fines can reach €35 million or 7% of global annual revenue, whichever is higher.

The AI Act: an essential framework for the banks of tomorrow 

While the AI Act does not represent a major revolution, it establishes a regulatory foundation that the banking sector cannot ignore. For banks, compliance is not just about meeting requirements but about anticipating them. This means actively following the regulator’s recommendations while taking concrete steps to align with this new framework. Key actions include raising awareness among all stakeholders about new obligations, defining roles according to the concepts of “providers” and “deployers,” and systematically identifying and classifying AI use cases. 
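The inventory step above — identifying each AI use case, its risk class, and the bank’s role as "provider" or "deployer" — can be sketched as a minimal register. The categories, field names, and example below are illustrative assumptions for banking, not the Act’s exhaustive taxonomy.

```python
from dataclasses import dataclass

# Illustrative, simplified lists: not the AI Act's full catalogue of uses.
HIGH_RISK_BANKING_USES = {"creditworthiness_assessment", "life_health_insurance_pricing"}
PROHIBITED_USES = {"social_scoring", "crime_prediction_by_profiling"}

@dataclass
class AIUseCase:
    name: str
    purpose: str  # e.g. "creditworthiness_assessment"
    role: str     # "provider" (develops the model) or "deployer" (uses it)

def classify(use_case: AIUseCase) -> str:
    """Return a coarse AI Act risk class for a banking use case."""
    if use_case.purpose in PROHIBITED_USES:
        return "prohibited"
    if use_case.purpose in HIGH_RISK_BANKING_USES:
        return "high-risk"
    return "minimal/limited risk"

scoring = AIUseCase("retail credit scoring", "creditworthiness_assessment", "deployer")
print(classify(scoring))  # prints "high-risk"
```

Even a simple register like this makes the "deployer" obligations — documentation, data-quality monitoring, human oversight — traceable per use case, which is the practical starting point for compliance.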

Now, banks must shift from pilot initiatives to industrialised deployments, making AI a driver of performance and differentiation. 

Sopra Steria is committed to supporting banks in transforming AI Act challenges into opportunities. Beyond our expertise in governance, technical audits, and regulatory compliance, we place people at the heart of this transition. We offer tailored training programs to educate and guide teams in mastering new obligations. By combining innovation, security and ethics, we help banks develop responsible AI while strengthening trust among employees and customers. 
