AI data and ethics: Financial services firms must ready themselves for looming AI laws

Banks and insurers in the EU need to move quickly to ensure their AI systems comply with looming regulatory requirements. But their commitment to responsible AI should go beyond regulatory compliance and instead support an ethical approach to business.

19/12/2024 Perspective
Simon Cashmore
Qorus Copywriter

Financial services firms are eager to adopt AI to improve their operational efficiency and enhance customer service. However, few organizations are prepared for regulations that will soon require them to record all AI systems associated with their businesses and determine their level of risk.

The European Union AI Act, the world's first comprehensive AI law, came into force in August 2024. Companies must comply with its initial requirements by February 2025, and further obligations take effect in stages over the next six years (see diagram below).

“The EU AI Act doesn’t just affect companies in Europe. It also applies to organizations whose AI output is used in the EU, regardless of where they are headquartered,” says Bernadette Wesdorp, Financial Services AI Leader and Director at EY.

The organizations most affected by the AI Act, according to Wesdorp, are “providers” that develop AI systems and “deployers” that use them. The EU regulations are intended to ensure that AI systems are safe, transparent and non-discriminatory. Breaches of the EU AI Act can result in fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.
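
For firms starting to build the inventory the Act implies, even a minimal register recording each system's owner, role and risk tier can anchor the compliance work. The sketch below is illustrative only: the Act defines the four risk categories, but the schema, field names and example entry here are our own assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four risk categories defined by the EU AI Act
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. credit scoring of natural persons
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # no additional obligations

class Role(Enum):
    PROVIDER = "provider"  # develops the AI system
    DEPLOYER = "deployer"  # uses the AI system under its own authority

@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable business unit
    role: Role
    risk_tier: RiskTier
    vendor: str | None = None  # external supplier, if any

# Hypothetical entry: creditworthiness assessment of natural persons
# is listed as high-risk under the Act
register = [
    AISystemRecord(
        name="retail-credit-scoring-v3",
        owner="Retail Lending",
        role=Role.DEPLOYER,
        risk_tier=RiskTier.HIGH,
        vendor="ExampleVendor Ltd",
    ),
]
```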

“At EY we view responsible AI through the lens of the three Rs – regulation, reputation and realization.” Bernadette Wesdorp, EY

At a recent online event hosted by the Qorus Reinvention Community, Wesdorp noted that only 11% of businesses are fully prepared for the new AI regulations, while a further 42% have only limited preparedness.

Wesdorp adds that companies should employ responsible AI processes and practices not merely to meet regulatory requirements, but as part of the ethical framework that underpins how they conduct business.

“At EY we view responsible AI through the lens of the three Rs – regulation, reputation and realization. Regulation is important, but unethical AI practices can also carry serious reputational risks. And at the end of the day, companies want to realize value, and the best way to do that is to build highly trusted, high-performing technologies.”

The critical importance of trust in AI applications was highlighted by many of the banking executives who attended the online event. While more than two thirds of the executives polled at the event are confident the banking industry is implementing AI ethically, the remainder have doubts. 

According to recent EY research, 86% of businesses have yet to put AI ethics frameworks or functions in place. What’s more, 66% of businesses lack clear senior management accountability for AI implementations.

“Responsible AI and ethics teams can appear to be a kind of censor that somehow stops fast innovation but in our experience that’s not true.” Nicole Inverardi, Intesa Sanpaolo.

Intesa Sanpaolo, Yapi Kredi, Scotiabank and Nedbank representatives outlined their organizations’ approach to responsible AI and ethics during the Qorus online event. They stressed the importance of ensuring that AI systems and the data they generate are accurate, fair and unbiased. But they acknowledged major challenges to achieving those goals.

“Responsible AI and ethics teams can appear to be a kind of censor that somehow stops fast innovation but in our experience that’s not true. Strong co-operation between them and developers can create guardrails from the beginning of projects that ensure they run smoothly. That’s important because building AI systems is not cheap,” says Nicole Inverardi, Data Science and AI Ethicist at Intesa Sanpaolo.

Six challenges when implementing responsible AI

Speakers at the event identified six major challenges that confront financial services firms when they roll out responsible AI and ethical frameworks.

1. Assessing AI risk among vendors

“AI vendor risk is a very different beast to internal risk. I thought managing internal risk was difficult, making sure everyone was on the same page and dealing with a lot of stakeholders. But you don’t realize how much control you actually have until you start trying to get information from vendors,” says Julian Granka Ferguson, Acting Director, Data Ethics and Responsible AI, at Scotiabank in Canada.

Ferguson adds that the advent of GenAI has increased the challenge of keeping track of AI vendor risk.

“Financial institutions rely on a lot of vendors. You have to work with new vendors coming on board but you also have to work with plenty of vendors that already have contracts with the bank. Now with GenAI, existing vendors can suddenly flip a switch and they have a new AI service. How are we to know they've just started to leverage AI?”

2. Identifying bias in AI data

“We can do bias assessment on data, but how can we do that if the training set data is external or undefined, as it is in the case of large language models?” says Intesa Sanpaolo’s Inverardi.

Prenton Chetty, Head of Data Science at Nedbank in South Africa, points out that the challenge of identifying bias is particularly demanding in a multi-ethnic country where much of the publicly available data is systemically biased.

“We've got a historically biased past because of apartheid. We don't want to perpetuate biases that are historically in our data. You can build the best models but if the underlying data is biased you have a problem. We are trying to rectify that, and make the data as blind and open as possible, but it’s a massive challenge.”
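
What a basic bias assessment looks like in practice can be shown with a small, hedged sketch: a demographic-parity check over a labeled dataset. The column names and data below are illustrative assumptions, passing one such metric does not make a dataset fair, and, as Inverardi notes, the check is not even possible when the training data is external or undefined.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Largest difference in positive-outcome rates across groups.

    A gap near 0 suggests parity on this single metric only; it does
    not establish that the data, or any model trained on it, is fair.
    """
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Illustrative data: 'approved' is a binary outcome, 'group' a protected attribute
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})

print(f"Demographic parity gap: {demographic_parity_gap(df, 'approved', 'group'):.2f}")
# Prints 0.50 here: group A is approved at a rate of 0.75, group B at 0.25
```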

3. Implementing effective AI user controls 

Nearly 70% of businesses expect GenAI to have a highly significant impact on their productivity, according to EY. But 80% of firms acknowledge that their workforce has moderate or no expertise in deploying such AI systems. Controlling how employees access and use GenAI systems could be a major headache for many financial services firms.

“Some organizations are very hesitant about employees using anything connected with AI and they’ll try to block access. But people will find a way to use it and then employers won’t be in control. I think shadow IT might be a bigger risk than AI used responsibly in an organization,” says EY’s Wesdorp.
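
One pattern consistent with Wesdorp's point is to govern GenAI use rather than ban it: route employee prompts through an internal gateway that logs requests, enforces a model allowlist and screens for obviously sensitive data. The sketch below is a rough illustration under assumed names and deliberately naive patterns, not a production control.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-gateway")

APPROVED_MODELS = {"internal-llm-v1"}  # hypothetical allowlist

# Naive, illustrative patterns; a real deployment needs proper PII detection
PII_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                        # bare card-like numbers
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
]

def route_prompt(user: str, model: str, prompt: str) -> str | None:
    """Gate an employee prompt: enforce the allowlist, screen it, log the outcome."""
    if model not in APPROVED_MODELS:
        log.warning("blocked: %s requested unapproved model %s", user, model)
        return None
    if any(p.search(prompt) for p in PII_PATTERNS):
        log.warning("blocked: prompt from %s appears to contain sensitive data", user)
        return None
    log.info("allowed: %s -> %s", user, model)
    return prompt  # forward to the approved internal endpoint
```

The design choice matters more than the code: visibility and logging keep usage inside governed channels, which is the opposite of the shadow IT Wesdorp warns about.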

4. Defining fairness

While responsible AI and ethics teams all strive to promote fairness, interpretations of what fairness means and implies vary.

“I think the best an organization can do is to have a sort of beginning-to-end story of an ethical rationale that runs throughout their responsible AI process. So, if someone tries to make a claim that the organization has acted unfairly, you can confidently respond by saying: ‘No, that’s not what happened. Here’s the rationale behind what we do.’ You can then tackle the situation right away. If you don’t, things can get messy,” says Ferguson at Scotiabank.
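
Making that beginning-to-end rationale concrete can be as simple as attaching an auditable record to each stage of the model lifecycle, so the chain of reasoning Ferguson describes can be produced on demand. A minimal sketch, with stage names and fields that are our own assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RationaleEntry:
    stage: str       # e.g. "data selection", "feature review", "deployment"
    decision: str    # what was decided
    rationale: str   # why, in plain language a reviewer can cite later
    author: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

audit_trail: list[RationaleEntry] = []

# Hypothetical entry recording a fairness-motivated design decision
audit_trail.append(RationaleEntry(
    stage="feature review",
    decision="excluded postal code as a model input",
    rationale="acts as a proxy for protected attributes in historical lending data",
    author="model-risk-team",
))
```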

5. Ensuring transparency

Half the banking executives polled during the online event believe greater transparency in AI-driven decisions should be a top priority for regulators. Financial services firms need to ensure the output of their AI systems is not only fair but can also be seen to be fair by their customers.

“Consumers are demanding that AI systems provide clear, understandable explanations for their decisions. We not only tell customers the reason behind decisions that affect them. We also explain to customers how they could act after that decision, to improve their financial situation, for example. Such explainable AI, which we call XAI, is very important to us,” says Itir Ürünay Aydoğan, Ecosystem Banking and Innovation Director at Yapi Kredi.
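
The “reason plus next step” pattern Aydoğan describes can be sketched with a toy model. This is not Yapi Kredi's actual XAI system; the model, features and advice strings below are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit-decision model over two illustrative features
X = np.array([[0.9, 0.2], [0.4, 0.7], [0.8, 0.1],
              [0.3, 0.9], [0.7, 0.3], [0.2, 0.8]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved
features = ["repayment_history", "credit_utilization"]
advice = {
    "repayment_history": "Making payments on time builds a stronger record.",
    "credit_utilization": "Using less of your available credit limit may help.",
}

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> None:
    """Report the decision, the feature weighing most against it, and a next step."""
    approved = bool(model.predict(applicant.reshape(1, -1))[0])
    contributions = model.coef_[0] * applicant        # per-feature pull on the score
    driver = features[int(np.argmin(contributions))]  # strongest negative pull
    print(f"Approved: {approved}. Main factor against approval: {driver}.")
    print(f"What you can do: {advice[driver]}")

explain(np.array([0.3, 0.85]))  # here high utilization drives the explanation
```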

6. Adapting to changing regulations

Regulations governing the use of AI and the application of AI data inevitably lag the pace of technological innovation. Applying evolving, and sometimes opaque, regulatory requirements is often difficult.

“We are currently waiting for secondary regulations to guide us as we implement the EU AI Act. The Act itself is more directive than pure regulation because it's full of principles,” says Intesa Sanpaolo’s Inverardi.

EY’s responsible AI framework

“Human-centric AI design focuses on using AI to help people have better ideas and make better decisions.” Itir Ürünay Aydoğan, Yapi Kredi.

To help organizations striving to be responsible and ethical in their application of AI, EY has designed a Responsible AI framework that endorses a range of fundamental values and principles (see diagram). The framework requires the participation of subject specialists from across the organization.

The framework is designed to enable organizations to implement AI systems faster, comply with evolving regulations and manage risk associated with AI. It seeks to ensure ethical conduct, legal and regulatory compliance and the protection of fundamental human rights while promoting principles such as accountability, fairness, transparency, security and data protection.

The Responsible AI framework requires a multidisciplinary team that should work together from the beginning of AI initiatives, says EY’s Wesdorp.

And the primary focus of responsible AI and ethics teams should always be what’s best for the customer, adds Yapi Kredi’s Aydoğan.

“Human-centric AI design focuses on using AI to help people have better ideas and make better decisions. When coupled with ethical guidelines, it allows a wide range of stakeholders to ensure that AI services are fair and equitable.”

Five key trends in financial services AI adoption

1. Only 11% of leaders are ready for upcoming AI regulations: 86% do not have AI ethics frameworks or functions in place.

2. Financial services firms are optimistic about AI but are slow to adopt: 90% of firms have integrated AI into their operations but many are still in the early phase of adoption.

3. Most AI implementations are in back-office operations: The introduction of AI in customer-facing applications is progressing more slowly.  

4. Skills gaps and the regulatory environment are key challenges to adoption: Other challenges include the speed of AI evolution, high cost of implementation, and the absence of governance frameworks.

5. ESG impact is largely overlooked: 51% have no plans or understanding of how to balance the extensive energy needs of AI systems with their ESG and Net Zero commitments.

EY European Financial Services AI Survey 2024


Want to find out more? Watch the replay of the event
