Financial services firms need to tighten risk controls as AI usage climbs

Financial services firms across Europe are moving quickly to implement AI. Yet more than half the executives running those companies admit their organizations are not prepared for the risks AI will bring, according to EY.

18/11/2025 Perspective

Rapid advances in technology, combined with regulatory uncertainty, are hampering efforts by banks, insurers, and wealth management firms to manage their AI implementations securely.

The stakes are high. The European Union (EU) AI Act threatens fines of up to €35 million or 7% of global turnover for non-compliance.

To explore how banks can improve risk controls and embed responsible AI and data ethics across their organizations, the Qorus Digital Reinvention Community and EY hosted an online event that featured three specialists from across the financial services industry. EY’s Bernadette Wesdorp was joined by Silvia Tessaro Trapani from Intesa Sanpaolo and Beyazit Karabulut from Akbank.

The speakers described how banks can build on their risk and governance frameworks to better accommodate the challenges of AI and enhance the resilience and performance of their organizations.

Key takeaways

  • 57% of financial services executives in Europe say their organization’s approach to risk is insufficient for AI.
  • 31% of financial services firms in Europe report gaps in their AI controls.
  • The European Union (EU) AI Act threatens fines of up to €35 million or 7% of global turnover for operators of non-compliant high-risk systems.
  • Human oversight across AI systems and processes is critical to establishing responsible AI.
  • Responsible AI strengthens trust among employees, business partners, and customers, and boosts business performance.

“Understanding what your organization is doing in terms of AI and its control is very, very important,” Bernadette Wesdorp, Financial Services AI Leader at EY

Organizations will need to improve the oversight and control of their AI systems before August 2026, when the AI Act’s provisions for operators of high-risk systems come into effect, says Bernadette Wesdorp, Financial Services AI Leader at EY.

AI risk isn’t just a regulatory challenge. An AI misstep by a bank could quickly undermine client trust and damage its business.

EY research highlights executive AI concerns

EY’s research reveals widespread concern about AI among financial services executives in Europe. The research, which canvassed 410 executives in a dozen countries, found that 57% of financial services leaders are concerned that their organization's approach to technology-related risk is insufficient in the face of emerging AI technology. Furthermore, 31% of organizations have gaps in their AI controls and 30% have limited or no controls to ensure AI is free from bias.

Leaders across the financial services industry differ in their assessment of the likely consequences of AI. Executives in the wealth and asset management industry, for example, are most wary. More than half of those surveyed by EY are concerned that AI might trigger job losses, manipulate consumers, or behave unpredictably without clear human oversight.

By contrast, executives in banking and capital markets are most positive, with 65% confident that AI will make it easier to perform tasks that require academic or technical training. EY found that banks and capital markets firms have the strongest AI controls. 

Executives also worry that AI might weaken employees’ cognitive ability, blur accountability, or operate erratically when interacting with external AI systems. 

Despite limited risk controls and concerns among some executives about the effects of AI, most financial services firms are pushing ahead with their AI plans and quickly embracing advances in such technologies. More than a third of the leaders surveyed by EY say their firms are already using agentic AI (systems that can perform tasks independently) and a similar proportion plan to adopt those technologies in the next 12 months.

“"The human operator should be able to stop, override, or modify AI output,” ” Silvia Tessaro Trapani, Data and AI Ethicist at Intesa Sanpaolo

Responsible AI requires human oversight 

Financial services firms need to bolster their AI controls by strengthening the risk and governance frameworks already in place across their organizations and embedding human oversight into AI systems and processes, says EY’s Wesdorp.

IT, legal, data, compliance, and business teams should work together to establish responsible AI that supports the organization’s goals, she adds.

Human oversight across AI systems and processes is critical to establishing responsible AI, says Silvia Tessaro Trapani, Data and AI Ethicist at Intesa Sanpaolo. 

She points out that under the EU AI Act, AI providers and operators share responsibility for establishing human oversight frameworks for high-risk AI systems.

To ensure effective human oversight, AI operators such as financial services firms need to equip their employees with sufficient skills, knowledge, and authority, says Trapani. Overseers must understand an AI system’s role in its business or operational context, the legal ramifications of the decisions it makes, the accuracy and fairness thresholds it has to meet, and pitfalls such as automation bias and cognitive overload that can degrade output.

Trapani recommends that organizations tailor their risk frameworks to meet the specific requirements of each of their AI systems. She points out that by combining a preliminary fundamental rights impact assessment with ongoing human oversight, organizations can establish safeguards that are both preventive and remedial.

“The human overseer, if properly trained with the right technical capacity and appointed authority, is able to shield erroneous outputs before they create legal or ethical consequences.” 

“"The performance metrics always were a priority, but now we are trying to find the sweet spot using the fairness metrics and combining them with performance metrics," ” Beyazit Karabulut, Analytics Lab Vice President at Akbank

Financial services firms are stepping up AI training

Financial services firms recognize that AI training is critical. EY found that 88% of the companies it surveyed are investing moderate, significant, or extensive amounts in developing AI training. Furthermore, 84% are investing in testing and auditing of AI models, and 83% are spending on data access control.

Effective integration of human oversight not only mitigates AI risk but also strengthens trust among employees, business partners, and customers, and boosts business performance. Robust governance and oversight enable banks to scale AI applications quickly and effectively.

Beyazit Karabulut, Analytics Lab Vice President at Akbank in Turkey, says the bank has incorporated its core principles of transparency, accountability, sustainability, resilience, data privacy, and fairness into a manifesto that guides its application of AI. Akbank now measures fairness alongside business performance. 

He adds that Akbank is working to strengthen customer trust by ensuring greater transparency across its distributed AI systems.

“Now that we have processes and tools that we can implement, we are going to scale them.”
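
As a rough illustration of the “sweet spot” Karabulut describes, the sketch below scans decision thresholds and reports a simple performance metric (accuracy) alongside a simple fairness metric (the gap in approval rates between two groups). The data, metric choices, and thresholds are hypothetical assumptions for illustration only, not Akbank’s actual models or methodology.

```python
# Minimal sketch: weighing a performance metric against a fairness metric
# across decision thresholds. All data, metrics, and thresholds here are
# hypothetical illustrations, not Akbank's models or methodology.
import numpy as np

def accuracy(y_true, y_pred):
    """Share of predictions that match the true labels."""
    return float((y_true == y_pred).mean())

def approval_rate_gap(y_pred, group):
    """Absolute gap in approval rates between group 0 and group 1
    (a simple demographic-parity-style fairness metric)."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

# Hypothetical scores, labels, and a binary protected attribute; group 1
# receives slightly higher scores so the fairness gap varies with threshold.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1_000)
base = rng.uniform(size=1_000)
scores = np.clip(base + 0.10 * group, 0.0, 1.0)
y_true = (base + rng.normal(0.0, 0.2, size=1_000) > 0.5).astype(int)

# Scan thresholds and report both metrics so a reviewer can pick a "sweet spot".
for threshold in np.linspace(0.3, 0.7, 5):
    y_pred = (scores >= threshold).astype(int)
    print(f"threshold={threshold:.2f}  "
          f"accuracy={accuracy(y_true, y_pred):.3f}  "
          f"fairness_gap={approval_rate_gap(y_pred, group):.3f}")
```

In practice, a bank would substitute its own model scores, protected attributes, and regulatory fairness thresholds, and review the trade-off with human oversight rather than picking a threshold automatically.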

EY’s Wesdorp points out that embedding responsible AI is far more than a technical or ethical challenge. The trust it creates adds measurable value to the organization. 

“Ultimately you want to create value, but it’s important to do that in a trustworthy way.”

Tougher regulations, including next year’s provisions in the EU AI Act, will compel financial services firms to tighten their AI security and governance frameworks. Banks that move quickly to embed responsible AI across their organizations will be well set to win the confidence of clients, attract new business, and generate strong returns on their AI investments.

Digital Reinvention community

With Qorus memberships, you gain access to exclusive innovation best practices and tailored matchmaking opportunities with executives who share your challenges.
