From black box to open playbook: Bringing transparency to AI in banking

in partnership with

Finacle Infosys

Digital Reinvention
29/02/2024 Article

K. R. Venkatraman

Infosys Finacle

Head Product Architecture

Imagine yourself as a Premier League manager staring down a penalty shootout. You know your players’ strengths and weaknesses; you’ve analysed countless hours of footage; and you have a carefully crafted strategy. But when the pressure mounts and the shootout begins, you can’t explain to your team or fans why you chose a particular penalty taker.

Unfortunately, this can become a reality if banks utilise AI models that are not “explainable”. Opaque AI algorithms, while generating results, can keep their decision-making shrouded in secrecy, creating distrust, hindering compliance and ultimately limiting their true potential.

The financial alchemy of AI

In recent years, AI has become synonymous with efficiency and accuracy, processing vast amounts of data at unprecedented speeds. At the heart of banking operations, AI algorithms are driving decision-making processes, automating routine tasks and even predicting market trends with what was once unimaginable precision. However, the success and trustworthiness of these systems hinge on one fundamental factor: explainability.

Explainability, or the ability to elucidate how AI arrives at a particular decision, is paramount in an industry in which accountability and transparency are non-negotiable. Traditional machine learning (ML) models, often called “black boxes”, have faced scrutiny for their opaque nature.

The “black box” dilemma

Yet the “black box” nature of these AI models remains, raising red flags for regulators and consumers alike. Think of a loan application denied without explanation, leaving the applicant bewildered and the bank vulnerable to accusations of bias. The customer, seeking answers, demands a reason for the decision.

Or picture a fraud-detection system raising alarms but unable to articulate the suspicious patterns it identified, hindering effective intervention.

The intricacies of neural networks and complex algorithms make it challenging for banks to provide a clear and understandable rationale. This lack of transparency not only raises ethical questions but also poses regulatory risks.

Generative AI (GenAI), equipped with deep learning (DL) capabilities, accentuates the black box dilemma. While these models offer sophistication, their inner workings are often unclear.

Demystifying GenAI

As we delve into the GenAI era, banking executives must fully demystify the technology and comprehend its implications. GenAI represents a leap forward from its predecessors, incorporating advanced techniques such as transformer-based large language models (LLMs) capable of creating multimodal content. While the capabilities of GenAI are awe-inspiring, its adoption comes with unique challenges.

GenAI models’ complexities make them inherently less interpretable. To address this, banking executives must champion initiatives that prioritise not only the deployment of cutting-edge technologies but also the development of robust explainability frameworks. This involves investing in research and development (R&D) to create AI systems that balance sophistication with transparency.


Mitigating bias: The imperative of explainable AI

AI algorithms, like their human creators, can inherit the biases present in data. Explainable artificial intelligence (XAI) plays a crucial role in identifying and mitigating these biases. For instance, XAI can reveal factors in loan-denial algorithms that disproportionately impact specific demographics, prompting corrective actions to ensure equal access to financial services.
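To make this concrete, a minimal sketch of one such bias check, using hypothetical audit data and the common “four-fifths rule” threshold rather than any bank’s actual methodology, is to compare approval rates across demographic groups:

```python
# Minimal disparate-impact check (the "four-fifths rule") on hypothetical
# loan decisions, grouped by a protected attribute.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups approved at less than `threshold` x the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit data: (demographic group, approved?)
data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 50 + [("B", 0)] * 50
print(disparate_impact(data))  # group B is approved at 0.625x group A's rate
```

Real-world bias audits are, of course, far more involved, but even a check this simple turns a vague fairness concern into a measurable, reviewable number.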

A lack of explainability, and of XAI adoption, is no longer an acceptable blind spot. The European Union’s (EU’s) recent AI Act underscores the growing need for transparency in algorithmic decision-making.

In a Morning Consult and IBM study involving senior business decision-makers, 78 percent of respondents globally said it is very or critically important that they can trust that their AI’s output is fair, safe and reliable. Eighty-three percent of global respondents said explaining how AI arrived at its decisions was important, highlighting the critical need for trust-building through explainability.

Consumers, too, are demanding clarity.

Fortunately, a new dawn is breaking. XAI techniques are emerging, peeling back the curtain on these once-opaque models.

XAI goes beyond simply explaining AI outputs. It empowers customers to understand the “why” behind decisions, creating trust and improved agency. What if you get a loan rejection not as a cryptic score but with clear explanations detailing factors such as income stability or credit history? This transparency builds trust, allowing customers to address potential issues and improve their financial standings.
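As a minimal sketch of what such an explanation could look like, assuming a toy linear scoring model with made-up weights and features (not a real credit model), the per-feature contributions themselves can double as the customer-facing reasons:

```python
# A toy interpretable credit score: a linear model whose per-feature
# contributions double as the customer-facing explanation.
WEIGHTS = {"income_stability": 0.5, "credit_history": 0.35, "debt_ratio": -0.4}
BIAS, APPROVE_AT = 0.1, 0.5

def score_with_explanation(applicant):
    # Each feature's contribution is weight * value, so the score decomposes
    # exactly into human-readable parts.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda kv: kv[1])
    return score >= APPROVE_AT, score, reasons

approved, score, reasons = score_with_explanation(
    {"income_stability": 0.9, "credit_history": 0.3, "debt_ratio": 0.8}
)
# The lowest contributions tell the applicant what to improve first.
for feature, contribution in reasons:
    print(f"{feature}: {contribution:+.2f}")
```

Production models are rarely this simple, but the principle carries over: techniques such as SHAP produce the same kind of additive, per-feature breakdown for far more complex models.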

Imagine a fraud-detection model that pinpoints suspicious language patterns or unusual transaction behaviours, empowering human analysts to take swift and targeted actions while providing valuable insights to the bank.
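One inherently explainable way to surface such behaviours, sketched here with hypothetical transaction features and an arbitrary z-score threshold, is to compare each transaction against the customer’s own history so that every alert names exactly what triggered it:

```python
# Sketch of an explainable transaction flag: z-scores against a customer's
# own history, so each alert names the behaviour that triggered it.
import statistics

def explain_flags(history, txn, z_cut=3.0):
    """Return (feature, z-score) pairs for features that deviate sharply."""
    flags = []
    for feature, value in txn.items():
        mean = statistics.mean(history[feature])
        sd = statistics.pstdev(history[feature]) or 1.0  # avoid divide-by-zero
        z = (value - mean) / sd
        if abs(z) >= z_cut:
            flags.append((feature, round(z, 1)))
    return flags

# Hypothetical customer history and a suspicious new transaction.
history = {"amount": [20, 25, 22, 18, 21], "hour": [12, 13, 11, 14, 12]}
txn = {"amount": 500, "hour": 3}
flags = explain_flags(history, txn)
for feature, z in flags:
    print(f"unusual {feature} (z = {z:+})")
```

An analyst reading these flags knows immediately why the transaction was stopped, which is precisely the “swift and targeted action” that an unexplained alert score cannot support.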

Explainability serves as a bridge between complex algorithms and human comprehension. It ensures that the decisions made by AI models align with regulatory requirements and ethical standards. Moreover, for an industry in which trust is the bedrock of client relationships, XAI instils confidence in customers and the community at large.

Some banks are already leading the way. J.P. Morgan, for example, has established an Explainable AI Center of Excellence (XAI COE) that brings researchers together to advance XAI objectives.

By embracing XAI, banks can unlock several benefits:

  • Building trust and transparency: With clear justifications behind AI-driven decisions, customers feel empowered and confident, building trust in the technology. For executives, XAI is crucial for maintaining customer trust, a pivotal advantage in today’s competitive landscape.

  • Ensuring regulatory compliance: Explainable models demonstrably meet stringent regulations around fairness and non-discrimination, mitigating the risk of penalties and reputational damage. In an era of heightened scrutiny, XAI becomes a shield, not a liability.

  • Optimising model performance: Unveiling the inner workings of these models allows for targeted debugging and improvement, leading to better decisions and, ultimately, stronger financial performance. So, you can continuously refine an AI-powered fraud-detection system to identify increasingly sophisticated threats, safeguarding precious resources.

Explainability in practice and what’s ahead

While XAI promises to shed light on complex algorithms, it’s not a magic wand. Like any technology, it has limitations. XAI can often explain “what” an algorithm did, not “why” it did it. The explanation may be technically accurate but lacks the deeper context and reasoning behind the decision. This may leave users confused or unsatisfied and perhaps have regulatory implications.

Despite these limits, XAI brings to mind something I read in Leading: Learning from Life and My Years at Manchester United by Sir Alex Ferguson, the legendary former Manchester United manager: “When you are in the football world, and I suspect in almost every other setting, you have to make decisions with the information at your disposal, rather than what you wish you might have.”

As Ferguson wrote, banks must make decisions with the information at hand. But when those decisions cannot be explained, suspicion and frustration can build, potentially driving customers away.

Transparency and explainability, though, are not one-size-fits-all solutions. Different stakeholders, including customers, regulators and internal teams, have varying levels of technical understanding. Thus, an effective explainability strategy should include tailored approaches for different audiences. This could involve developing user-friendly interfaces, implementing clear documentation and proactively communicating about AI initiatives.

As AI continues to evolve, its observability and explainability are bound to improve.

Much as a Premier League manager relies on a transparent playbook to guide his team to victory, banks must embrace XAI to navigate the intricate landscape of modern finance. XAI doesn’t just unlock the secrets of the “black box”; it illuminates the entire field, empowering informed decision-making and fostering trust with every play.

Consider a transfer window during which every scout’s report is scrutinised, revealing the data and reasoning behind each player’s valuation. This is the level of transparency XAI brings to loan approvals, fraud detection and risk management. Banks can explain their “transfers” with clarity, mitigating bias and enhancing trust with consumers and other stakeholders.

But just as a well-practised free-kick routine crumbles under unforeseen pressure, XAI requires ongoing development and adaptation. Ethical considerations are paramount to ensuring fairness and preventing unintended consequences. Robust governance frameworks are the cornerstones of our banking stadium, upholding responsible AI practices.

This article was first published in the International Banker.
