Explainable AI: building confidence and compliance

FRA authors: Rim Belaoud, Manager, and Gerben Schreurs, Partner

In the financial sector, artificial intelligence (AI) and machine learning (ML) are now extensively used, from sophisticated trading algorithms and advanced customer service platforms to fraud detection models and compliance tools. However, as AI’s presence in finance grows, concerns about its transparency and accountability also intensify.

The move towards explainable AI, i.e. AI applications that users can understand and therefore trust, can address the confidence gap that might otherwise prevent an organization from realizing the true benefits this technology has to offer. This article offers key considerations that corporate compliance teams and their technology or innovation colleagues should tackle collaboratively.

What is at stake with opaque AI systems

In the world of AI, algorithms can be described as either black box or white box. Black box algorithms, such as the deep neural networks used in image recognition or natural language processing, produce results without revealing how they arrived at them, making their decision-making process opaque to users. White box algorithms, by contrast, such as decision trees or linear regression models, provide clear, interpretable explanations of how they reach conclusions from the input data, offering transparency into their decision-making logic. Historically, white box models were favoured for their transparency and ease of interpretation. However, black box models are gaining popularity because they can handle large, complex datasets and learn intricate patterns within them, in some cases producing more accurate predictions.

Implications for trust and adoption

The lack of transparency in black box algorithms complicates the auditing, validation, and verification of AI-driven outcomes, undermining trust among end-users, regulators, and customers. The EU AI Act, for example, gives individuals the right to lodge a complaint with a national authority about high-risk use cases (e.g. creditworthiness evaluation) that are believed to have caused harm or to infringe the regulation. Financial institutions face the risk of applicants challenging the validity of a decision; for example, if a loan application is declined by an opaque AI model, it will be difficult to demonstrate that the decision was rational and fair.

Similarly, in fraud detection, when transactions are flagged as suspicious without clear explanations, it is challenging for investigators and compliance officers to validate these decisions or understand their rationale. The resulting delays in addressing potential fraud cases may lead to incorrect rejections of legitimate transactions, or to larger investigations and enforcement actions where a genuine concern was initially dismissed.

This further exacerbates trust issues and regulatory compliance concerns. Crucially, the organization risks a vicious circle in which lack of trust impedes further adoption of AI and fosters reluctance to embrace positive change.

Ethical concerns

Black box algorithms in financial systems raise significant ethical and compliance concerns, particularly regarding potential discrimination against specific customer segments. Algorithms can inherit biases from developers' cognitive biases or from the historical data used to train the models, leading to unfair scoring of transactions for certain minorities, demographics or regions. For example, a fraud detection system trained on biased data may incorrectly flag transactions from certain ethnic groups as fraudulent. In addition to historical biases, algorithms can be influenced by sampling bias, algorithmic design choices and feedback loops, all of which contribute to potential disparities in decision-making.

Data security concerns

The lack of transparency in how algorithms handle data introduces security concerns, complicating the detection and mitigation of vulnerabilities. The opacity of black box algorithms heightens the risks associated with processing sensitive data, posing challenges for regulatory compliance and potentially resulting in breaches of data protection laws and regulations such as the GDPR.

How to achieve explainability

Whether your organization is at an early or advanced stage of incorporating AI into business processes, there are key themes your compliance and development teams can discuss to ensure all parties are confident in the systems deployed.

  1. Regularly assess data transparency

AI systems often analyse vast volumes of data from various systems. It is crucial to maintain comprehensive documentation of data sources, collection methods, and preprocessing steps to ensure full transparency. Data quality directly impacts AI model outputs, and poor quality or missing values can hinder explainability. Regular assessment of data quality, including bias detection, is essential to ensure explainability, fairness and reliability.
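For teams that want to make such reviews routine, the short Python sketch below illustrates one possible check: profiling missing values and comparing flag rates across a grouping attribute. The dataset and the column names ("region", "flagged", "amount") are purely illustrative assumptions, not a reference to any particular system.

```python
# Illustrative sketch of a recurring data-quality and disparity check.
# The DataFrame below is synthetic; in practice this would be the model's
# training or scoring data, and the column names are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "amount": rng.exponential(200, 1_000),
    "region": rng.choice(["north", "south", "east", "west"], 1_000),
    "flagged": rng.integers(0, 2, 1_000),
})
df.loc[rng.choice(df.index, 50, replace=False), "amount"] = np.nan  # inject gaps

# 1. Data-quality profile: share of missing values per column.
print(df.isna().mean().sort_values(ascending=False))

# 2. Simple disparity check: flag rates per group. A large gap is not proof
#    of bias, but it signals that data and model behaviour need closer review.
flag_rates = df.groupby("region")["flagged"].mean()
print(flag_rates)
print(f"Max gap between groups: {flag_rates.max() - flag_rates.min():.2%}")
```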

  2. Determine the appropriate level of interpretability for each use case

Achieving explainable AI in the financial sector involves implementing strategies that address both local and global explainability throughout the AI model lifecycle. Local explainability focuses on understanding individual predictions, while global explainability concerns the overall behaviour of the model and is often used in practice to explain the model to non-technical stakeholders who interact with it.

A safe starting point is to use interpretable white box machine learning models like decision trees and linear regression whenever possible, especially for high-risk use cases. These models provide local explainability by offering clear rules and coefficients that directly influence predictions. For example, in credit scoring, a decision tree can transparently show how factors such as income and credit history affect the decision to approve or deny a loan.
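As a minimal illustration of this transparency, the sketch below trains a small decision tree on synthetic credit data and prints its decision rules in plain text; the features (income, years of credit history) and the approval labels are invented for the example.

```python
# Minimal sketch: a transparent credit-scoring model whose rules can be printed.
# Feature names and data are synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50_000, 15_000, n)
credit_history_years = rng.integers(0, 25, n)
# Synthetic approval rule, just to have labels to learn from.
approved = ((income > 45_000) & (credit_history_years > 3)).astype(int)

X = np.column_stack([income, credit_history_years])
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, approved)

# The full decision logic can be read directly as human-readable rules.
print(export_text(model, feature_names=["income", "credit_history_years"]))
```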

That said, there is sometimes a trade-off between explainability and accuracy. While white box models are more transparent by nature, black box models often have higher predictive accuracy. Experts familiar with interpretability methods like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) can provide insight into complex models such as neural networks, enhancing both local and global explainability. LIME explains complex models by creating a simple, understandable model that approximates the behaviour of the original model for specific predictions. In contrast, SHAP shows how much each feature contributes to the final prediction.
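The sketch below shows roughly how such a post-hoc explanation might be produced, assuming the open-source shap package and a tree-based model trained on synthetic data; the feature attributions printed for a single case correspond to the local explanations described above.

```python
# Sketch of a post-hoc explanation for a more complex model using SHAP.
# Assumes the 'shap' package is installed; data and features are synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # local explanation for one case

# Each value shows how much a feature pushed this prediction up or down.
for name, value in zip(["f0", "f1", "f2", "f3"], shap_values[0]):
    print(f"{name}: {value:+.3f}")
```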

However, achieving global explainability in complex AI models remains challenging. Deep neural networks, for instance, may offer high accuracy but lack transparency in their decision-making processes. Interpretability considerations must be integrated early in model development (and documentation) to carefully build in the appropriate level of explainability.

  3. Encourage explainability by design

AI systems should be developed with inherent transparency and interpretability. Working with developers to prioritize explainability from the outset helps them create more understandable models, choose interpretable algorithms suited to each use case and its performance requirements, and implement user-friendly interfaces that make the models easier to understand and use.

  4. Establish AI governance

AI governance is crucial for model explainability: it provides the foundation for the safe and ethical use of AI. A dedicated model risk management team and a robust model risk management framework are both essential, setting guidelines for reviewing and validating data sources, verifying model assumptions, and ensuring accuracy and explainability, which helps mitigate related risks. By setting up teams along the three lines of defence, with clearly defined roles (e.g. AI development and implementation, monitoring, independent review and audit), organizations can identify and address biases, errors and vulnerabilities early in the development cycle. Continuous monitoring through performance and accuracy reviews is also essential to understand how the model's outputs are generated and to ensure that models remain accurate and reliable over time.
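A monitoring routine of this kind can be very simple in principle; the sketch below compares current accuracy against a validation baseline and flags when an assumed tolerance is breached (both thresholds are illustrative and would in practice be set by your model risk framework).

```python
# Minimal sketch of a recurring accuracy check, assuming labelled outcomes
# become available after the fact. Thresholds here are illustrative only.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy recorded at model validation (assumed)
MAX_ALLOWED_DROP = 0.05    # tolerance set by the model risk framework (assumed)

def review_model_performance(y_true, y_pred) -> bool:
    """Return True if the model still meets the agreed performance bar."""
    current = accuracy_score(y_true, y_pred)
    drop = BASELINE_ACCURACY - current
    print(f"Current accuracy: {current:.3f} (drop of {drop:.3f} vs baseline)")
    return drop <= MAX_ALLOWED_DROP

# Example usage with placeholder outcomes:
ok = review_model_performance([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
if not ok:
    print("Escalate to the model risk management team for review.")
```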

  5. Stay current

Technology will continue to progress, influenced by growing scrutiny, emerging regulations and user attitudes. New algorithms, methodologies, and tools continually enhance AI explainability, offering valuable insights into the decision-making processes of AI models. Keeping abreast of scientific advancements and user expectations is essential to steer your strategy in the right direction.  
