It’s a dark afternoon in mid-November. A volcanic ash cloud is visible on the horizon, and twilight has already set in over Reykjavik.  

A man at the airport fumbles through his credit cards. He has been waiting in line for two hours to buy a plane ticket out of Iceland. He is finally at the ticket desk, trying to book the next available flight back to the European mainland — to Frankfurt, to Paris, to wherever.  

Thousands of travelers are hurrying to leave the island before the airport shuts down. When the ash cloud arrives, it could ground flights for days, maybe weeks. 

“I’m sorry, sir,” the ticket agent repeats. “This card has been declined as well. I am afraid you will need to contact your bank to arrange payment.” 

It’s too late to contact the bank, the man knows. In the hours it will take to sort out his bank and then wait in line again for a ticket, airlines will have begun to cancel flights. He’s stuck. 

Back home, his card issuer might investigate why his account was flagged. Perhaps there was suspicious activity. Perhaps it was a false positive. Either way, that customer is going to want to know why his cards were declined at such a critical moment.  

And if the bank can only answer, “Because the system flagged it,” then that bank has a big problem on its hands. 

In banks today, the systems that flag accounts are built around machine learning models. These models are incredibly complex and much better than humans at detecting things like suspicious or fraudulent transactions. 

But we live in a human world. “Because the system flagged it” doesn’t cut it when a customer cannot access their money when they need it. People will rightly demand explanations when a system halts them from doing whatever it is they are trying to do. 

This is where model explainability comes in. In this article, David Sutton from Featurespace’s data science team explores what explainability is, why it matters to financial institutions and how Featurespace researchers are pushing its boundaries forward. 

What is model explainability? 

Machine learning models are designed to predict events and outcomes based on input data. When a model has millions of input data points — plus the billions of ways those data points can interact — it becomes impossible for a human brain to comprehend the model’s calculations. 

Model explainability is concerned with giving humans the tools to understand how a model arrives at a certain conclusion.  

This solves the “Because the system flagged it” problem. Instead, the bank’s team could tell a customer, “Our system spotted a pattern of suspicious transactions, and it flagged your account as having possibly been accessed by an unauthorized person.” 

That example describes the current generation of explainable AI. This is the kind of capability any financial institution should have right now. 
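As a rough illustration of what this current generation can look like in practice, the sketch below ranks which input features drove a toy fraud classifier’s predictions using permutation importance, one common model-agnostic attribution technique. The model, feature names and data are all hypothetical stand-ins, not Featurespace’s system.

```python
# A minimal sketch of current-generation explainability: ranking which input
# features drove a fraud model's predictions. The data and model here are
# invented for illustration, not a production fraud system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical transaction features: amount, hour of day, distance from home,
# and number of transactions in the last 24 hours.
feature_names = ["amount", "hour", "distance_from_home", "txns_last_24h"]
X = rng.normal(size=(5000, 4))
# Synthetic label loosely tied to amount and distance, standing in for fraud.
y = ((0.8 * X[:, 0] + 1.2 * X[:, 2] + rng.normal(scale=0.5, size=5000)) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score degrades -- a simple, model-agnostic form of attribution.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```

An attribution like this is what lets a bank say “the amount and the distance from home drove the flag” rather than “because the system flagged it.”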

So, what’s the problem? 

Models can be wrong. Systems can misidentify fraud as genuine behavior, and they can flag legitimate transactions as potentially fraudulent. And as models grow in complexity, the tools for explaining their predictions must evolve with them. 

So, the next generation of explainable AI needs to focus on interrogating the evidence that supports a model’s conclusion. It needs to be able to quantify the uncertainty in the output of a machine learning model and find what aspects of the input data caused the model to be uncertain. 
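The framework described later in this article is more involved than anything that would fit here, but a simplified sketch can convey the general idea: estimate how uncertain a model is about a single prediction, then probe which input fields that uncertainty is most sensitive to. The ensemble, feature names and data below are hypothetical, and this is an illustration of the concept only, not the method from the paper.

```python
# A simplified sketch of uncertainty attribution: estimate how unsure an
# ensemble is about one transaction, then ask which input field that
# uncertainty is most sensitive to. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["amount", "hour", "distance_from_home", "txns_last_24h"]
X = rng.normal(size=(5000, 4))
y = ((X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=5000)) > 1.0).astype(int)

# A small bootstrap ensemble stands in for a proper Bayesian treatment.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(RandomForestClassifier(n_estimators=50, random_state=seed)
                    .fit(X[idx], y[idx]))

def predictive_std(x):
    """Spread of fraud probabilities across the ensemble for one transaction."""
    probs = [m.predict_proba(x.reshape(1, -1))[0, 1] for m in ensemble]
    return float(np.std(probs))

flagged = X[0]                      # one flagged transaction
baseline = predictive_std(flagged)  # how uncertain the ensemble is about it

# Attribute uncertainty by replacing one field at a time with a "typical"
# value (the training median) and measuring how much the uncertainty drops.
for j, name in enumerate(feature_names):
    counterfactual = flagged.copy()
    counterfactual[j] = np.median(X[:, j])
    delta = baseline - predictive_std(counterfactual)
    print(f"{name:>20}: uncertainty reduced by {delta:+.4f}")
```

A field whose replacement sharply reduces the ensemble’s disagreement is a field the model was unsure about, and that is exactly the kind of evidence an investigator can interrogate.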

What would better model explanations look like? 

In July 2021, Featurespace’s Iker Perez, Piotr Skalski, Alec Barns-Graham, Jason Wong and I published a paper describing a novel framework we had developed for attributing uncertainties in machine learning models.  

In the paper, we outlined a method for making complex models more explainable by probing the input data for sources of uncertainty. 

This is something our own brains do when we investigate questionable information. For example, imagine you’ve had a pleasant chat with a stranger on a Friday night out. At the end of the chat, they hand you their handwritten phone number on a cocktail napkin. To your dismay, the handwriting is barely legible. You cannot tell the fours from the nines. So, you call your friend over to help you investigate.  

“That’s clearly a diagonal line,” your friend says. “I’m pretty sure that’s a four.” 

“No, no,” you respond. “Look at the rounding there in the top left. That’s a nine.” 

Now, imagine a record of financial transactions rather than a written phone number. Imagine this record comes from a bank account that has been flagged because of suspicious transactions. Heat maps overlay the records and show which raw data points the model thinks represent out-of-character behavior. From there, the human operator investigating the case can construct their own meaningful, comprehensible narrative that explains the model’s prediction.  
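A toy version of such an overlay might look like the sketch below: per-field scores for a handful of transactions, rendered as a heat map so an investigator can see at a glance which values the model found out of character. The field names and scores are invented for illustration; in practice the scores would come from an attribution method such as the one sketched above.

```python
# A toy heat map of per-field scores over recent transactions, of the kind an
# investigator might review. Scores are invented for illustration; in practice
# they would come from an uncertainty- or attribution-based method.
import numpy as np
import matplotlib.pyplot as plt

fields = ["amount", "merchant", "country", "hour", "channel"]
transactions = ["txn 1", "txn 2", "txn 3", "txn 4"]

# Rows are transactions, columns are fields; higher = more out of character.
scores = np.array([
    [0.05, 0.10, 0.02, 0.08, 0.03],
    [0.12, 0.07, 0.04, 0.06, 0.05],
    [0.85, 0.20, 0.90, 0.15, 0.10],   # an unusual amount in an unusual country
    [0.10, 0.05, 0.08, 0.70, 0.06],   # an unusual time of day
])

fig, ax = plt.subplots(figsize=(6, 3))
im = ax.imshow(scores, cmap="Reds", vmin=0.0, vmax=1.0)
ax.set_xticks(range(len(fields)))
ax.set_xticklabels(fields)
ax.set_yticks(range(len(transactions)))
ax.set_yticklabels(transactions)
fig.colorbar(im, ax=ax, label="out-of-character score")
ax.set_title("Which raw data points look out of character, per transaction")
fig.tight_layout()
plt.show()
```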

This kind of data-level investigation represents a major improvement in model explainability. It’s at the bleeding edge even for academic researchers in data science.  

Most models today rely on reason codes, the same kind of codes you see on credit reports and medical bills describing why a value was calculated the way it was. Explainability within this framework involves curating evidence as to why a model predicted a specific outcome. According to Reason Code A, Reason Code B, and Reason Code C, the explanation would go, the model has concluded Outcome X.  

It’s the way a trial lawyer explains a case — which means only one side of the argument gets represented. The innovation here is akin to introducing an expert witness who can be cross-examined. That witness can speak to the model’s certainty and its uncertainty, which provides a balanced and more complete understanding of what the model is really thinking. 

Why do financial institutions need this level of explainability? 

As machine learning plays a bigger and bigger role in everyone’s financial lives, financial institutions will need to be able to honestly and transparently weigh the evidence behind any system’s predictions.  

There are three major reasons why: 

  • Compliance. Recent legislation in the UK, France, the United States and elsewhere has introduced into those countries’ legal systems the idea that people are owed an explanation for machine-driven decisions made about them — say, when a person’s mortgage application is denied because of a machine learning model’s prediction. Explainable AI will help financial institutions stay on the right side of compliance in the coming years. 
  • Customer loyalty. Imagine being denied a plane ticket purchase in a desperate moment, or being denied a mortgage when you feel your finances are in order. If your bank responds with an opaque “Sorry” in either instance, you’re not likely to remain a customer of that bank for much longer. 
  • Fairness. There have been high-profile stories in recent years of companies’ hiring algorithms discriminating against candidates, particularly women and ethnic minorities. People are increasingly calling for transparency from the institutions they depend on because they want to know they are being treated fairly in the eyes of the model. 

This is why our team has been researching novel methods of explainability to demystify deep learning networks such as our Automated Deep Behavioral Networks. To learn more about our research into model explainability and uncertainty attribution, have a look at our Explainability whitepaper.