Machine learning models are designed to predict event outcomes based on input data. As the scale and complexity of that data grow, it becomes increasingly difficult for humans to understand the logic behind any individual decision.

As research progresses, machine learning models are becoming ever more complex and playing ever bigger roles in our financial lives. Financial institutions will need to be able to honestly and transparently weigh the evidence behind any system's predictions.

In this video, Dr David Sutton explains what model explainability is, why it is important, and how the next generation of explainable AI needs to evolve to keep pace with the models themselves.
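For a concrete sense of the attribution methods the readings below build on, here is a minimal sketch of integrated gradients, a published path-integral attribution technique. Everything in the snippet (the toy logistic model, its weights, the zero baseline, and the step count) is an illustrative assumption, not a real system or the specific method from the linked paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": a logistic score F(x) = sigmoid(w . x + b).
# The weights and bias are illustrative assumptions only.
w = np.array([0.8, -1.2, 0.5])
b = -0.1

def model(x):
    return sigmoid(w @ x + b)

def model_grad(x):
    # Analytic gradient of sigmoid(w . x + b) with respect to x.
    s = model(x)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, steps=50):
    """Approximate IG_i = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    using a midpoint Riemann sum along the straight-line path from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([model_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0, -0.5])      # input to explain (illustrative)
baseline = np.zeros_like(x)         # an all-zeros reference point

attributions = integrated_gradients(x, baseline)
print("prediction:  ", model(x))
print("attributions:", attributions)
# Completeness check: attributions should sum to F(x) - F(baseline).
print("sum vs delta:", attributions.sum(), model(x) - model(baseline))
```

The completeness check at the end illustrates the defining property of path-integral attributions: the per-feature contributions sum (up to discretisation error) to the change in the model's output between the baseline and the input.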

Read more:
Deep learning and the new frontiers of model explainability
Path Integrals for the Attribution of Model Uncertainties