Machine learning models are designed to predict event outcomes based on input data. As the scale of data increases, it becomes increasingly difficult for humans to understand the logic behind a given decision.
As research progresses, machine learning models become ever more complex and play bigger roles in our financial lives. Financial institutions will therefore need to be able to honestly and transparently weigh the evidence behind any system's predictions.
In this video, Dr David Sutton explains what model explainability is, why it is important, and how the next generation of explainable AI needs to evolve to keep pace with the growth and evolution of models themselves.
Read more:
Deep learning and the new frontiers of model explainability
Path Integrals for the Attribution of Model Uncertainties
About the speaker
David is Director of Innovation at Featurespace, where he directs the company's Research and Development. He moved from research astrophysics at Cambridge University to commercial data science in 2015, when he joined Featurespace. He completed his DPhil in Astrophysics at Oxford University in 2010.