The Wolfsberg Group – an association of 13 global banks – published its Principles for Using Artificial Intelligence and Machine Learning in Financial Crime Compliance, outlining its support for the use of Artificial Intelligence (AI) and Machine Learning (ML) by Financial Institutions (FIs).
We know that AI and ML can be used to better detect, investigate, and manage financial crime risk. But for these solutions to be considered responsible, and to meet evolving regulatory expectations around innovation and the responsible use of AI, they must comply with mandates on fairness, efficacy, explainability, and the proportionate use of protected data.
The Wolfsberg Group principles outline the approach of its banking members to the ethical and responsible use of AI for Anti-Money Laundering (AML) and financial crime programs. Whilst the principles are sound, there is often a lag between agreeing principles and implementing them successfully within FIs. That lag can be caused by knowledge, resource, or technology demands, and it is the root cause of the financial services industry's slowness in applying AI and ML technologies to financial crime management.
So how can FIs turn the five principles into action, enabling innovation efforts?
Legitimate Purpose
Data has been referred to as ‘the new gold’ for some time, and whilst it is incredibly valuable in developing ML models that can better identify financial crime patterns, there are significant requirements to consider around the volume and type of data used, both for the model and for ongoing transaction monitoring. Regulatory guidance often outlines the concept of legitimate purpose for the data. In practice within FIs, this translates into cleansing historic data, including the evidence and decisioning captured alongside it. The goal is to uncover historic bias which could otherwise be introduced into ML models. These biases could include the raising of ‘defensive’ Suspicious Activity Reports (SARs), where FIs apply a ‘better safe than sorry’ approach to reporting for law enforcement. Additionally, risk perspectives can change over time or can be subjective in nature, so it is important to consistently apply the same “risk lens” to how previous outcomes were designated.
If these historical biases were used as training data for a model, they could perpetuate those distortions and create unintended outcomes or ineffective results. Data sets containing demographic bias could introduce unfairness that reveals itself, for example, in fraud prevention programs, and could be particularly detrimental to marginalized or vulnerable populations, who may find limits placed on their ability to transact or on where they are geographically able to conduct business. This can be overcome by standardizing review practices across programs, which should also be designed to capture the context of why outcomes are ‘good’ vs ‘bad’, or why an event presents risk or not.
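To make this concrete, the check below is a minimal sketch of one way to surface potential labeling bias before training. It assumes a hypothetical historical alert dataset with an `escalated_to_sar` outcome column and a `customer_segment` attribute retained solely for fairness testing; the 0.8 threshold is a common rule of thumb, not a regulatory requirement.

```python
import pandas as pd

# Hypothetical historical alert data: each row is a reviewed alert with the
# investigator's outcome and a demographic attribute kept for fairness testing.
alerts = pd.DataFrame({
    "customer_segment": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "escalated_to_sar": [1, 0, 1, 1, 0, 0, 1, 1],
})

# Escalation rate per segment: large gaps can indicate historic bias,
# e.g. 'defensive' SARs concentrated on one population.
rates = alerts.groupby("customer_segment")["escalated_to_sar"].mean()
print(rates)

# Disparate-impact style ratio: lowest rate over highest rate.
ratio = rates.min() / rates.max()
print(f"Escalation-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: review labeling practices before training on this data.")
```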
How FIs can integrate an assessment of ethical and operational risks into their risk governance approach
It is crucial to comprehensively review ethical and operational risk, and to address any potential bias in data, particularly where operations are highly sequenced across disparate teams or where operational pressures such as investigator inexperience, cost-cutting mandates, or mandated production goals could negatively influence outcomes. FIs should:
- Ensure the use of the data underlying model features and development conforms to privacy and data requirements
- Align the conceptual design of the model with measurable outcomes and ensure its use in line with the risks identified in programs’ risk assessments
- Verify that data usage supports effective outcomes within the focus areas of programs and regulatory priorities
- Define and implement an AI/ML risk framework which identifies and measures AI/ML risks as they relate to financial crime, including definitions and descriptions of the underlying controls implemented to manage those risks, e.g., accuracy, bias, transparency, performance, and privacy (a minimal sketch of such a registry follows this list)
- Develop comprehensive documentation supporting the design, theory, and logic of the approach and development of the model, to demonstrate the legitimacy of the purpose and proportionate use
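As an illustration of the framework point above, the snippet below sketches a minimal, hypothetical risk-and-controls registry in Python. The risk names mirror those listed in the principle; the specific controls are illustrative assumptions, not prescribed practice.

```python
from dataclasses import dataclass, field

# A hypothetical, minimal registry for an AI/ML risk framework:
# each financial-crime-relevant risk is paired with the controls
# implemented to manage it.
@dataclass
class RiskEntry:
    risk: str
    description: str
    controls: list[str] = field(default_factory=list)

framework = [
    RiskEntry("accuracy", "Model misses or over-flags activity",
              ["back-testing against investigator outcomes", "champion/challenger"]),
    RiskEntry("bias", "Outcomes skew against a population",
              ["fairness metrics per segment", "labeling reviews"]),
    RiskEntry("transparency", "Scores cannot be explained",
              ["feature importance reports", "model documentation"]),
    RiskEntry("privacy", "Data used beyond legitimate purpose",
              ["data minimization checks", "access controls"]),
]

# Each entry feeds the documentation trail described in the next point.
for entry in framework:
    print(f"{entry.risk}: {len(entry.controls)} controls documented")
```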
Ethical mining of KYC and AML data
Appropriate use of technology is a core part of ensuring legitimate purpose. Returning to the concept of data as gold, there is often a desire within other operational areas of the institution, such as business development and marketing, to mine this data for further value. This is a risky approach when set against the regulatory concept of legitimate purpose: improving the identification and reporting of suspected financial crime is a legitimate purpose, but using Know Your Customer (KYC) or AML data to generate insights for marketing segmentation could contravene most regulatory definitions of legitimate purpose. It may even be contrary to an institution’s own data protection policies or a customer’s understanding of how their personal data is used.
Proportionate Use
Data protection regulations in jurisdictions around the world mandate that any data used, even in the pursuit of financial crime, must be proportionate. In basic terms, this means collecting and analyzing only the data essential to monitoring financial crime for that purpose. Proportionality can be assessed by the breadth of data points collected, or by the length of the time series retained. And while it is important to consider proportionate usage from a compliance perspective, it is equally important from a cost-efficiency point of view: it is about balancing the benefits of technology against the potential risk and additional effort of managing larger data sets which are not providing actionable insights.
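One possible way to operationalize proportionality is to measure whether each data point actually contributes to detection, and to retain only those that do. The sketch below uses synthetic data and a random forest's feature importances with an illustrative, policy-set floor; the feature names and threshold are hypothetical assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical monitoring features; 'y' stands in for historical alert outcomes.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "txn_amount_zscore": rng.normal(size=500),
    "velocity_7d": rng.normal(size=500),
    "dormancy_days": rng.normal(size=500),
    "marketing_segment": rng.normal(size=500),  # candidate for removal
})
y = (X["txn_amount_zscore"] + X["velocity_7d"] > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Drop features whose importance falls below a policy-set floor, so only
# data essential to the monitoring purpose is collected and retained.
floor = 0.05
keep = [f for f, imp in zip(X.columns, model.feature_importances_) if imp >= floor]
print("Retained features:", keep)
```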
The Wolfsberg Group also nestles margin for error under the heading of proportionate use, flagging the delicate balance between feeding systems sufficient data for accurate pattern analysis and restricting data within the concept of proportionate use for financial crime compliance.
Appropriate use of AI and ML
AI and ML are not appropriate in all scenarios. It is crucial that organizations honestly evaluate where these technologies can complement existing risk strategies, and consider both data availability and organizational readiness for more complex machine learning models, such as neural networks.
There is no one-size-fits-all when it comes to ML techniques, and FIs should avoid vendors who blindly champion one technique over another. Supervised, unsupervised, and other techniques all have their place depending on the particulars of the organization and the use case. Nor is ML all or nothing: a detection program reliant solely on machine learning models may not yet be the most appropriate choice for an organization. ML opens up a whole host of advanced analytical methods, but these do not have to replace existing rules, and organizations considering a first implementation can look for simpler ML use cases (a sketch of the first appears after this list), including:
- Prioritizing and managing rule-generated alerts
- Targeted analytical approaches or specialist scores for typologies poorly served by rules, e.g. structuring
- Additional coverage of emerging risk through complementary monitoring
- Optimizing rule thresholds and deriving rule expressions
- Profiling rules for use in dynamic thresholds
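As a sketch of the first use case: alert prioritization can be as simple as an explainable supervised model trained on historical alert dispositions and used only to order the review queue, while the existing rules continue to generate the alerts. The features and data below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for features describing rule-generated alerts
# (e.g. amounts, counts, customer risk rating) and historical
# dispositions (1 = escalated by an investigator).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A simple, explainable scorer layered on top of existing rules:
# rules still fire the alerts, ML only orders the review queue.
scorer = LogisticRegression().fit(X_train, y_train)
scores = scorer.predict_proba(X_test)[:, 1]

# Work the queue highest-risk first instead of first-in-first-out.
queue = np.argsort(-scores)
print("Top 5 alerts to review:", queue[:5])
```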
Calculating the margin of error
The Wolfsberg Group’s principles rightly highlight the need to assess the margin of error within AI/ML solutions. There is no existing industry standard for comparing human margins of error against those of AI/ML systems. Although these technologies can be more accurate than historical approaches, it is unreasonable to expect a zero rate of error. The margin of error for these technologies should be managed and mitigated in the same way as manual and human error: defined, documented, risk-appropriate indicators should be leveraged to assess and understand outcomes impacting model effectiveness.
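In practice, such indicators can be as simple as precision, recall, and false positive rate computed from a sample of reviewed alerts and tracked against risk-appetite thresholds. The counts below are illustrative, not benchmarks.

```python
# Hypothetical confusion-matrix counts from a sample of alerts reviewed
# against investigator ground truth.
tp, fp, fn, tn = 80, 40, 20, 860

precision = tp / (tp + fp)            # share of flagged events that were truly risky
recall = tp / (tp + fn)               # share of truly risky events that were flagged
false_positive_rate = fp / (fp + tn)  # share of benign events incorrectly flagged

# Each indicator should be defined, documented, and tracked over time
# against a risk-appetite threshold.
print(f"Precision: {precision:.2f}, Recall: {recall:.2f}, "
      f"FPR: {false_positive_rate:.3f}")
```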
Design and Technical Expertise
The Wolfsberg Group recommendations center on ensuring understanding of any technology implemented, and on creating that understanding through processes and people, with a view to managing both bias and risk. Education and understanding of novel technologies are key. Over the last few years, we have seen the demystification of AI/ML and more earnest conversation centered on when and how it can be used, not if. But there is more to be done.
There remains a gap in understanding the effort behind models and AI and their use in financial crime. While AI/ML is already used in many FIs, this is often outside of financial crime units, in areas such as credit or marketing. The positive is that model risk management and model validation approaches therefore already exist within the organization. Unfortunately, those teams don’t tend to understand financial crime programs and their compliance or risk complexities. This is a particular issue in smaller FIs and unregulated fintechs. Fraud teams have had higher rates of ML adoption but have historically not been subject to the same heightened governance over model implementation and use.
Some financial crime teams tend to have less exposure to advanced data science practices and AI/ML techniques than counterparts elsewhere in the organization, with most usage centering on process automation or Management Information (MI) reports. This is undoubtedly a factor in the high number of attempted in-house AI/AML projects that fail. The lack of internal expertise plays out as machine learning operations, and the models themselves, failing validation, and the aftermath is a misguided impression that the technology itself has failed.
This lack of financial-crime-specific model risk management expertise extends to the regulatory environment. Regulatory clarity and understanding are important to successful adoption, but many frameworks were formulated not for AML transaction monitoring but for other risk scoring, such as credit. SR 11-7: Guidance on Model Risk Management and OCC 2011-12: Supervisory Guidance on Model Risk Management from US regulators are such examples, and are currently being re-evaluated for update. Current frameworks support strong governance but are perceived as too cumbersome to support the agility and effectiveness needed to combat constantly evolving financial crime risks.
Partnering for model governance
Without in-house expertise, FIs need to partner with experts in financial crime analytics. Experts can help FIs understand their own data and risk appetite, and identify a solution that appropriately meets requirements.
The key to a successful partnership is a clear plan for implementation, with success criteria based on reasonable, measurable metrics: is a 95 percent reduction in false positives realistic in all use cases, for example? In this way, financial crime teams can cut through the boardroom hype around AI/ML and design a technical solution that delivers tangible results.
Accountability and Oversight
The Wolfsberg Group principles rightly emphasize the responsibility of FIs for their use of AI and ML, whether systems are developed in-house or sourced externally. For FIs to take accountability, they need vendor solutions that provide transparency on model performance, retuning, and updates. Regular check-ins and governance processes are essential. Ensuring oversight using approaches such as ‘human in the loop’ or even ‘compliance in the loop’ across all phases is a strong starting point: from model development to implementation, AI/ML use should be actively managed, with stakeholders aware of and engaged in the outcomes.
The principles also drill down on the ethical use of data in AI/ML as part of accountability. It is for the FI to define and maintain its approach to ethical AI, which may sit within existing risk or data management. The specific call to “establish processes to challenge their technical teams whenever necessary and probe the use of data within their organizations” is an excellent recommendation. These frameworks should require and incentivize an “effective challenge” of the use of AI/ML.
Financial crime teams would do well to take a more active role in data strategies. Some FIs have begun to position a Head of Data within financial crime compliance and to embed dedicated analytics resources in the teams responsible for fighting financial crime, but they remain too few across the industry.
Openness and Transparency
It is notable that The Wolfsberg Group is cognizant of the challenges of transparency. Too much open data sharing can help rather than hinder criminals, and even where it could aid AML programs, it may breach regulatory requirements around reporting confidentiality and tipping off, as well as data protection obligations.
A focus on transparency should already feed into FIs’ selection of AI/ML techniques and of external providers. Unlike the black boxes of old, ML models should be ‘white boxes’, providing transparency into the following (a sketch illustrating the second point follows the list):
- Exactly how data feeds are used and how data lineage is guaranteed
- Which signals the model is considering, ensuring those are human-readable and understandable by FI stakeholders
- Evidence that algorithm selection is in line with model governance and the FI’s risk appetite
- Extensive model documentation
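On the second point, one model-agnostic way to make signals humanly readable is permutation importance, which measures how much performance degrades when each signal is shuffled. The sketch below uses synthetic data and hypothetical signal names.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Hypothetical named signals so the importance output reads in business terms.
feature_names = ["txn_amount", "country_risk", "velocity_30d", "account_age_days"]
rng = np.random.default_rng(2)
X = rng.normal(size=(800, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

model = GradientBoostingClassifier(random_state=2).fit(X, y)

# Permutation importance: shuffle each signal and measure the performance
# drop. For a real model this would be run on a held-out evaluation set.
result = permutation_importance(model, X, y, n_repeats=10, random_state=2)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```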
Perhaps the most important workstream in the pursuit of openness and transparency involves stakeholders: spending the time to ensure they deeply understand what each model does, how it works, and the associated risks. Explainability requirements in upcoming regulations will center on this idea of the business and risk owner for financial crime being able to interpret and explain the AI and ML in production in their programs.
Braver banks adopting AI
Only the braver banks seem to be innovating with AI when it comes to financial crime compliance in traditional financial services. Fintechs, neobanks, and payment service providers, who are leading efforts to advance the use of ML in fraud and anti-money laundering programs, can also strengthen their governance programs by applying the five principles. It is positive to see the banks that form The Wolfsberg Group creating confidence in the use of AI and ML by FIs to improve their financial crime compliance programs.
The benefits for detection (achieving more effective detection, and a holistic customer and transaction view) coupled with the benefits for operations (reducing manual reviews and developing a risk-based approach to review) and compounded by the benefits for customers (less friction and no redundant queries) add up to a compelling business case.
3 pillars for realizing value from AI/ML in financial crime compliance (FCC)
The principles from The Wolfsberg Group recognize that realizing benefits of these new technologies requires a comprehensive strategy including:
- A robust control framework
- Adequate understanding of AI and ML across the team
- A clear data strategy
Many of the benefits of advanced analytics depend on a data strategy that brings together the necessary information from the various silos and sources across the organization on which to perform analytics. In a previous publication, Digital Customer Lifecycle Risk Management, the Wolfsberg Group outlined how digital attributes such as IP address and device information, which have historically been captured at onboarding for fraud prevention purposes, should also feed into holistic customer profiles for financial crime analytics. Designing a collective data and advanced analytics strategy for the entire organization is the path to delivering on the promise of ML in AML.
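A minimal sketch of that idea: joining onboarding digital attributes with aggregated transaction activity into a single customer profile for financial crime analytics. The silo names, fields, and values are hypothetical.

```python
import pandas as pd

# Hypothetical silos: digital attributes captured at onboarding for fraud
# prevention, and transaction activity held by the AML monitoring system.
onboarding = pd.DataFrame({
    "customer_id": [101, 102],
    "signup_ip_country": ["GB", "US"],
    "device_id": ["dev-a", "dev-b"],
})
transactions = pd.DataFrame({
    "customer_id": [101, 101, 102],
    "amount": [250.0, 9800.0, 40.0],
})

# Aggregate transactions, then join onto the onboarding silo to form a
# single holistic profile that financial crime analytics can consume.
activity = (transactions.groupby("customer_id")["amount"]
            .agg(["count", "sum"])
            .reset_index())
profile = onboarding.merge(activity, on="customer_id", how="left")
print(profile)
```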
Enabling trust and transparency will be key in establishing an AI/ML governance and operations program. The high-level principles outlined by the Wolfsberg Group are a good foundation, but the challenge is how to pragmatically develop a program focused on fair, effective, and explainable outcomes while remaining responsive to emerging financial crime risks.
Discover the Featurespace approach to Model Governance for Anti-Money Laundering