Who’s Watching the Watchers?

  • Giles Cuthbert
  • 30 November 2020
  • Blog | Fintech and Innovation in Banking

This is the fourth and final commentary in which I’ve been looking at the guides Explaining Decisions made with AI, published by the Information Commissioner’s Office (ICO) and the Alan Turing Institute.

All that has gone before brings us to the important issue tackled by the final guide: What explaining AI means for your organisation.

Although this guide identifies its target readership as those in senior positions, it offers something of broader value to the wider readership – more specifically, on what it means for you as a professional working alongside AI systems. As the guide states, “anyone involved in the decision-making pipeline has a role to play in contributing to an explanation of a decision supported by an AI model’s result. This includes what we have called the AI development team, as well as those responsible for how decision-making is governed in your organisation.” As I’ve mentioned throughout my review of these guides – that ‘anyone’ could be you.

Let’s start at the beginning, which provides a useful checklist outlining what the end-to-end chain of responsibility should look like: the roles involved, and the types of policies, procedures and associated documentation expected. For those more technically minded, I would encourage you to review all the guidance. For those of you still needing to be convinced that this is something you need to care about, let me focus on what the guide says about people and their roles.

I’m going to quote directly from the guide, as it puts this so clearly:

“The roles discussed range from those involved in the initial decision to use an AI system to solve a problem and the teams building the system, to those using the output of the system to inform the final decision and those who govern how decision-making is done in your organisation.”

Almost immediately, the word ‘accountability’ springs to my mind. And, as soon as I think about accountability within the context of financial services, I think of the UK Financial Conduct Authority’s (FCA) Senior Managers and Certification Regime (SMCR). Whether a bank outsources its AI systems or develops them in-house, someone within the bank is directly accountable for them and for explaining the AI decisions made. However, please do remember that the SMCR is part of the Accountability Regime, which requires almost all those working in banks [and now financial services] to adhere to the Conduct Rules – more on this later.

First on the guide’s list of key players is the Product Manager, the person who defines the product requirements for the AI system and determines how it should be managed throughout the AI system’s lifecycle. Whilst they are responsible for ensuring it is properly maintained, and that improvements are made where relevant, they may not ultimately be the accountable person (Senior Manager) in the eyes of a regulator – but that Senior Manager will be heavily reliant on their expertise.

Next is the ‘Implementer’. Where there is a ‘human in the loop’ (i.e. the decision is not fully automated), the implementer relies on the AI system to supplement or complete a task in their everyday role. Implementers need appropriate training – without it, it is very unlikely they will have the skills and knowledge to use the system as intended. If you are the implementer, then you still have a responsibility, as mentioned in my previous article, to make sure that you get the training that experts in this field deem necessary. And if that doesn’t focus the mind, FCA Conduct Rules 1 [act with integrity], 2 [act with due care, skill and diligence] and 4 [fair treatment] in particular apply here. [I should, of course, highlight to members reading that these are also embedded within our Chartered Banker Code of Conduct.]

I’ll skip the overview of the compliance team to bring us to the role of Senior Management, described in the guide as “the team with overall responsibility for ensuring the AI system that is developed and used within your organisation, or you procure from a third party, is appropriately explainable to the decision recipient”. Clearly those in a Senior Manager Function, and by extension those responsible through a role as a Board member, must be able to explain and justify the use of AI systems. This in turn requires them to know where AI is being implemented across the business. And here lies a significant issue, and risk! In its report Overseeing AI: Governing artificial intelligence in banking, The Economist Intelligence Unit noted that “…ensuring the right level of explainability is arguably banks’ toughest AI challenge.”

The same report highlighted the lack of skills at senior management and board level, and the importance of ensuring that overall accountability lies with a human [and not a machine]. The two are, of course, not unrelated. And recently, at the first meeting of the Artificial Intelligence Public-Private Forum, co-chaired by the Bank of England and the FCA, similar concerns were identified with regard to a lack of skills and buy-in from both senior managers and business areas, not to mention a lack of data science skills at board level.

The use of AI within our sector is happening, and increasingly so. A modern banker may not always be an expert, but I hope that by raising awareness of these guides, you can see that it’s worth taking an interest: perhaps to ensure that you’ve had the right training, to check if policies are in place, or maybe to put yourself in the customers’ shoes and ensure that the explanation is reasonable and clear.

I am grateful to my colleague Shona Matthews, Head of Regulation and Policy at the Institute, for her regulatory insight, which has contributed to this article.