Who’s watching the robots?
Responsibility lies at the heart of ethics. When we are responsible for something, we are blameworthy and can be held to account. To be responsible, we must be aware of the situation we find ourselves in, understand it fully, and have sufficient control to shape the outcomes. This seems simple and straightforward – if we are in control of a situation we understand, then we can be held responsible for the outcomes.
However, this quickly becomes more challenging in complex organisations or societies. It may not be clear what the situation is – I may work with all the known data around a given position, but, of course, that may not be the full picture. Alternatively, I may control only one aspect of a situation, at which point it can be difficult to ascertain whether I have sufficient command of my position to be held responsible. Indeed, it may well be that responsibility exists through complicity – each of us may have only one small impact on a given outcome, yet all of us are responsible through complicity. This leads to a challenging question: if I cannot know the outcome I am creating, or contributing to, how can I be responsible and therefore blameworthy?
It is easy to see, therefore, how positions emerge where responsibility is hard to identify in day-to-day life. It is far more challenging in the digital world. We rightly see taking responsibility as a key moral decision, but when it comes to responsibility for decisions taken by AI, we see a whole network of individuals involved, plus the further complexity of the sometimes unpredictable interactions of algorithms with data sets. If an unethical, perhaps biased, outcome occurs, who is responsible? Is it the programmer? The manager of that area? Neither can easily be fully aware of all the possible outcomes of a given scenario, or even the inputs, so, in terms of the view of responsibility I gave above, neither can be seen as responsible. It is not tenable to suggest that no one is responsible for a given area or decision; likewise, it is untenable to suggest that someone who is not in control of a situation can be held responsible. So either we have to redefine responsibility, or we have to reconsider where responsibilities lie in organisations.
Of course, I don’t have the answer to this, but it is one of the key factors we must consider when looking at digital ethics.
As we move into the world of machine learning, it will become quite impossible for us to know exactly what is going on inside those black boxes, at which point we need to be very clear who is responsible for outcomes. What I do know is that it must not be, and arguably cannot be, an amoral robot.
For its part, the Chartered Banker Institute is keen to ensure customers are not viewed as mere ‘data collection points’ during the digital transformation of banking. To that end, the Institute has developed the Advanced Diploma in Banking and Leadership in a Digital Age to prepare future generations of bankers who, in addition to learning about technology and data, will be educated in the core banking skills of credit, risk, regulation, banking operations and professional ethics.
It’s essential the industry doesn’t focus solely on the digital, the data and the technology, which should underpin, not determine, the service that is banking. The challenge is educating future generations of bankers at all levels, from service officers to chief executives, to understand what the deployed technology is, how it works, how it arrives at the outcomes it does and what inputs it needs, and to be able to demonstrate that those outcomes are genuinely in customers’ best interests. In essence, we want to ensure the industry avoids the bias and other ethical issues that can creep into technology, and reconciles the advancement of technology with the restoration of trust.