The ethics of AI

  • 17 September 2021
  • Blog | Fintech and Innovation in Banking | Thought Leadership Insights

As the use of artificial intelligence and machine learning by financial services firms increases, regulators and banks are getting to grips with complex questions around data bias, transparency and responsibility. While the legal framework is still in development, organisations today nevertheless have ethical obligations to meet around the quality of their data, transparent decision-making and accountability. 

Artificial intelligence (AI) and machine learning (ML) are not new technologies for the financial services industry, but their growing adoption has prompted complex questions around data oversight and accountability. In a highly regulated industry, transparency, responsibility and liability in decision-making and data processing are paramount, but the functioning of AI and ML algorithms can be opaque, making the question of ultimate responsibility an open one. 

Nevertheless, the adoption of AI in banking services, from customer service and marketing to asset management, portfolio management, treasury and securities trading, continues apace. A 2019 survey from the Financial Conduct Authority (FCA) and the Bank of England (BoE) found that two-thirds of respondents in the UK were already using ML in some form and expected to double that use within the next two years. 

“In many cases, ML development has passed the initial phase, and is entering more mature stages of deployment,” according to the report. 

“From front-office to back-office, ML is now used across a range of business areas. ML is most commonly used in anti-money laundering (AML) and fraud detection as well as in customer-facing applications (e.g. customer services and marketing),” it states. “Some firms also use ML in areas such as credit risk management, trade pricing and execution, as well as general insurance pricing and underwriting.” 

In areas such as fraud detection and AML, AI is already augmenting or even replacing human decision-makers, because the sheer volume of data that needs to be processed makes AI systems indispensable: human processing alone would simply be too slow and inaccurate. As big data continues to expand and AI’s capabilities continue to develop, the use of AI in financial services is likely to grow further. 

Artificial intelligence: 

The study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognise pictures, solve problems and learn. 

Machine learning: 

The process of computers changing the way they carry out tasks by learning from new data, without a human being needing to give instructions in the form of a program. 

Source: Cambridge Dictionary 

“Responsibility is one of the core issues of AI ethics. The question of who is responsible for any piece of tech at the point of its use becomes very complex within organisations.” 

Giles Cuthbert, Chartered Banker Institute 

Challenges for AI in financial services 

The ethics of using AI and ML algorithms is coming under scrutiny in every sector, with three main areas of concern: accountability, transparency and data bias. These issues are particularly thorny in financial services, a highly regulated industry with a significant impact on people’s lives. 

“Responsibility is one of the core issues of AI ethics,” says Giles Cuthbert, Managing Director, Chartered Banker Institute. “The question of who is responsible for any piece of tech at the point of its use becomes very complex within organisations.” 

Financial institutions, for the most part, cannot pass on regulatory accountability to service providers. When they contract with a technology provider for an AI algorithm or an AI-based project, they need to ensure that they have oversight and can supervise what the tech firm is providing so they can comply with regulatory requirements. But as technology becomes more complex, this type of oversight becomes more difficult to obtain. 

Luke Scanlon, Head of Fintech Propositions, Pinsent Masons, explains: “There is a level of investigation that’s necessary and then the financial institution needs to get assurance through its contracts, which can be a difficult thing because, in many cases, we’re talking about a genuine collaboration. It involves the financial institution providing the data or labels for the data, and then the technology firm providing the algorithms of the system.” 

Tackling the opacity of complex systems 

The difficulty of interrogating the inner workings of an AI algorithm also matters for transparency towards the customer. Where an algorithm is deciding whether to grant a loan, or which assets to buy for a portfolio, customers want to know, first, whether a machine is making the decision and, second, how it arrived at that choice. 

“The regulation and guidance is still developing, but all of the discussion is that it won’t be acceptable just to say, ‘The computer says no.’ That’s going to be unacceptable,” says Scanlon. 

“There’s a lot of thought around traceability of decisions, being able to trace that decision from where it came from and whether all the right steps were taken along the way. That’s the evidence that the financial institution should have in place, in terms of what they have to disclose. 

“There is a level of commercial sensitivity – so banks and technology providers are not likely to be required to share source code or other commercially sensitive details. But there has to be a balance between protecting commercial interests and disclosing how decisions are made.” 
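Scanlon’s point about traceability can be pictured as an append-only record of what went into each automated decision. The snippet below is a minimal sketch under assumed names: the fields, the hashing of inputs and the model-version label are illustrative choices for this article, not a regulatory requirement or any firm’s actual schema.

```python
# Minimal sketch of a decision audit record supporting traceability.
# Field names, the input hash and the model-version label are illustrative
# assumptions, not any bank's production schema.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    application_id: str
    model_version: str
    input_hash: str   # fingerprint of the exact data the model saw
    outcome: str
    reasons: list     # top factors that could be disclosed to the customer
    timestamp: str

def record_decision(application_id: str, model_version: str,
                    inputs: dict, outcome: str, reasons: list) -> DecisionRecord:
    """Create a record from which the decision can later be reconstructed and evidenced."""
    input_hash = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(application_id, model_version, input_hash,
                          outcome, reasons, datetime.now(timezone.utc).isoformat())

# Illustrative usage with invented values.
rec = record_decision("A-123", "credit-model-1.4",
                      {"income": 26000, "credit_score": 600}, "decline",
                      ["credit_score", "existing_debt"])
print(json.dumps(asdict(rec), indent=2))
```

Kept alongside the model version and the lineage of the training data, records like this are the kind of evidence a firm could draw on when asked how a particular decision was reached. 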

That balance can be achieved by having professional bankers working closely with the AI systems, according to Cuthbert. 

“We will always need professionals who understand the central offerings of their financial organisation, who are trained in those areas, and who can identify flaws. And these skilled, trained and educated professionals should ideally be working alongside IT departments or tech providers. These technological functions are not divorced from the bank’s central functions, so we need people and technology to continue working very closely together.” 

The quality of the data 

In the early days of tech development, there was an idea that AI could be inherently fair, as machines couldn’t have an opinion about people. But time and again, AI algorithms have taken on the biases inherent in the data they learn from. In 2019, tech entrepreneur David Heinemeier Hansson went public on Twitter with the results when both he and his wife applied for the Apple Card. He alleged that the company’s algorithm gave him a credit limit 20 times higher than his wife’s, despite the fact that she had a higher credit score and they filed joint tax returns. The New York State Department of Financial Services subsequently opened an investigation into the credit card practices of Goldman Sachs, which partnered with Apple on the card. 

Data bias can result in AI algorithms simply coming to the wrong conclusions, or it can be more damaging, such as when the AI reinforces gender, racial or other discrimination. To ensure that data is usable, financial institutions need to test the quality of their datasets and make sure they are representative of the population the firm serves, rather than a limited group of samples. 

“Banks need to analyse any potential issues with the quality of their data and take steps to ensure that those risks are mitigated,” says Scanlon. “It’s not simply about data protection, GDPR [General Data Protection Regulation] and the usual processes. The controls need to go beyond that to meet what regulators are growing to expect of financial institutions.” 

The dataset is important, but there are a number of ways in which bias can be built into an AI system. If the objective of the system is framed incorrectly, this can also drive biased behaviour. For example, an AI system that assesses the creditworthiness of customers, but whose objective is to grow profits, may become predatory and unethical in its selections. The machine itself neither understands nor cares whether its outcomes are considered unethical or prejudiced, unless those considerations are framed in its programming. 
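The kind of checking described above can start very simply: compare how each group is represented in the data, and how decisions fall across those groups. The snippet below is a minimal sketch on invented data; the column names (“group”, “approved”) and reference shares are assumptions for illustration, not any bank’s real schema, and a gap in the numbers is a prompt for human investigation rather than proof of bias.

```python
# Minimal sketch of two simple bias checks on a hypothetical credit dataset.
# Column names and reference shares are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, reference: dict) -> pd.DataFrame:
    """Compare each group's share of the dataset with its share of the target population."""
    observed = df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({"observed_share": observed,
                           "reference_share": pd.Series(reference)})
    report["gap"] = report["observed_share"] - report["reference_share"]
    return report

def approval_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group; large differences warrant review of the model and its data."""
    return df.groupby(group_col)[outcome_col].mean()

# Invented example data.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   0],
})
print(representation_report(data, "group", {"A": 0.5, "B": 0.5}))
print(approval_rates(data, "group", "approved"))
```

Real bias testing goes much further – proxy variables, intersectional groups and outcome measures beyond approval rates – but even checks this basic make the data quality question concrete. 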

Responsibility, liability and accountability 

Who, then, is ultimately liable when an AI algorithm goes wrong? This is a question that is perplexing regulators, firms and technology providers alike. The Institute’s Cuthbert uses the example of the driverless car to show the complexity of accountability. Many people expect that, in the case of collisions caused by driverless cars, the driver will not be at fault; instead, the buck will stop with the manufacturer of the vehicle. 

“Banks need to analyse any potential issues with the quality of their data and take steps to ensure that those risks are mitigated.” 

Luke Scanlon, Pinsent Masons 

“If I were driving a normal car and it was a manufacturing fault that caused it to crash into a wall, I would expect the manufacturer to be liable,” says Cuthbert. “But the critical point is, they’d be liable to me. I would have a liability to the owner of the wall, but the manufacturer may not; it is about causal chains.” 

He believes that banks and technology providers are in causal chains of liability, where the client may claim against the bank, and the bank, in turn, may have a case against the technology provider. However, even that chain can quickly become more complex. 

“In the end, it becomes about responsibility, but that’s not always clear. You may think, as a customer of the bank, the bank’s responsibility is clear. But then we start looking at things like Open Banking, where we may have used another channel to authorise a bank transaction and suddenly we have created this mesh of responsibility that becomes hard to disentangle,” he says. 

At present, it’s difficult to locate a supplier who’s willing to take on responsibility, according to Pinsent Masons’ Scanlon. 

“There are key points here around getting the supplier to stand behind all aspects of the system and its technology. But then we have to look at what the financial institution is doing with the technology, the data it’s using and the decisions it’s making. It’s difficult to get a provider who’s going to accept liability for that and there’s also still a question about whether they should or shouldn’t,” he says. 

How banks manage AI risks 

AI doesn’t have to be a black box, impervious to analysis and accountability. But the systems for oversight are still evolving. So-called Explainable AI (XAI) covers systems and tools designed to increase the transparency of the AI process for humans. While this technology is still in its infancy, stakeholders from the Defense Advanced Research Projects Agency (DARPA) to Google are involved in research and development in this space. 
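Formal XAI tooling aside, the underlying idea can be illustrated with an inherently interpretable model whose individual decisions decompose into per-feature contributions. The sketch below uses scikit-learn’s logistic regression on invented data; the feature names and figures are assumptions for illustration, not a description of any production system, and dedicated techniques such as SHAP or LIME go well beyond this.

```python
# Minimal sketch of "reason codes" from an interpretable model (logistic regression).
# Feature names and training data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "credit_score", "existing_debt"]

# Toy training data: six applicants with approve (1) / decline (0) labels.
X = np.array([[55000, 720,  5000],
              [24000, 580, 12000],
              [80000, 690, 20000],
              [30000, 610,  3000],
              [95000, 760,  1000],
              [28000, 540, 15000]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant: np.ndarray) -> list:
    """Per-feature contribution to the decision score for one applicant, largest first."""
    contributions = model.coef_[0] * scaler.transform(applicant.reshape(1, -1))[0]
    return sorted(zip(feature_names, contributions), key=lambda c: abs(c[1]), reverse=True)

# The largest contributions are candidate "reasons" to disclose alongside the decision.
print(explain(np.array([26000, 600, 14000], dtype=float)))
```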

Today, firms use a number of mechanisms to mitigate risks in AI and ML, the most common of which are alert systems and human-in-the-loop processes. Alert systems are designed to flag unusual or unexpected actions to employees for further analysis. With human-in-the-loop processes, decisions made by the AI are executed only after they have been approved by a human. Both of these systems have limitations, but they are key tools for risk mitigation. 
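The article does not describe any particular implementation, but as a rough sketch, a combined alert and human-in-the-loop control can be as simple as routing low-confidence or unusual model outputs to a review queue instead of executing them automatically. The thresholds and field names below are assumptions for illustration.

```python
# Minimal sketch of alert and human-in-the-loop routing around a model decision.
# Thresholds, labels and the notion of "confidence" are illustrative assumptions,
# not a description of any production system.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    model_score: float    # model's probability of approval
    anomaly_score: float  # e.g. how far the case sits from the training distribution

def route(decision: Decision,
          approve_above: float = 0.9,
          decline_below: float = 0.1,
          anomaly_alert_above: float = 0.8) -> str:
    """Return how the decision should be handled."""
    if decision.anomaly_score > anomaly_alert_above:
        return "ALERT: unusual case flagged for investigation"
    if decision.model_score >= approve_above:
        return "auto-approve"
    if decision.model_score <= decline_below:
        return "auto-decline"
    # Everything in between is executed only after a human signs off.
    return "queue for human review"

print(route(Decision("A-123", model_score=0.55, anomaly_score=0.2)))  # human review
print(route(Decision("A-124", model_score=0.95, anomaly_score=0.9)))  # alert
```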

The FCA’s Senior Managers and Certification Regime focuses attention on individual accountability, so through this lens the senior managers responsible for data and technology would become accountable for the projects and technologies they adopt. But the Institute’s Cuthbert believes that individual responsibility may be hard to identify in AI systems. 

“The difficulty that really comes home to roost is that we are bringing in external expertise because these individuals can do things and understand things which are outside our area of knowledge. So there’s almost an inherent contradiction in saying that we need to be able to, or ever can, understand what those people do,” he says. 

“The institution must make best efforts to ensure that staff are qualified. But I think when we look at responsibility maps, when we look at the Senior Managers Regime, it’s all about personal responsibility. 

“But in AI, I think distributed responsibility would be more appropriate, where we can assess how responsibility for certain things can be distributed. 

“We start to get into some really deep, interesting questions about the nature of our companies and the nature of organisations. Do we need to be structuring ourselves differently to ensure that responsibility is dealt with correctly and fairly, rather than looking to the traditions of the past – our own personal responsibility? Maybe that needs to change.” 

The regulatory response 

Regulators are exploring how to respond to the adoption of AI and ML in financial services, but so far, only best practice and guidance has been forthcoming. The FCA and the BoE launched the Artificial Intelligence Public-Private Forum (AIPPF) in 2020 to help better understand the impact of AI and ML on financial services. At its latest meeting in February 2021, Dave Ramsden, Deputy Governor for Markets and Banking, BoE, said that both the Bank and the FCA were proceeding with their consultation on the transformation of data collection over the next decade. 

In a recent letter to CEOs, the bodies called on companies to work with them in partnership to tackle the growing challenges in data collection, including how to ensure data quality and remove legacy data, processes and technologies in order to streamline reporting. They propose tackling these challenges through a work programme that delivers on three critical areas of reform: 

  • Integrating reporting – increasing consistency in designing and delivering collections for value, reuse and efficiency 

  • Modernising reporting instructions – rethinking the way reporting requirements are designed and expressed, and improving the ease with which they can be interpreted and implemented by firms 

  • Defining and adopting common data standards – standardising how financial data is described, defined and sourced at an operational level within firms. 

Meanwhile, the AIPPF continues to hold discussions on data, including its quality, standards and regulation, model risk management, and AI governance. 

“We want consumers to benefit from digital innovation, and competition. This includes data-based and algorithmic innovation,” said the FCA at the time of the AIPPF launch. 

“Consumers need to have the confidence that they are getting fair access, price and quality, and that firms act in their best interest. Where decisions are taken by financial services firms using data-based or algorithmic methods, we need to make sure those decisions are transparent, fair and secure, and that the data is used ethically. We also need to understand the impact firms’ decisions can have on different consumer groups, such as the most vulnerable. 

“The opportunities for innovation also apply to the way we regulate and engage externally, and we will continue work in this area to understand how these advances in technology may be used for our regulatory purposes.” 

How digitisation is driving AI adoption 

While regulations are still in development, the pace of technological change means that financial institutions today can’t afford to wait. 

“We can’t just sit back and wait from a legal or regulatory perspective, because in the history of technological change, the legal framework has tended to lag. Particularly coming out of the pandemic, with more reliance on digital engagement and automation, there are obviously cost savings and efficiencies in using AI for better decision-making,” says Pinsent Masons’ Scanlon. “These decisions have to be made on a commercial basis.” 

For Cuthbert, this is exactly where professional bankers come in. There needs to be a balance of the ethical and professional with the commercial, and professional bankers fill that gap. They are there to protect and champion customers, while adopting the technologies that will give them access to the best services. 

Recommended reading: the ICO [Information Commissioner's Office] and The Alan Turing Institute have published guidance to help explain the decisions made or supported by AI. Find out more at: https://bit.ly/3whvqFb