June 2019

Artificial Intelligence, or AI, brings the promise of valuable business intelligence from faster data analytics, but only if it can be trusted – making governance and data quality crucial components. David Gleason and Polina Evstifeeva review where controls and responsibility have got to so far, and where regulation and standards can help

While the concept of AI may conjure images of a futuristic world with self-aware and powerful machines outperforming humans across a wide range of disciplines, the reality is that AI is already here, whether it be advising on when to take a turn in the road or recommending the best new Netflix series to suit your taste. Yet, while its use today may be more mundane than those futuristic visions, its impact may be just as astounding – McKinsey predicts that AI will deliver additional economic output of around US$13trn by 2030 – a 1.2% yearly boost to global GDP.1

Across many industries, widespread adoption of AI has the potential to enhance operations and evolve business models. Turning to global transaction banking, AI can drive:

  • Greater payments processing efficiency;
  • More accurate prediction of issues within the physical and financial supply chain of trade transactions;
  • Increasingly efficient risk management (from assessing credit risk to identifying employees’ misbehaviour); and
  • More efficient anti-money laundering (AML) and know your customer (KYC) processes.

This should mean a better experience for banks and their clients: McKinsey estimates that “front runners” adopting AI will gain around 6% in additional annual net cash flow growth over the next 10 years. But while we may delegate certain operational processes to AI in banking, we cannot delegate responsibility for its trustworthy use. The use of AI must be secure, transparent and ethical. How AI works – how it treats data, and how the risks of unintentionally introduced bias and of data breaches are mitigated, for instance – is as important as what it produces in terms of usable information.

Some of these elements are already governed by broader regulations, such as those pertaining to data privacy. Yet assessing the specific risks associated with AI, its control principles and the question of ethics are hot topics for regulators globally. Crucially, AI, as with most technologies, is not limited by borders. It is important therefore that initiatives seeking to monitor and improve its uses and outputs – whether led by industry or regulators – adopt a similarly global approach.

"The use of AI must be secure, transparent and ethical"

Unintentional bias

The power of AI lies in its ability to imitate, and then enhance, human decision-making. The danger is that humans often hold unrecognised prejudices – prejudices that the use of AI can amplify. Such prejudice can be introduced at each stage of AI development (see Figure 1), from model development to data preparation to model training.

An AI model can also be made biased by errors in data entry. A training data set not only needs to be broad enough to be complete and representative, but must also avoid data that will lead the model to make incorrect assumptions. Otherwise there is a risk of ‘overfitting’, in which the model unintentionally recreates biases and discrimination present in past data, to the point where performance suffers.

US$13trn

Predicted additional global economic output AI will deliver by 2030
(McKinsey)

One example: imagine a data set comprising 30,000 loans, 20,000 of which performed as expected and 10,000 of which defaulted. If, when selecting our training data, we pick an unrepresentative number of defaults from a certain region, we may unwittingly teach the model that geography is a significant driver of defaults. This may create unwanted prejudice and prevent individuals or companies from certain regions from successfully obtaining loans in the future.
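For the technically minded, a minimal sketch of how this can happen is below. The figures, feature names and library choices are illustrative assumptions only: default risk in this toy data depends solely on a debt-to-income ratio, yet once defaults are sampled unevenly across two regions, a simple model learns to treat region as predictive.

```python
# Hypothetical sketch: uneven sampling of defaults by region makes a model
# treat geography as predictive, even though it plays no role in reality.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 30_000

# True population: default risk depends only on debt-to-income ratio;
# regions 0 and 1 are otherwise identical.
region = rng.integers(0, 2, n)
debt_to_income = rng.uniform(0.1, 0.6, n)
p_default = 1 / (1 + np.exp(-(6 * debt_to_income - 2.8)))   # roughly 1 in 3 defaults
loans = pd.DataFrame({
    "region": region,
    "debt_to_income": debt_to_income,
    "default": rng.binomial(1, p_default),
})

# Biased selection: keep every default from region 1, but only a fifth of
# the defaults from region 0 (e.g. because collection there was patchier).
defaults = loans[loans["default"] == 1]
performing = loans[loans["default"] == 0]
train = pd.concat([
    performing,
    defaults[defaults["region"] == 1],
    defaults[defaults["region"] == 0].sample(frac=0.2, random_state=0),
])

model = LogisticRegression()
model.fit(train[["region", "debt_to_income"]], train["default"])

# The learned weight on 'region' is now strongly positive, even though
# region had no effect in the true default process.
print(dict(zip(["region", "debt_to_income"], model.coef_[0])))
```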

Further, as we test the model and assess the results, the data will often require tweaking – another stage at which we may introduce bias. For instance, if these were personal loans, we might start by organising the individuals in our data set into age bands. We might then decide to add a category for work experience. Unwittingly, because work experience and age are highly correlated, we have made the model more sensitive to age than it should be. There are many similar ways to accidentally overfit an AI model by feeding it highly correlated variables, as the sketch below illustrates.
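A minimal sketch of that effect, again using hypothetical numbers: in the toy data below, outcomes depend only on work experience, yet once an almost-collinear age feature is added, the weight the model places on age becomes unstable and can swing well away from zero from one sample to the next.

```python
# Hypothetical sketch: adding a feature that is almost collinear with age
# (work experience) makes the model's weight on age unstable, even though
# age has no real effect in this toy data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def fit_one_sample(n=200):
    age = rng.uniform(22, 65, n)
    work_experience = age - 22 + rng.normal(0, 0.5, n)    # ~0.999 correlated with age
    score = 2.0 * work_experience + rng.normal(0, 5, n)   # depends on experience only
    X = np.column_stack([age, work_experience])
    return LinearRegression().fit(X, score).coef_

# Across samples, the age weight swings noticeably positive and negative,
# even though its true effect is zero.
for _ in range(5):
    age_coef, exp_coef = fit_one_sample()
    print(f"age: {age_coef:+.2f}   experience: {exp_coef:+.2f}")
```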

Checks and balances

Many banks are moving away from expensive proprietary technology development towards the use of applications available as a service through the cloud. Many of these applications will include embedded elements of AI – indeed, the CEO of Oracle, Mark Hurd, predicts that AI will be included in every cloud application by 2025 – sometimes perhaps even unbeknown to the end-user.2

This proliferation of reusable, open-source AI model frameworks, many embedded within applications, reduces the need for highly technical, ground-up model development. This lowers the barrier to entry for all market participants, including those who may not be particularly aware of AI and its risks.

As best practice, AI outputs should always be assessed against experience and expertise, as well as continuously monitored – what starts out performing well can degrade as real-world inputs evolve. While the ‘four eyes principle’ – which requires two individuals to approve an action before it is undertaken – may come to be seen as antiquated, two eyes will likely still be required, at least in the short term. If we blindly follow the results of AI, we are abdicating our obligations to exercise (human) care and due diligence over our business processes.
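As a rough illustration of what such continuous monitoring could look like – the metric, window size, threshold and escalation path here are all hypothetical choices, not a prescribed control – recent outcomes can be tracked against the accuracy measured at sign-off, with decisions routed back to a human reviewer once performance drifts too far:

```python
# Hypothetical sketch: escalate to human review when rolling accuracy drops
# more than an agreed margin below the accuracy measured at model sign-off.
from collections import deque

class MonitoredModel:
    def __init__(self, model, baseline_accuracy, window=500, max_drop=0.05):
        self.model = model                  # any object exposing .predict()
        self.baseline = baseline_accuracy   # accuracy agreed at sign-off
        self.recent = deque(maxlen=window)  # rolling record of hits/misses
        self.max_drop = max_drop

    def record_outcome(self, prediction, actual):
        """Call once the true outcome of a past prediction becomes known."""
        self.recent.append(prediction == actual)

    def needs_human_review(self):
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough evidence yet
        rolling_accuracy = sum(self.recent) / len(self.recent)
        return rolling_accuracy < self.baseline - self.max_drop

    def decide(self, features):
        prediction = self.model.predict([features])[0]
        route = "escalate to human reviewer" if self.needs_human_review() else "auto-process"
        return prediction, route
```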

"If we blindly follow the results of AI, we are abdicating our obligations to exercise (human) care and due diligence over our business processes"

That said, it is right and proper that the users of AI systems are ultimately responsible for their use and output, irrespective of the fact that they may outsource some of the development. A good analogy here is the aviation industry. An airline operator will have a long and complex supply chain, with different manufacturers responsible for the engine, fuselage, landing gear and wings. But at the end of the day, each airline is responsible for monitoring, inspecting and testing its airplanes. After all, only the airline knows that a plane recently flew in bad conditions, struck an object or is showing unusual performance indicators. AI in banking is no different: only the bank knows for what purpose a model is being used, the conditions it operates under and how it is performing. It is therefore reasonable that banks assume full responsibility.

Existing regulation pertaining to AI

AI lives and dies by the quality and volume of data fed into the system during training. Quite simply, larger and more diverse data sets mean better results. This in itself is not a problem. What is concerning is that this need for huge amounts of data may increase the appetite for its collection – collection which may, in some cases, be carried out in ways that customers or businesses do not understand or anticipate. Press reports have covered a steady stream of smartphone applications that harvest data in unexpected ways (geolocation data, screen captures, camera and sensor data, etc.), driven in many cases by this insatiable demand for data.3

"Larger and more diverse data sets mean better results"

Of course, AI already falls under current data protection regulations (with GDPR being one example). But regulators across various jurisdictions have recently taken a particular interest in identifying potential risks and impacts specifically brought about by the increased use of AI across a vast array of industries, including financial services. This includes consideration of the ethical use of AI, as well as the design, governance and supervisory implications from its use.

Ethical and robust

In particular, the European Commission’s High-Level Expert Group on AI (EC HLEG)4 has been mandated to produce Ethics Guidelines for Trustworthy AI5 (published in April 2019) and AI policy and investment recommendations (expected in May 2019). Regulators in Singapore are heading in the same direction,6 setting out frameworks and guidelines to create an environment that supports and expands the adoption of AI and data analytics.

While the EC HLEG guidelines are voluntary, we can expect them to inform the group’s policy and regulatory recommendations to the EC as part of its broader work. Moreover, because the guidelines are sector-agnostic, a tailored approach will be required depending on the context in which AI is applied.

The EC HLEG suggests that trustworthy AI should be underpinned by three components – it should be:

  • Lawful;
  • Ethical; and
  • Robust.

The EC HLEG notes that each element “is necessary but not sufficient” to achieve trustworthy AI, and that, ideally, all three components should work in harmony. While the guidelines do not explicitly deal with the lawful component, they do offer guidance on fostering and securing ethical and robust AI. But how do we define ethical purpose? A good starting point is the work conducted by the European Group on Ethics in Science and New Technologies (EGE),7 which proposed a set of nine basic principles for ethical frameworks for AI, based on the fundamental values laid down in the EU Treaties and in the EU Charter of Fundamental Rights. More recently, the AI4People project has surveyed the aforementioned EGE principles, as well as 36 other ethical principles put forward to date, and subsumed them under five overarching principles:

  • Beneficence − ‘do good’;
  • Autonomy − ‘respect for self-determination and choice of individuals’;
  • Justice − ‘fair and equitable treatment for all’;
  • Non-maleficence − ‘do no harm’; and
  • Explicability – ‘intelligibility and accountability’.

When translated into concrete requirements, the EC HLEG suggests that the development, deployment and use of AI systems should meet seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal wellbeing; and accountability. The EC HLEG also provides a number of technical and non-technical methods by which these principles can be hard-wired into processes, and by which trustworthy AI can be achieved at all levels of the development process – from the design stage through to deployment (see Figure 2).

Of course, such lists will never be exhaustive, and they will need to be adapted to the specific use case in which the system is deployed. Achieving trustworthy AI should therefore be a continuous process of identifying requirements, evaluating solutions and ensuring improved outcomes throughout the entire lifecycle of the AI system.

Trust and thrive

"Without trust – both from the data owners and the AI users – adoption of AI by financial institutions will be slow"

Without trust – both from the data owners and the AI users – adoption of AI by financial institutions will be slow. Ensuring the ethical, secure and robust use of AI is therefore imperative if the technology is to thrive. The work of the EC HLEG provides clarity on regulators’ understanding of trustworthy AI in this respect, and is a step in the right direction. Its industry-agnostic approach promises a level playing field for the technology’s advancement across all industries – important in a world where industries are becoming increasingly interconnected. Yet, at present, jurisdictions take different regulatory approaches (or, in some cases, no approach at all) to the development and use of AI. This is suboptimal, and raises the risk of different practices emerging region by region, with a key area of concern being regulatory arbitrage (companies moving AI operations to jurisdictions with lower standards).

Technology solutions are global in design and deployment. In turn, a common, internationally-recognised framework for the production, use and governance of AI is needed to mitigate some of the risks highlighted within this article.

With the availability of large pools of digital data, AI is gaining traction across numerous industries, with many use cases being developed. Financial institutions deal in lakes, not pools, so they have the chance to be at the forefront of this movement.

David Gleason is Chief Data Officer and Polina Evstifeeva is Head of Regulatory Strategy in GTB Digital, Global Transaction Banking at Deutsche Bank


Sources

1 Notes from the AI Frontier, McKinsey, September 2018. See https://mck.co/2W4tDlf at mckinsey.com
2 See https://bit.ly/2TCXPqY at oracle.com
3 See Forbes commenting on Facebook at https://bit.ly/2Hk3s6V, forbes.com
4 See https://bit.ly/2zewjTD at ec.europa.eu
5 See https://bit.ly/2IjQYf6 at ec.europa.eu
6 See https://bit.ly/2Hqc9gL at pdpc.gov.sg and https://bit.ly/2XGAxhs at www.mas.gov.sg
7 See https://bit.ly/2TKeJmI at ec.europa.eu
