The financial services industry is increasingly making use of advanced analytics and machine learning technology.
A recent survey by the World Economic Forum and the Cambridge Centre for Alternative Finance (CCAF) found that 77% of FinTechs and incumbent financial institutions anticipate that AI will possess high or very high overall importance to their business within two years.
While the benefits of this technology are clear, it’s important to be aware of all the implications of using machine-based decision-making. We need to be able to explain AI models and the choices they make. Doing so will ensure AI is being used for good.
Responsible use of AI
Financial institutions need to use the capabilities AI provides responsibly. Their customers need to understand how AI and big data analytics affect the services they receive, and how decisions about them are made, without feeling disadvantaged by automation.
In 2019, a Genpact study found that 54% of customers were comfortable with companies using AI to access personal data to improve their customer experience.
Reservations and uncertainties still exist, and it is the responsibility of banks and financial institutions to be transparent with customers about how and why AI is used.
There are five key challenges financial institutions face:
1. Explaining AI decisions
To build trust in AI technology, decisions made using AI need to be explainable to people.
Explainable AI (XAI) is an emerging field that seeks to address the lack of understanding about how AI systems make decisions. The discipline involves examining models to try to understand the steps that lead to a specific decision.
When AI developers are able to explain why and how certain decisions are made, they have more control. They can check for bias, ensure they are meeting regulatory requirements, and more easily improve system design. Most importantly, they give users confidence in the system.
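As a minimal illustration of this kind of examination, the decision of a simple linear scoring model can be decomposed into per-feature contributions, showing how much each input pushed the outcome up or down. The feature names and weights below are hypothetical, and real XAI tooling handles far more complex models; this is only a sketch of the idea.

```python
# Illustrative sketch: explaining a linear credit-scoring model's decision
# by decomposing the score into per-feature contributions.
# Feature names and weights are hypothetical, for demonstration only.

def explain_decision(weights, bias, applicant):
    """Return the model's score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in applicant.items()}
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"income": 0.4, "credit_history_years": 0.3, "debt_ratio": -0.6}
applicant = {"income": 1.2, "credit_history_years": 0.8, "debt_ratio": 0.5}

score, contributions = explain_decision(weights, bias=0.1, applicant=applicant)
# Each contribution shows whether a feature pushed the decision toward
# approval (positive) or rejection (negative).
```

A breakdown like this lets a developer, a regulator, or a customer see exactly which factors drove a decision, which is the starting point for auditing it.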
2. Removing bias
AI learns from the data we feed it. For this reason, it’s not uncommon for AI to reproduce racial and gender biases that exist within society.
In 2019, Goldman Sachs’ credit card practices were put under review after David Heinemeier Hansson, a Danish entrepreneur, tweeted that he received twenty times the credit limit his wife, Jamie Hansson, was granted for an Apple Card, despite her having a higher credit score.
If biases go unchecked, we risk creating a technological environment that bakes in discrimination. In order to build an inclusive society, it is vital that checks are in place to ensure AI is not replicating biases.
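One simple form such a check can take is comparing approval rates across demographic groups, a measure often called demographic parity. The sketch below, with hypothetical data and threshold, shows the idea; production fairness audits use richer metrics and larger samples.

```python
# Illustrative sketch: a basic fairness check comparing approval rates
# across groups (demographic parity). Data and threshold are hypothetical.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)  # A: 2/3 approved, B: 1/3 approved
# A gap above a chosen threshold (e.g. 0.2) would flag the model for review.
```

Running a check like this regularly against live decisions is one concrete way to ensure AI is not replicating bias unnoticed.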
3. Respecting privacy
AI is built on data. But much of that data is sensitive and private. Consumers need to know that their personal data is being used safely and securely, in a way that preserves each individual’s privacy.
Historically, privacy has not featured as a central concern when building AI models. As a result, many models rely on storing data in order to update and improve. This is not conducive to maintaining data privacy.
Fortunately, the latest AI algorithms do not necessarily need to rely on storing data. Once trained, they can perform functions based on real-time data without the need to preserve a backlog of sensitive information. When developers of AI technology take privacy concerns into account from the start, AI can do a much better job of protecting information.
4. Acting ethically
A recent study by NTT DATA Services and Oxford Economics found that business leaders vastly underestimate the ethical challenges presented by AI.
There are some obvious ways that AI might produce unethical outcomes in banking. Algorithms might start rejecting loan applications from women or minority groups. Or they might charge higher rates of insurance for people of BAME background.
It is vital that unethical behavior such as this is prevented. AI developers must fully understand the impact that data-driven predictions might have on people. And they must have measures in place to deal with ethical concerns as and when they arise.
5. Taking accountability
Making AI more transparent and explainable is one way to tackle bias and ethical concerns. But it’s also important that people take responsibility and are accountable for the decisions that AI makes.
The right frameworks need to be in place to provide guidance on AI use and allow for the monitoring of models for bias and discrimination. Taking proactive steps to recognize problems that may arise will ensure that AI can deliver the best possible results.
NTT DATA and ethical AI
NTT DATA is committed to using AI to help build a sustainable, diverse, inclusive and transparent human-centered society. We believe that, with the right checks in place, negative impacts can be mitigated and this technology can be used for the good of humanity.
We’ve developed AI guidelines that align with the United Nations’ Sustainable Development Goals. These guidelines ensure that AI solutions are modelled with sustainability, diversity, and inclusivity in mind.
Applying these guidelines to the financial industry gives organizations a robust framework for taking practical action and ensuring best practice with AI.
AI holds huge potential for the financial services industry. Those organizations that take a responsible approach to the deployment and scaling of AI technology are likely to see the biggest rewards going forward.
Get more information and download our whitepaper, A business leader's guide to AI governance in finance.