How to improve the banking user experience with Artificial Intelligence

1.- What is Artificial Intelligence?
Artificial Intelligence (AI) is a discipline that makes it possible to automate decision-making and the execution of actions through the use of computer systems and the application of mathematical techniques to large volumes of data.
AI can be used to carry out different types of analysis: descriptive, which seeks to explain what has happened (the most common); predictive, which seeks to anticipate what will happen; and prescriptive, which recommends what to do to achieve a specific goal.
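The three types of analysis can be illustrated with a minimal sketch. The monthly spending figures, the naive trend forecast, and the spending-cap goal below are invented for illustration only; real analytical systems are far more sophisticated.

```python
from statistics import mean

# Hypothetical monthly card spending for one customer (illustrative data).
monthly_spend = [820, 790, 850, 910, 880, 940]

# Descriptive: explain what has happened (the most common kind of analysis).
average_spend = mean(monthly_spend)

# Predictive: anticipate what will happen, here with a naive linear trend
# (average month-on-month change projected one month ahead).
deltas = [b - a for a, b in zip(monthly_spend, monthly_spend[1:])]
forecast_next_month = monthly_spend[-1] + mean(deltas)

# Prescriptive: recommend what to do to achieve a goal, here staying
# under an assumed spending cap.
spending_cap = 900
recommended_cut = max(0, forecast_next_month - spending_cap)

print(average_spend, forecast_next_month, recommended_cut)
```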
Today, AI is a common tool in everyday activities such as internet searches, facial recognition, social media recommendations, or route planning in navigation apps.
Some AI-based technologies have existed for decades, others are relatively new, and many, such as machine learning, are evolving rapidly thanks to greater data availability and technological improvements that make it possible to manage large volumes and varied sources of data.
AI technologies have already demonstrated a strong ability to detect patterns, although they cannot yet learn associatively like humans, who can analyse from a broad perspective and connect all their knowledge. The future challenge for AI is to understand and respond to its environment at a deeper level. The ultimate goal is for these technologies to boost the technological and industrial capacity of economies, while always serving people and improving their well-being.
2.- Artificial Intelligence in the banking sector
One of the sectors that has made the greatest use of advances in AI is banking. These technologies are improving the user experience, increasing the efficiency of internal processes, and strengthening security, among other benefits.
Improvements in customer services
AI has the potential to change, in the coming years, the way users interact with financial services, increasing convenience. With advances in natural language processing, sentiment analysis, and machine learning, it will be possible to hold advanced chat- or voice-based conversations with customers and respond to highly complex queries. Perhaps more importantly, systems will be able to learn from their own interactions or from those of other agents performing a specific task. This will make it possible to automate part of the work of customer service departments, giving customers greater convenience and agility when engaging with their bank.
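At its very simplest, routing a customer query means mapping free text to an intent. The keyword-matching sketch below is a minimal precursor of the natural-language systems described above; the intent names and keyword sets are illustrative assumptions, not any real bank's taxonomy.

```python
import re

# Hypothetical intents and keywords (illustrative only).
INTENTS = {
    "card_block": {"stolen", "lost", "block", "card"},
    "balance_query": {"balance", "account", "much"},
    "loan_info": {"loan", "mortgage", "interest", "rate"},
}

def classify(message: str) -> str:
    """Return the intent whose keyword set overlaps most with the message,
    or hand off to a human agent when nothing matches."""
    words = set(re.sub(r"[^\w\s]", " ", message.lower()).split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "handoff_to_human"

print(classify("My card was stolen, please block it"))
print(classify("How much is my account balance?"))
```

Real conversational systems replace the keyword overlap with trained language models, but the routing structure is the same.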
At present, a large part of investment in AI models is being directed to marketing and product development departments. Thanks to AI, it is becoming possible to analyse customer behaviour and understand it in greater depth, with the aim of offering a better user experience and products tailored to customers’ expectations and needs, at the very moment they need them.
By way of example, robo-advisors, systems often based on AI and designed specifically to provide automated financial advice, offer a range of uses. Through websites or mobile apps, they can, for example, suggest the investment products best suited to our needs based on our profile, notify us of upcoming payments due, or analyse our spending patterns to show how we can increase our savings.
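The core of such product matching can be sketched as a filter over a product catalogue against a customer profile. The products, risk levels, and horizon thresholds below are invented assumptions; actual robo-advisors use regulated suitability questionnaires and richer models.

```python
# Hypothetical product catalogue (risk on an illustrative 1-5 scale).
PRODUCTS = [
    {"name": "Money market fund", "risk": 1, "min_horizon_years": 0},
    {"name": "Mixed bond fund", "risk": 3, "min_horizon_years": 3},
    {"name": "Global equity fund", "risk": 5, "min_horizon_years": 7},
]

def suitable_products(risk_tolerance: int, horizon_years: int) -> list[str]:
    """Keep only products within the customer's risk tolerance whose
    recommended investment horizon the customer can meet."""
    return [
        p["name"]
        for p in PRODUCTS
        if p["risk"] <= risk_tolerance and horizon_years >= p["min_horizon_years"]
    ]

print(suitable_products(risk_tolerance=3, horizon_years=5))
```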
Probably one of the most promising uses of artificial intelligence and data analysis is its application to risk assessment models. The ability to access more information about a credit applicant, and to process data that technology previously could not handle, can improve the construction of the applicant's credit profile and generate more accurate information about the risks of a transaction. On the one hand, this makes it possible to reduce the number of false positives, improving the sustainability of the financial system and avoiding situations in which a customer is granted credit they cannot repay. On the other hand, it enables more people to access financing by reducing the number of false negatives, in which the system denies credit because of an imprecise analysis of the available information or because it fails to take into account variables that new AI systems can analyse. In short, AI can help profile customers more accurately, facilitating responsible borrowing and preventing over-indebtedness.
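The false positives and false negatives mentioned above can be made concrete with a toy evaluation of credit decisions against actual repayment outcomes. Here "positive" means the model approved the loan; the decision and outcome lists are invented for illustration.

```python
# Invented decisions and outcomes for eight applicants.
approved = [True, True, True, False, False, True, False, True]
repaid = [True, True, False, False, True, True, False, True]

# False positive: approved but defaulted. False negative: denied but
# would have repaid (a creditworthy applicant excluded from financing).
false_positives = sum(a and not r for a, r in zip(approved, repaid))
false_negatives = sum(r and not a for a, r in zip(approved, repaid))

# Error rates relative to the relevant actual class.
defaulters = sum(not r for r in repaid)
good_payers = sum(repaid)
fpr = false_positives / defaulters
fnr = false_negatives / good_payers

print(false_positives, false_negatives, fpr, fnr)
```

Better risk models push both rates down at once: fewer defaults granted, and fewer creditworthy applicants turned away.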
Improvements in internal processes and security
Financial institutions are beginning to implement AI in order to improve security, reduce costs, and increase operational efficiency.
Thus, many banks and insurance companies are using AI-based process automation software for tasks such as the digitisation and auditing of documentation, biometric authentication, liquidity risk and prepayment management, and improving operational forecasting and planning processes.
Undoubtedly, fraud control has been one of the first banking activities to benefit from artificial intelligence. In this field, programmes have been developed that identify behavioural patterns and detect abnormal conduct. For example, an external fraud detection system can learn to recognise fraudulent patterns from the behaviour of the bank's customers: when a user deviates from their usual patterns, the system issues an alert. Several scenarios may then arise depending on severity, from a review of the transaction by the bank and a call to the customer, to blocking the card if necessary.
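The "deviation from usual patterns" idea can be sketched with a simple statistical rule: flag a transaction whose amount is far from the customer's historical mean. The history, the z-score rule, and the threshold of 3 standard deviations are illustrative assumptions; real systems model many more features (merchant, location, time, device, and so on).

```python
from statistics import mean, stdev

# Hypothetical history of a customer's recent transaction amounts.
history = [25.0, 40.0, 18.0, 32.0, 27.0, 35.0, 22.0, 30.0]

def is_anomalous(amount: float, history: list[float], threshold: float = 3.0) -> bool:
    """Alert when the amount is more than `threshold` standard deviations
    away from the customer's historical mean spending."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > threshold * sigma

print(is_anomalous(28.0, history))   # a typical purchase, no alert
print(is_anomalous(950.0, history))  # a sharp deviation, raises an alert
```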
Another area in which AI has advanced in recent years is automated property valuation. The commitment to technology and, especially, to machine learning makes it possible to calculate the market price of all properties, taking into account factors such as their characteristics, location, similar properties sold in the same area, etc.
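A stripped-down version of comparable-based valuation estimates a property's price from recent sales of similar properties in the same area. The sales data and the similarity rule (same district, closest floor area) are invented for illustration; production models use far richer features and learned weights.

```python
# Hypothetical recent sales (district, floor area in sqm, sale price).
recent_sales = [
    {"district": "Centro", "sqm": 80, "price": 240_000},
    {"district": "Centro", "sqm": 95, "price": 290_000},
    {"district": "Centro", "sqm": 70, "price": 215_000},
    {"district": "Norte", "sqm": 85, "price": 180_000},
]

def estimate_price(district: str, sqm: float, k: int = 2) -> int:
    """Average the price-per-sqm of the k most similar sales in the same
    district (closest floor area), then scale to the target property."""
    comps = sorted(
        (s for s in recent_sales if s["district"] == district),
        key=lambda s: abs(s["sqm"] - sqm),
    )[:k]
    avg_price_per_sqm = sum(s["price"] / s["sqm"] for s in comps) / len(comps)
    return round(avg_price_per_sqm * sqm)

print(estimate_price("Centro", 90))
```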
Finally, it should be noted that, in addition to banks and insurers, regulatory and supervisory bodies themselves will gradually incorporate AI systems to improve financial supervision and ensure regulatory compliance. Likewise, central banks expect AI to help them produce real-time forecasts, using “big data” technologies to define and make their monetary policies more effective.
3.- Challenges in the use of AI
The implementation of AI creates certain challenges that are currently being analysed by the European Commission[1] and the different Member States, as they affect all companies and institutions that choose to implement these systems. Some relate to the explainability of models, the generation of bias, or the allocation of responsibility for the consequences of using these systems. Others relate to the necessary corporate reorientation and the definition of policies for attracting, developing, and retaining “digital talent” or “AI talent”.
It is also worth mentioning the importance of citizens' digital education: people must be aware both of the decisions that substantially affect them, made on the basis of the processing of their data, and of the value that data holds. Some of the most popular internet services, such as social networks, search engines, and apps, appear to be completely free, when in reality consumers are handing over their data as a form of payment, often without knowing it.
Finally, it is important to highlight the challenges arising from the application of the General Data Protection Regulation (GDPR). The new regulation has established a high level of protection for the privacy of personal data in the EU, which will help build trust in the use of technology. However, companies now face the challenge of applying this new framework and need the support of authorities to understand the interactions between some of the principles established by the GDPR—such as the principle of data minimisation or consent specificity—and AI applications, developing interpretations compatible with the necessary deployment of AI.
From the perspective of the financial system alone, AI also creates new challenges:
Impact on financial stability. One of the main concerns of authorities is the impact that the widespread use of these technologies could have on financial stability, due to greater interdependence between institutions that use this technology and the large technology companies that provide AI solutions and models, but which fall outside the control of regulators and supervisors.
Learning by financial authorities. Financial authorities must deepen their understanding of artificial intelligence in order to approve and supervise risk models based on it. Concerns about the use of algorithms, particularly regarding issues such as their auditability, the ability to explain their results, or their evolution over time, must be resolved through continuous learning and dialogue to overcome the perception that these models are “black boxes” with unknown functioning. The use of controlled testing environments, such as regulatory sandboxes, would facilitate the acquisition of this knowledge, as well as a better understanding of the benefits and risks of these technologies.
Regulatory and supervisory adjustment. Likewise, it is important that regulatory and supervisory authorities work to ensure the adequacy of current governance and risk management requirements in order to mitigate new emerging technological risks, rather than issuing new requirements. The objective should be to avoid imposing excessive burdens on the banking sector, such as limiting the use of cloud computing, which could slow down the adoption of AI models and the leveraging of their benefits compared to other non-banking competitors.
Similarly, there are sectoral regulations and guidelines that may set limits or barriers to the use of AI in the provision of financial services, such as those that establish prescriptive rules on how to carry out creditworthiness analyses and that may end up reducing institutions’ ability to innovate and improve these analyses through the use of other data and AI systems. This inevitably places banks at a competitive disadvantage compared with other unregulated competitors.
Respecting the principles of technological neutrality and ensuring a level playing field for companies carrying out the same activity must be a priority for regulators and supervisors. The focus of authorities on the future development and adoption of AI should therefore be on the outcomes of applying this technology and the impacts on the specific activity, rather than on the entity using it.
Banks’ priority is to protect their customers’ personal data and privacy. However, this must be fully compatible with the possibility for institutions—and, in general, all companies that have customers’ consent—to make the most of the data they have available in order to provide better services and manage their risks more effectively, for the benefit of customers and the market as a whole.
Data is the fundamental element for harnessing the benefits of AI. Access to external data sources that can complement the internal information companies may have about their customers would make it possible to gain better knowledge and provide better service.
In the current context of enormous changes driven by technological innovation, it is necessary to ensure that banks can apply technologies such as AI while safeguarding customer protection and improving the services they provide.
The development and application of models based on data management and AI are becoming fundamental elements for the future to improve financial services, making them more efficient and better adapted to customers’ needs, while also strengthening aspects such as security and trust.
In the face of some of the challenges currently under debate—bias, explanation of results, or responsibility for the use of models—it is important to advance the ethical debate generated by the use of AI, but always from a societal perspective and with the aim of designing a general framework applicable to all sectors. Any regulation or supervisory regime should be based on the principle of same activity–same risk, and not on the type of entity.
Likewise, it is necessary to build internal knowledge and understanding of AI models, to avoid the simplistic view that these technologies replace human beings or constitute completely opaque black boxes that allow no explanation or control whatsoever.
The pace of technological progress makes it necessary for authorities to accelerate as well, in order to understand and integrate the use and application of AI within regulated activities. A deeper understanding of this technology will make it possible to achieve the right balance between its application in the provision of services and the adoption of measures to mitigate the risks that may emerge from its use in financial activity.