Regulating AI in Banking — Part 1

By Devon Kinkead, Founder and CEO, Micronotes

“Whatever can go wrong will go wrong faster when computers are involved.” — Kirilenko and Lo (1)

This quote, crafted by two leaders at MIT’s Sloan School of Management, sums up the risk inherent in the growing use of artificial intelligence. This is especially true as AI is increasingly used in digital banking interactions, such as chatbots that can engage with users at any time.

So, the challenge for financial institutions is how to take advantage of the many benefits of AI to support users and grow revenue without running afoul of the many regulations that govern interactions with customers. And part of that challenge is the fact that the smarter technology gets, the more independent it becomes.

Regulating AI in Banking is the topic of the next Micronotes webinar on June 13 (register here). Mark Casady, a former member of the Financial Industry Regulatory Authority (FINRA) board of governors, and I will discuss the potential compliance challenges the banking industry faces when it deploys AI and machine learning solutions for user engagement.

Before I dive into some of the key aspects of the topic, let’s define what we mean by artificial intelligence (AI) and machine learning (ML).

AI is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. (2)

ML is a category of algorithm that allows software applications to become more accurate in predicting outcomes without being explicitly programmed. The basic premise of machine learning is to build algorithms that can receive input data and use statistical analysis to predict an output while updating outputs as new data becomes available. (3)
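The premise in that definition — receive input data, use statistical analysis to predict an output, and update as new data becomes available — can be made concrete with a small sketch. The following is a minimal, illustrative Python example of an online model that refines its predictions with each new observation; it is not taken from any banking system or vendor product, and the class and parameter names are hypothetical.

```python
class OnlineLinearModel:
    """Predicts y = w * x + b, refining w and b with each new example."""

    def __init__(self, learning_rate=0.01):
        self.w = 0.0
        self.b = 0.0
        self.lr = learning_rate

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        # Gradient step on squared error: nudge the parameters toward
        # reducing the gap between prediction and observed outcome.
        error = self.predict(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error


model = OnlineLinearModel()
# Stream in examples drawn from the true relationship y = 2x + 1;
# the model was never explicitly programmed with that rule.
for _ in range(2000):
    for x in [0.0, 1.0, 2.0, 3.0]:
        model.update(x, 2 * x + 1)

print(round(model.predict(4.0), 2))  # converges toward 9.0
```

The point of the sketch is the update loop: behavior emerges from data rather than from hand-written rules, which is precisely what makes such systems both powerful and harder to audit under existing compliance regimes.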

To prepare for this webinar, I read a number of articles that address one or more aspects of regulating AI in banking. In a series of posts leading up to the webinar, I’ll highlight the most compelling points—and we’ll touch on many of them during the session.

First, let’s establish that the goal of regulation is to change behavior by imposing binding limits on what is legally acceptable. The framework for regulating the marketing activities of financial institutions is already in place. The Consumer Financial Protection Bureau has levied billions of dollars in fines against a wide range of organizations for violations ranging from mortgage lending abuses to deceptive marketing practices for overdraft services. So as AI and ML become more autonomous, the tasks machines perform will be expected to adhere to the same compliance standards.

In banking, as in many other industries, regulation typically lags behind innovation. When a new way of doing something debuts, it more often than not falls outside the regulatory perimeter, because it didn’t exist when the regulations were written. AI has been used in banking for a variety of applications, and new uses are being developed all the time.

Part two of this post will delve deeper into the issue of how to regulate AI in banking and why it is necessary.

(1) http://alo.mit.edu/wp-content/uploads/2015/06/Moores_Law_vs_Murphys_Law_Spring_2013_JEP.pdf

(2) https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence

(3) https://searchenterpriseai.techtarget.com/definition/machine-learning-ML