Regulating AI in Banking — Part 4

In part 1 of this post, I presented definitions for artificial intelligence and machine learning, and looked at the issues surrounding how to regulate financial institutions' use of them. Part 2 delved deeper into the issue of how to regulate AI in banking and why regulation is necessary. Part 3 looked at the role played by providers of AI and ML solutions in helping clients comply with regulatory requirements. The final installment discusses the responsibilities of solutions providers to ensure that AI and machine learning bring true value to financial institutions—without increasing their regulatory reporting requirements.

By Devon Kinkead, Founder and CEO, Micronotes

One of the major benefits of machine learning is its ability to make complex systems understandable. For example, an ML system can examine hundreds of variables and identify which of them provide the most information gain. ML won't necessarily tell you the direction in which each variable is likely to move, but it does help users understand complex systems.

Because of that, there may be some tools regulators can use to try to understand the financial system by looking at, for example, the information gain on a particular product or service. Which variables contribute most to determining that someone will default, or that someone is a likely candidate for attrition? ML is very good at surfacing information like that, and that type of information has tremendous value. That's why I didn't think it was an exaggeration when I read that "data is the new oil of the modern economy." (1)
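To make the idea concrete, here is a minimal, pure-Python sketch of information gain: how much knowing one feature reduces uncertainty (entropy) about a label. The loan fields and toy data are illustrative only, not a real scoring model:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(features, labels):
    """Reduction in label entropy from knowing the feature's value."""
    n = len(labels)
    by_value = {}
    for f, y in zip(features, labels):
        by_value.setdefault(f, []).append(y)
    conditional = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - conditional

# Toy loan data: does employment status tell us more about default
# than the applicant's region does?
defaults   = [1, 1, 1, 1, 0, 0, 0, 0]
employment = ["u", "u", "u", "u", "e", "e", "e", "e"]  # perfectly predictive
region     = ["n", "s", "n", "s", "n", "s", "n", "s"]  # carries no signal

print(information_gain(employment, defaults))  # 1.0 bit (all the entropy)
print(information_gain(region, defaults))      # 0.0 bits
```

Ranking hundreds of candidate variables by a score like this is exactly the kind of surfacing described above: it tells you where the signal is, not why it is there.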

I'll tackle this and related topics on our June 13 webinar, Regulating AI in Banking, with former regulator Mark Casady (register now).

Although AI and ML can bring the same advantages to small financial institutions as they do to the largest competitors, it’s also true that smaller organizations with limited resources may not be able to deploy AI and ML to the same extent as the bigger players. A small bank or credit union that isn’t able to take full advantage of the benefits of AI is likely to have its existence marginalized, at best, because it just won’t be able to deliver the value that customers expect. That’s why data pooling is becoming popular with smaller institutions. Credit unions are already doing it with firms such as OnApproach.

Financial institutions are already using ML to determine which factors influence sales (i.e., information gain). The most popular usage categories are customer focused (which is what we’ll examine in the webinar), operations, trading and portfolio management, regulatory compliance, and supervision.

While ML cannot determine causality, it can show you which parameters are driving a particular set of predictions. That’s why it’s important to remember that augmented intelligence is a tool. It’s an extension of human intelligence. As long as it continues to be that, it should dispel any notions that AI is going to take over the world by behaving like HAL 9000 in “2001: A Space Odyssey.”
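One common way to see which parameters drive predictions, without any causal claim, is permutation importance: shuffle one input column and measure how much accuracy drops. The sketch below uses a hypothetical attrition rule and made-up field names purely for illustration:

```python
import random

# Hypothetical attrition "model": depends only on balance, never on age.
def model(balance, age):
    return 1 if balance < 1000 else 0  # predict attrition on low balance

random.seed(0)
data = [(random.randint(0, 5000), random.randint(18, 80)) for _ in range(200)]
labels = [1 if b < 1000 else 0 for b, _ in data]

def accuracy(rows):
    return sum(model(b, a) == y for (b, a), y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)  # 1.0 by construction on this toy data

def permutation_importance(col):
    """Drop in accuracy when one input column is shuffled."""
    shuffled = [row[col] for row in data]
    random.shuffle(shuffled)
    permuted = [
        (s, a) if col == 0 else (b, s)
        for (b, a), s in zip(data, shuffled)
    ]
    return baseline - accuracy(permuted)

print(permutation_importance(0))  # substantial drop: balance drives predictions
print(permutation_importance(1))  # 0.0: the model never reads age
```

Note what this does and does not establish: shuffling balance destroys accuracy, so balance drives the predictions, but nothing here says low balances *cause* attrition.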

As firms develop AI-ML systems, they must take regulatory concerns into consideration, such as “Is the objective of the system compliant with pertinent industry regulations?” Failing to do so could lead to unpleasant surprises down the road.

There are efforts underway to make the black box of AI more transparent, but I think it’s a fool’s errand. The idea that someone can explain how a given algorithm works with any degree of accuracy or meaning is not reasonable. Bankers should stay focused on reports covering their regulatory obligations and on understanding whether the AI and ML systems they’re using are generating the desired business results. In the end, AI and ML are just tools, and users don’t have to know the intricacies of every tool they use.

That said, in order to achieve broad usage of AI throughout the banking industry, AI developers must create hands-off systems that are invisible to the end user. That doesn’t mean you don’t have regulatory compliance reports that need to be run on the system. But failure to create user-friendly AI-driven systems will concentrate the advantages of AI in the large firms that can hire the skilled staff to use them. If we don’t make the systems easy to use, then smaller firms will have to hire data scientists to run the systems, which they cannot afford to do. And that will marginalize smaller banks and credit unions.

Combining AI and ML with human judgment is the best answer to the risks of these revolutionary technologies.