Regulating AI in Banking — Part 2

In part 1 of this post, I presented definitions for artificial intelligence and machine learning, and looked at the issues surrounding how to regulate financial institutions' use of them. Part 2 delves deeper into the issue of how to regulate AI in banking and why regulation is necessary.

By Devon Kinkead, Founder and CEO, Micronotes

One of the issues that arises with fast-moving technologies such as artificial intelligence and machine learning in a highly regulated industry like banking is legal liability should something go wrong. Since regulation may not be in place beforehand, who is responsible if a perceived violation occurs? Do you hold the AI responsible, or the programmers who built it? These are the types of issues that former banking regulator Mark Casady and I will discuss during our webinar, Regulating AI in Banking, on June 13 (register now).

Could, for example, AI or machine learning activity cause a disruption similar to what occurred in April 2013, when the Associated Press Twitter account was hacked and a false report that explosions at the White House had injured President Obama caused a sharp but short-lived drop of roughly 140 points in the Dow? If that were to happen because of AI or ML, where would the responsibility lie?

The financial meltdown of late 2008 led to a wave of new regulations being imposed quickly on the banking industry. No one could reasonably have predicted the need for those specific rules until the events that prompted them occurred.

It’s important not to presuppose that AI is going to cause problems. The first step is to get visibility into what the AI is actually doing in any given use case. At Micronotes, we view the regulatory compliance requirements of our clients as an important part of our business as we build out our AI-driven marketing automation system. That’s why we’re going to hold our webinar to get the discussion going.

Chatbots, among the most widely used AI-driven technologies today, are moving toward being able to give advice, a capability that will certainly draw regulatory scrutiny as it evolves. ML also plays a growing role in cybersecurity, helping to detect vulnerabilities and attacks.

Today, effective regulations are in place for conversations conducted using AI. But as we automate those conversations further, the way we enable our clients to submit information for compliance review must keep pace with the technology. Currently, people create the campaigns that our clients employ to engage their digital banking users. But we are moving toward the point where machines will take on the role of developing user-engagement campaigns, and that will change the regulatory landscape.

Part 3 will look at the role played by providers of AI and ML solutions in helping clients comply with regulatory requirements.