Regulating AI in Banking — Part 3

In part 1 of this post, I presented definitions for artificial intelligence and machine learning and looked at the issues surrounding how to regulate financial institutions' use of them. Part 2 delved deeper into how to regulate AI in banking and why regulation is necessary. Part 3 takes a look at the role providers of AI and ML solutions play in helping clients comply with regulatory requirements.

By Devon Kinkead, Founder and CEO, Micronotes

As machines take on a larger role in creating and executing conversations with digital banking users, our clients need to know what report they can run to demonstrate their compliance with regulations. That’s something we’re going to build, so our clients can manage future regulatory responsibilities as easily as they do today’s. It’s a key component of AI ethics, along with pillars such as ensuring that machine-learning systems are free of exclusionary bias and that tasks performed by machines remain within the reach of regulators. I'll discuss this topic and others in our June 13 webinar, Regulating AI in Banking, with former regulator Mark Casady (register now).
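To make the idea concrete, here’s a minimal sketch of what such a report might draw on: an audit trail of machine-generated conversations, summarized into figures an examiner can review. The schema and field names here are illustrative assumptions, not a description of our product.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConversationRecord:
    """One machine-generated conversation with a digital banking user."""
    user_id: str
    timestamp: datetime
    offer_type: str          # e.g., "auto_loan" or "overdraft_protection"
    model_version: str       # which model produced the conversation
    disclosure_shown: bool   # whether required disclosures were presented

def compliance_report(records: list[ConversationRecord]) -> dict:
    """Summarize the audit trail into figures an examiner can review."""
    return {
        "conversations_total": len(records),
        "missing_disclosures": sum(1 for r in records if not r.disclosure_shown),
        "model_versions": sorted({r.model_version for r in records}),
    }
```

A report like this wouldn’t replace an examiner’s judgment, but it gives a client something concrete to run when a regulator asks how machine-generated conversations are being governed.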

For example, here’s another aspect of AI-driven banking that we’re already examining, not only from a compliance perspective but also from the perspective of good business practices: when ML systems begin to conduct lending campaigns, there have to be safeguards designed into them that take the bank’s balance sheet into account, so that the volume of loans offered doesn’t veer into risky territory, and that stop those campaigns when the bank reaches its capacity to lend.
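As a rough sketch of the kind of guardrail I have in mind (the function name, numbers, and safety buffer are illustrative assumptions, not an actual implementation):

```python
def campaign_can_offer(loan_amount: float,
                       outstanding_loans: float,
                       lending_capacity: float,
                       buffer_ratio: float = 0.05) -> bool:
    """Balance-sheet guardrail for an ML-driven lending campaign.

    Refuses new offers once projected exposure approaches the bank's
    capacity to lend, leaving a safety buffer below the ceiling.
    """
    projected_exposure = outstanding_loans + loan_amount
    ceiling = lending_capacity * (1.0 - buffer_ratio)
    return projected_exposure <= ceiling

# Example: pause the campaign when the next offer would breach the ceiling.
if not campaign_can_offer(loan_amount=25_000,
                          outstanding_loans=949_990_000,
                          lending_capacity=1_000_000_000):
    print("Campaign paused: lending capacity nearly reached")
```

In production, a check like this would pull real-time balance-sheet data and alert risk officers rather than simply printing a message, but the principle is the same: the machine enforces the bank’s lending limits automatically.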

When you’re in the business of providing AI systems to clients, it’s not reasonable to expect that the people who deploy your solution will understand the technology inside it. But our bank and credit union clients do know how to measure its performance. So, as financial institutions bring in more AI systems, their business continuity plans will have to reflect that. They’ll have to look harder at the AI companies delivering those solutions, to ensure the vendors are viable and will be there to support their clients.

Another aspect of the growing reliance on AI and machine learning will be the rise of “machines regulating machines.” This already happens, and it will only increase: regulators themselves are using AI and ML to detect fraud and money laundering and to inform policy direction and decisions.

AI and ML are helping the financial sector improve efficiency, reduce costs and widen choices, but those same capabilities heighten concerns about data privacy. This is especially significant in light of the GDPR, which has taken effect in Europe and could inspire similar rules in other regions as well. AI, like any tool, can be used to enhance or undermine the goals of existing regulation. That’s why it needs to remain an extension of human intelligence, rather than be allowed to become something totally autonomous.

The framework for regulating the marketing activities of financial institutions is already in place. The Consumer Financial Protection Bureau has levied billions of dollars in fines for violations ranging from mortgage-lending abuses to deceptive marketing of overdraft services. So as AI and ML become more autonomous, the tasks machines perform will be expected to adhere to the same compliance standards.

One of the more interesting, and more challenging, aspects of regulating AI and ML is that regulation is not iterative and experimental, and it’s not easy to rewrite. But AI is moving fast. So both regulators and those they regulate are going to have to figure out how to deal effectively and fairly with the pace of change.

In the conclusion of this series, I'll discuss the responsibilities of solutions providers to ensure that AI and machine learning bring true value to financial institutions—without increasing their regulatory reporting requirements.