By Devon Kinkead, CEO and Founder, Micronotes
Last week, I attended the Bank Policy Institute’s FinTech Ideas Festival in San Francisco. The event brought together CEOs from the financial services and technology communities to discuss future ideas and challenges.
I met as many people and attended as many sessions as I could. One panel discussion was of particular interest to me as CEO of an artificial intelligence company: Automated Decision-Making—How Far Should We Let Machines Go In Making Decisions For Us?
I’d like to give my thoughts on two questions posed by the audience for this session.
The first was “How do you know that the machine is learning?”
One panelist answered: "We look at the F measure." That is, technically, an accurate answer: the F measure (also called the F1 score) combines a model's precision and recall into a single number. But the average banker trying to decide whether to invest in an artificial intelligence solution is unlikely to know what that means.
I travel around the country meeting with financial institutions to discuss how Micronotes’ AI machine learning solution can help them engage with digital users and drive revenue growth. And one thing I learned long ago is that you have to be able to explain technology in a way that’s meaningful—and relatable—to the audience.
My answer to this question is: You measure the accuracy of the model's predictions and track how much it improves over time. Accuracy is the number of correct predictions divided by the total number of predictions made.
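To make that concrete, here is a minimal sketch of the idea. The data and the two "evaluation rounds" are purely illustrative, not from any real model:

```python
def accuracy(predictions, actuals):
    """Accuracy = correct predictions / total predictions."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(predictions)

# Hypothetical outcomes (1 = yes, 0 = no) and two rounds of
# predictions: one from an earlier model, one after retraining.
actuals = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
round_1 = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]  # earlier model
round_2 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # retrained model

print(accuracy(round_1, actuals))  # 0.5
print(accuracy(round_2, actuals))  # 0.9
```

Accuracy rising from one round to the next is the plain-English signal that the machine is, in fact, learning.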
That’s AI brought down to the customer level. And getting people to understand what your company does, and why it can help them achieve their business goals, is a critical step in making a sale.
The second question was “How do you regulate AI in banking?”
This is another essential issue when it comes to helping bankers understand how AI will work in their institution. In fact, we did an entire webinar on this topic last year.
The elephant in the room on this topic is the General Data Protection Regulation (GDPR), which is in effect in the European Union and has global ramifications.
The reality is, technology always runs ahead of regulators. Regulators end up relying on the intent of a regulation to understand a given situation. Regulators need to determine, for example, if a bank’s lending practices are biased in some way. It doesn’t matter whether the institution is using AI machine learning or a pencil and paper to establish its practices. The regulator needs to be able to conduct an audit to determine whether the practices are discriminatory.
How an institution arrived at discriminatory lending practices is irrelevant to the question of whether those practices exist. That's why regulating AI in banking isn't really that different from how regulators have been doing their jobs since the dawn of regulation. The use of AI in banking doesn't change the process of determining whether bias exists.
Again, this is just another way to explain how technology works, in plain English, without delving into discussions of F measures, algorithms and the like, which don't matter to most people anyway.