Banks are Eager to Tap into AI, but Must First Address the Perils


When it comes to artificial intelligence (AI) and banking, there’s a great promise – more business, less risk. But the intersection of AI and financial services can be fraught with peril, too.

There’s the potential – and in some cases reality – for algorithms to reinforce social biases and disenfranchise minorities. This is especially relevant to lenders who use AI models to predict default risk.

It’s a topic we’re likely to keep hearing about as it gains traction among consumers and policymakers. Consider a recent proposal by Democratic presidential candidate Sen. Cory Booker that would require big companies to test their high-risk AI systems for algorithmic accountability, including technology that makes important decisions based on sensitive personal information. And the development of “safe and trustworthy” algorithms is a major objective of the White House’s new AI initiative.

Meanwhile, banks are interested in leveraging the power of AI. Interested, but not ready. AI is being discussed at the board or executive team level at half of all financial institutions, according to The Financial Brand. But only one in five bankers feels their institution has the necessary data analytics skills.

What banks and all institutions that impact lives and livelihoods must understand is that AI models are only as good as the data they are built on. And if that data reflects racial, ethnic or gender biases, so will the decisions or actions these models recommend.

I had the opportunity to talk about AI and banking at American Banker’s Retail Banking 2019 Conference in March in Austin. Joining me was Nelly Rojas-Moreno, chief credit officer for LiftFund, one of the nation’s largest small business lenders. Here are the four steps we conveyed that banks must take to keep bias out of their predictive models.

1. Your data set must be diverse.

AI models are trained on historical data. A model is less likely to produce biased predictions if there is significant diversity of lending cases in your training database.

A lender that typically serves high-income, established businesses and relatively few low-to-moderate-income entrepreneurs may wind up with a model that excludes those entrepreneurs. If a significant number of applicants in that group are from a protected class, this can lead to bias.

By the nature of its work, LiftFund, a Community Development Financial Institution (CDFI), has a diverse borrower base. But other lenders may need to sample cases carefully so that the training set behind their AI models reflects a diverse group of borrowers and doesn’t bake bias into the model.
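As a rough illustration of what that kind of sampling check might look like, the sketch below audits how many historical cases fall into each applicant segment and rebalances the training set. The file and column names (historical_loans.csv, applicant_segment) are hypothetical; this is not a description of LiftFund’s pipeline.

```python
import pandas as pd

# Hypothetical historical lending data; file and column names are illustrative.
loans = pd.read_csv("historical_loans.csv")  # columns include applicant_segment, repaid, ...

# Audit representation: how many training cases fall into each segment?
segment_counts = loans["applicant_segment"].value_counts()
print(segment_counts)

# If one group is badly under-represented, resample so the model sees a
# comparable number of examples from every segment during training.
min_count = segment_counts.min()
balanced = loans.groupby("applicant_segment", group_keys=False).sample(
    n=min_count, random_state=42
)
print(balanced["applicant_segment"].value_counts())
```

Down-sampling to the smallest group is only one option; weighting cases or collecting more applications from under-represented segments can achieve the same goal without throwing data away.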

2. You must use multiple features for each case in your model and make sure they’re the right ones.

LiftFund’s borrowers have an average FICO score below 600. Yet the lender’s repayment rate was a stunning 95% in 2017 and rose to 97% in 2018, after it implemented its AI model. A model that depended only on credit scores would have denied most of these borrowers.

Adding a second variable, such as a third-party bankruptcy index, wouldn’t solve the problem. Credit score and bankruptcy risk provide important information about potential borrowers, but used alone they can create a proxy variable for a protected class. In the case of LiftFund’s model, we use nearly 80 additional variables measuring cash flow, credit, collateral and character to create a holistic decision model. And we review each decision to make sure no single variable is the key decision-driver.
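That review can be approximated with standard tooling. The sketch below – an illustration on synthetic stand-in data, not LiftFund’s actual process – trains a simple classifier and uses permutation importance to flag a model in which one variable dominates the decisions.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data: in practice X would hold the lender's cash-flow, credit,
# collateral and character variables, and y the repayment outcome.
features, target = make_classification(n_samples=2000, n_features=20, random_state=0)
X = pd.DataFrame(features, columns=[f"feature_{i}" for i in range(20)])
y = pd.Series(target, name="repaid")

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: how much does shuffling one variable hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
importances = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(importances.head(10))

# Flag the model for human review if a single variable drives most decisions.
if importances.iloc[0] > 0.5 * importances.sum():
    print("Warning: one variable dominates the model's predictions.")
```

The 50% threshold here is arbitrary; the point is that the check is automated and can run on every retrained model.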

3. You must be able to explain your model.

It’s not enough to build a data set with diverse cases and dozens of metrics. You must think outside the black box. Know that if you can’t explain how each decision was made, you can’t be sure that the model is truly unbiased.

In a heavily regulated industry like lending, transparency in explaining denials is crucial. The more metrics you use, the more complicated your model becomes. But advances in AI and machine learning methods have improved the transparency of decision models to the point where they can explain the specific metrics or combination of metrics that led to denial or approval.
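As one hedged example of what such an explanation can look like, the sketch below uses a scorecard-style logistic regression on stand-in data, where each applicant’s score decomposes into a contribution per metric. It illustrates the idea rather than the specific model discussed above.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data again; real column names would be the lender's own metrics.
features, target = make_classification(n_samples=2000, n_features=10, random_state=0)
X = pd.DataFrame(features, columns=[f"metric_{i}" for i in range(10)])
y = pd.Series(target, name="repaid")

pipeline = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

def explain_decision(applicant_row: pd.Series) -> pd.Series:
    """Break one approval/denial score into per-metric contributions."""
    scaler = pipeline.named_steps["standardscaler"]
    clf = pipeline.named_steps["logisticregression"]
    scaled = scaler.transform(applicant_row.to_frame().T)[0]
    contributions = pd.Series(scaled * clf.coef_[0], index=applicant_row.index)
    return contributions.sort_values(key=np.abs, ascending=False)

# The metrics that pushed this applicant's score up or down the most:
print(explain_decision(X.iloc[0]).head(5))
```

More complex models can be explained with dedicated tools, but the principle is the same: every approval or denial should come with a readable list of the metrics that drove it.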

4. You must review and refresh your model every few months.

While AI has increased efficiency in many areas of lending, it is still no substitute for human judgment. AI in lending decisions is most effective when it supports underwriters rather than replaces them. LiftFund’s underwriting and loan officer team proves that. Remember, this team achieved 95% repayment from borrowers with an average FICO score below 600.

After creating the model, we made sure this team of experts reviewed the technology to ensure it reflected not only the data but also their judgment. We also planned to review the model every few months. Changes in the broader economy can affect the impact of some attributes. Keeping on top of those changes is important to ensure a sustainable, unbiased model.
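One lightweight way to spot those economic shifts between reviews is the population stability index (PSI), which compares the distribution of an attribute at training time with its distribution today. The sketch below uses made-up numbers and is a generic monitoring example, not a description of LiftFund’s review process.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare an attribute's distribution at training time vs. today.

    Values above roughly 0.25 are commonly read as a significant shift
    that should trigger a model review and possible retraining.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: has applicants' monthly cash flow drifted since the model was built?
rng = np.random.default_rng(0)
training_cash_flow = rng.normal(5000, 1500, size=5000)  # at model build time
recent_cash_flow = rng.normal(4200, 1800, size=1200)    # the last few months
print(population_stability_index(training_cash_flow, recent_cash_flow))
```

Running a check like this on each input attribute every review cycle gives an early warning that the model – and the judgment baked into it – needs a refresh.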

If we learned anything from the experience, it’s that authentic human interaction remains key in this AI age. At the Retail Banking Conference in March, almost every panel and side discussion noted that banking customers – consumer or commercial, small or large – still want to deal with a person at all stages of the banking process.

Bottom line: Banks and their lending decisions affect people’s lives. With the help of AI, they can increase business and reduce risk. Even better, they can do it while making fair, unbiased decisions that help more people achieve their financial hopes and dreams.

Keith Catanzano is the co-founder and partner at DC-based 2River Consulting Group. Keith created LIFT, a SaaS-based AI and data analytics platform that helps financial companies – lenders in particular – use all of their data to simplify decision making, reduce risk and increase sales and profits. He has previously written for Credit Union Journal.
