Fair lending in the digital age

 

AI can deliver benefits if financial companies properly manage the risks 

 

From Cash App to interactive teller machines, there is no denying that financial services have been irreversibly reshaped by the digital age. One area of particular interest lately is the use of artificial intelligence (AI) in credit underwriting, in part because of its inherent friction with fair lending laws and regulations. As bank regulators have explicitly signaled a renewed focus on addressing discriminatory practices in underwriting, lending and other aspects of financial services, it is important to embed fairness in whatever machines or methods we use to grant or deny credit.

 

Fair lending rules have substantial implications for the use of machine learning (ML) and alternative data in credit underwriting. AI presents fair lending risks, but lenders can mitigate those risks in the current regulatory environment. The clear goal of AI in underwriting is to provide better insights into the creditworthiness of consumers and ultimately make more accurate, efficient credit decisions. The ultimate question remains: have lenders using AI or ML to make credit decisions achieved this goal without creating an inappropriate level of fair lending risk?

 

AI, machine learning and alternative data

 

AI defined

 

AI, at its broadest, refers to machines performing tasks or making decisions that would otherwise require human intelligence. It’s an incredibly broad swath of technology that can influence everything from the advertisements you see on Facebook, to the checks you deposit with your banking app, to the product recommendations you see on Amazon. AI carries a number of benefits and challenges, but in many areas of society it is being used to accomplish tasks more efficiently and effectively.

 

ML defined

 

ML, a branch of AI, consists of algorithms that process massive amounts of raw data to uncover underlying trends or patterns. Those trends or patterns then inform progressively more refined models. Importantly, ML can learn and improve from experience without being explicitly programmed to do so in advance, putting it more in line with how humans think than with how computers traditionally compute. When combined with computers’ advanced processing power, ML’s “learning through experience” capability enables astounding feats to be performed in record time. ChatGPT is a prominent example of ML: the chatbot was trained, through “deep learning,” on huge amounts of text derived from the internet, and as a result it can interact with consumers and provide specific, relevant responses rather than reciting canned ones.
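
To make the distinction concrete, here is a minimal sketch (in Python, with purely hypothetical figures) of a model inferring an approve/decline pattern from labeled examples rather than from rules a programmer wrote by hand:

```python
# A minimal sketch of "learning through experience": the model infers a
# lending pattern from labeled examples instead of hand-coded rules.
# All features, figures and labels are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [monthly_income, monthly_debt_payments]; label: 1 = repaid, 0 = defaulted
X = [[5000, 500], [4200, 2100], [6500, 800], [3000, 1800],
     [7000, 900], [2500, 1600], [5800, 700], [3400, 2000]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The model has inferred a decision rule (roughly, low debt relative to
# income predicts repayment) that was never explicitly programmed.
print(model.predict([[4800, 600], [3100, 1900]]))  # -> [1 0]
```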

 

Alternative data defined

 

Alternative data, also referred to as nontraditional data, derives its name from the use of unconventional sources of information to support some type of decisioning or process. According to the Consumer Financial Protection Bureau (CFPB), alternative data can be mined from a variety of sources, including cell phone bills, rent history, utility payments, insurance claims and certain electronic transactions such as deposits and withdrawals. Alternative data is often seen as a way of making decisions (e.g., credit underwriting) without going through traditional informational channels (e.g., one of the three traditional credit bureaus). Either AI or more traditional methods (including non-tech methods, such as “dumpster diving”) can be used to mine alternative data; however, AI and computational processing have given alternative data a massive boost in recent years, especially given the sheer amount of alternative data floating through cyberspace.

 

One large subset of alternative data within the financial industry is cash flow data, which, according to the Federal Reserve, consists of a range of metrics related to income and expenses (e.g., fixed expenses such as housing, the amount of variable expenses) and to how a borrower has managed accounts over time (e.g., residual balances). Cash flow data helps assess whether a borrower can meet new or existing obligations by establishing a timeline of past income and expense activity. In this way, cash flow data essentially acts as a collection of proxy variables for what lenders are ultimately seeking to uncover: creditworthiness, or more specifically, a borrower’s ability and willingness to repay a debt obligation.

 

For example, a consumer with a poor or thin loan history on a credit report may nevertheless have an outstanding insurance payment history that cash flow data uncovers. Several such outstanding payment histories, considered together, may demonstrate to a lender that the consumer is sufficiently likely to repay a new credit obligation, whereas a credit report alone would not.
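
To illustrate, the sketch below (pure Python, with hypothetical figures) turns a short monthly transaction summary into the kinds of cash flow metrics the Federal Reserve describes, such as income stability, expense load and residual balance trends:

```python
# A minimal sketch of cash flow feature engineering on hypothetical data.
from statistics import mean

# One applicant's monthly summary: (income, expenses, end-of-month balance)
months = [
    (3800, 3100, 700), (3800, 3350, 1150), (3900, 3200, 1850),
    (3800, 3500, 2150), (3800, 3050, 2900), (3900, 3300, 3500),
]

incomes = [m[0] for m in months]
expenses = [m[1] for m in months]
balances = [m[2] for m in months]

features = {
    # Stable income: small spread relative to the average
    "income_stability": 1 - (max(incomes) - min(incomes)) / mean(incomes),
    # Average share of income consumed by expenses each month
    "expense_ratio": mean(e / i for e, i in zip(expenses, incomes)),
    # Residual balances: is the applicant accumulating a cushion?
    "avg_residual_balance": mean(balances),
    "balance_growth": balances[-1] - balances[0],
}
print(features)
```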

 

Another significant type of alternative data is what some call “big data” or, more appropriately for financial purposes, “fringe data.” This vast collection of information, often drawn from social media platforms, can include data points that depart even further from traditional credit scoring. Drilling down further into consumers’ personal lives, fringe data can encompass a consumer’s online browsing and shopping habits, occupation, education, current location at any point in time (geolocation) and so on, according to the Federal Reserve’s Consumer Compliance Outlook. Its predictive value, however, often hinges on weak correlations that can track protected-class characteristics, making it particularly susceptible to fair lending vulnerabilities.

 

A brief summary of fair lending

 

“Fair lending” is a concept of law underpinned primarily by two federal statutes, the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA). The central premise of fair lending is that no creditor shall discriminate against a borrower in a credit transaction on a “prohibited basis,” meaning membership in one of the widely recognized protected classes within American society: race, color, religion, national origin, sex (including gender identity), age, marital status, handicap/disability, military status or the assertion of rights under consumer credit laws. Largely born out of civil rights legislation in the 1960s and 1970s, fair lending seeks to put traditionally advantaged and disadvantaged groups on an equal playing field, free of undue influence or bias.

Fair lending violations are often articulated in three varieties:

  • Overt discrimination aligns with the traditional concept of discrimination in lending — put plainly, it’s when lenders openly discriminate against borrowers based on their protected class.
  • Disparate treatment occurs when individuals are treated differently based on a prohibited basis. Individuals affected by disparate treatment may ultimately receive an extension of credit (removing any question of overt discrimination), but they are often made to submit to more onerous underwriting, offered less favorable credit terms and/or forced to meet more strenuous servicing requirements than borrowers in a more traditionally favored class. Disparate treatment can be shown through “comparative evidence,” as when a lender reveals bias through acts of omission or helps certain classes of borderline applicants toward credit but not others.
  • Disparate impact is less obvious on its face, and thus is more likely to pervade the AI landscape. It arises when a lender applies a neutral policy to all credit applicants, but that policy excludes or burdens certain protected classes. A common example of disparate impact is the supposedly neutral requirement of homeownership to receive credit, a requirement that often negatively affects disadvantaged classes of people. If a regulator finds disparate impact, the Federal Reserve has indicated that the only way for a lender to avoid a discrimination violation is to show both: (a) the policy or practice is justified by a business necessity and (b) there is no alternative policy or practice that could serve the same purpose with less discriminatory effect. (A simple numeric screen for disparate impact appears in the sketch just after this list.)
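
As a concrete illustration of the numeric screen mentioned above, compliance teams often begin with a simple comparison of approval rates across groups. In the sketch below, the counts are hypothetical and the 80% (“four-fifths”) threshold is borrowed from EEOC employment guidance as a rough first-pass heuristic, not a dispositive fair lending standard:

```python
# A simplified disparate impact screen using hypothetical approval counts.
# The 0.8 threshold is the EEOC "four-fifths" heuristic, used here only as
# a first-pass flag; regulators weigh far more than this single ratio.

def approval_rate(approved: int, applied: int) -> float:
    return approved / applied

control = approval_rate(approved=480, applied=600)    # control group: 0.80
protected = approval_rate(approved=330, applied=550)  # protected-class group: 0.60

adverse_impact_ratio = protected / control
print(f"Adverse impact ratio = {adverse_impact_ratio:.2f}")  # 0.75

if adverse_impact_ratio < 0.8:
    print("Flag for fair lending review: the neutral policy may "
          "disproportionately exclude the protected-class group.")
```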

 

The benefits and risks of AI in lending

 

In its ideal form, AI (which we’ll use to include ML and alternative data) has the potential to help both lenders and borrowers in substantial ways. Lenders stand to benefit from more predictive and accurate modeling, which could lead to better credit decisions (and, by extension, healthier loan assets). AI can also make processes faster and cheaper, enabling lenders to handle more business and even pass cost savings on to happy, loyal customers. Borrowers get a leg up in the form of reduced human bias, new opportunities for individuals who were “credit invisible” under the old lending paradigm, and loan products more specially tailored to their needs.

For all the efficiencies that automated underwriting, decisioning and servicing present, lenders and borrowers must also grapple with the other side of the coin. AI innovations can carry substantial risk because lenders may neither fully understand nor be able to manage the AI algorithms used by their products or vendors, leading to disparate treatment or disparate impact outcomes for borrowers. To begin, the “black box” nature of lending algorithms makes it difficult to understand their inner workings and thus to determine whether they may be yielding discriminatory results. Regulators are keenly focused on ensuring that the use of AI in financial services does not “reinforce or exacerbate old biases and discriminatory practices,” in the words of Acting Comptroller of the Currency Michael Hsu.

 

Additionally, machine learning algorithms can yield combinations of borrower characteristics that effectively predict race or gender, factors that fair lending laws clearly prohibit; a memo from the House Financial Services Committee terms these “unprotected inferences,” and they could be used as an end-run around the law to further discrimination. As a further concern, human operators could fall asleep at the switch and drift too far from control over their automated systems, giving rise to systems those operators could not rein in even if they wanted to.

 

A collateral point of worry is the lack of accountability that comes with software and hardware processes, which are prone to bugs and malfunctions that take time and resources to correct. The resources needed to correct such bugs or errors are often more substantial than they would be if an accountable human were responsible for the error.

 

One risk of particular concern to the CFPB and the bank regulators is the use of ML models built on input factors and training methods that do not reflect real-world conditions, leading to perverse (and possibly discriminatory) results.

 

In a 2018 speech, former Federal Reserve Governor Lael Brainard provided an example of unintentionally building bias into an AI model. A large software company developed a hiring tool based on algorithmic modeling and, in doing so, “trained” the program only on data about past hires that had been successful. On its face, this may seem perfectly reasonable, but one detail went unaccounted for: past successful hires had been overwhelmingly male.

 

As a result, the program “learned” from this pattern and began excluding resumes from certain all-women’s colleges. If a machine learning model is used to screen job applicants, and the data used to train the model reflects past decisions that were biased, the result can perpetuate that bias. For example, looking for candidates who resemble past hires may skew a system toward hiring more people like those already on a team, rather than considering the best candidates across the full diversity of potential applicants.

 

Another risk is the third-party vendor relationship itself. Many institutions simply do not have the in-house expertise to originate and market AI-based loan products, so they initiate relationships with third parties that have extensive histories in the AI and alternative data space. With this expertise, however, comes an added layer of opacity. Third-party vendors sometimes press their advantage by instituting regulatorily questionable practices, such as subprime referral programs, under which applicants denied for a bank loan product are referred instead to a subprime loan product offered by the vendor.

 

Regulatory regimes for AI and fair lending

 

Despite the many promises of AI, federal bank regulators face two key questions: How do we hold lenders accountable to the law when opaque software is pulling the strings? And how do we do so without stifling innovation? The difficulty of answering these questions has led to what could at best be described as a fog of uncertainty.

 

Regulatory regime

 

There is no question that AI innovations in financial services are advancing much faster than the laws and regulations that govern them. Regulators have not shied away from admitting that they are essentially learning along with, and from, the industry. They have generally expressed openness to facilitating innovation if it can proceed consistently with the fair lending framework.

 

The CFPB has gone the furthest of the federal regulators in addressing this type of uncertainty. In September 2019, the Bureau’s Office of Innovation launched three new policies to facilitate innovation and reduce regulatory uncertainty:

  • A revised Policy to Encourage Trial Disclosure Programs (TDP Policy)
  • A revised Policy on No-Action Letters (NAL Policy)
  • The Policy on the Compliance Assistance Sandbox (CAS Policy)

The TDP Policy and the CAS Policy each provide a legal safe harbor that can reduce regulatory uncertainty in the area of AI. The NAL Policy is slightly different: a no-action letter is essentially a statement that the CFPB does not intend to pursue enforcement action against a particular entity for a particular practice, usually subject to a series of conditions. These programs aim to foster innovation while recognizing that there will be errors, which may require compensating consumers harmed through the experimental process.

 

It’s important to remember, though, that these are by no means all-encompassing solutions to juggling innovation and risk in the lending industry. The CFPB is but one regulator (albeit a strong one), and there are many others to consider: the Federal Deposit Insurance Corporation, the Office of the Comptroller of the Currency, the Federal Reserve Board and state regulators, to name a few. A CFPB no-action letter may carry weight in their deliberations, but it is not determinative, which only highlights the need for consistent interagency rules and guidance.

 

One prime example of the NAL Policy in action involves the automated lending platform Upstart. In its own words, Upstart uses technology to identify “high-quality borrowers misunderstood by the FICO system.” In the CFPB’s 2019 Fair Lending Report, the agency provided an update on Upstart’s no-action letter, which was originally issued in 2017. As part of the no-action letter, Upstart agreed to “regularly report lending and compliance information to the CFPB to mitigate risk to consumers and aid the bureau’s understanding of the real-world impact of alternative data on lending decision-making.”

 

The CFPB reported that the Upstart model approves 27% more applicants and yields 16% lower average annual percentage rates for approved loans than the traditional model against which it was compared. “Near-prime” consumers with FICO scores from 620 to 660 are approved approximately twice as frequently, and consumers with yearly incomes under $50,000 are 13% more likely to be approved. The reported expansion of credit access occurs across all tested race, ethnicity and sex segments, and in many segments the results show that the tested model significantly expands access to credit compared with the traditional model. Most importantly, the CFPB declared that the results “show no disparities that require further fair lending analysis under the compliance plan.”

 

How it’s evolving

 

Despite the initial optimism regarding Upstart’s seeming ability to achieve the Herculean task of balancing underwriting innovation with fair lending compliance, concerns and uncertainties remain within the current regulatory regime. Regulatory guidance under the current laws and regulations is shifting to emphasize compliance within the AI or ML framework more strongly, rather than simply promoting innovation. The CFPB’s 2022 guidance on the intersection between AI and ECOA notes that “creditors who use complex algorithms, including AI or ML, in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action.” The guidance warns lenders that their use of complex algorithms in the credit decision process does not shield them from the repercussions of potential regulatory violations.

 

The upshot is that creditors must clearly understand the technological processes employed in their credit decisioning activity, regardless of complexity, in order to meet regulatory requirements. This is a clear departure from the CFPB’s 2020 guidance on how the use of AI or ML models affects ECOA’s adverse action notice requirement. The CFPB has also outlined options to ensure that computer models used in home valuations are accurate and fair.

 

According to the CFPB, these automated valuation models remain susceptible to bias and inaccuracy, posing potential fair lending risks. Further demonstrating the shifting priorities, the five federal financial regulatory agencies requested input from financial institutions and other stakeholders to better understand those institutions’ use of AI. The agencies were particularly concerned with governance, risk management and controls over AI, as well as “challenges in developing, adopting and managing AI.”

 

Mitigating fair lending and AI risk and uncertainty

 

Clearly, a substantial amount of risk and uncertainty results from combining AI with lending. Even so, the future appears to be pulling in the direction of AI, and we remain convinced that AI’s benefits will outweigh its drawbacks in financial services in the long run. In addition to conducting traditional self-testing and self-evaluations such as mystery shopping and comparative file reviews, we recommend the following six tips to help lenders mitigate fair lending risk and uncertainty when using AI to evaluate borrowers:

 

Recognize potential shortcomings and pitfalls when using AI and alternative data for credit decisioning. A well-designed compliance management program provides for a thorough analysis of relevant consumer protection laws and regulations to understand the opportunities, risks and compliance requirements before engaging in the use of alternative data. An effective compliance management program should also routinely consider how new laws (e.g., Dodd-Frank Act Section 1071 on small business lending data reporting) and proposed laws and regulations may affect the institution’s use of AI.

 

Understand the underlying functionality of AI algorithms. Regardless of the credit evaluation method used, whether ML algorithms or more conventional methods, lenders must understand the decisioning process well enough to provide applicants against whom adverse action is taken with an accurate statement of reasons. Lenders should also inventory their operations to establish where ML algorithms are in use. Further, lenders should consider employing a framework of “explainable AI” to reduce the risk of black-box confusion; a minimal sketch of one such approach follows the list below. According to the National Institute of Standards and Technology, explainable AI may be best understood as systems that do all of the following:

  • Deliver accompanying evidence or reason for output
  • Provide explanations understood by individual users
  • Have output that reflects system process for generating the output
  • Operate only under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
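
As noted above, the following is a minimal sketch of what explainability can look like in the adverse action context, assuming a deliberately transparent logistic regression scorecard. The feature names, synthetic data and contribution-ranking method are illustrative assumptions, not a prescribed regulatory methodology:

```python
# A hypothetical sketch: derive "principal reasons" for adverse action from
# a transparent logistic regression scorecard. Real reason-code methods
# vary by lender and model; this is illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["payment_history", "utilization", "income_stability"]

# Synthetic training data: label 1 = repaid, 0 = defaulted
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def principal_reasons(applicant: np.ndarray, n: int = 2) -> list[str]:
    """Rank features by how far they pulled this applicant's score below
    the average applicant's (coefficient * distance from the mean)."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    worst = np.argsort(contributions)[:n]  # most negative contributions
    return [features[i] for i in worst]

# A declined applicant: weak payment history, high utilization
declined = np.array([-1.2, 1.8, -0.3])
print(principal_reasons(declined))  # likely ['utilization', 'payment_history']
```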

Validate AI models for risks of bias. A mature AI program will include metrics for datasets and models to test for biases and to validate against risks that may result in or contribute to inaccurate output. Lenders should gauge whether data inputs are appropriate for the model’s purposes and should use explainable techniques like logistic regression to identify potential protected-class proxy variables; a sketch of such a proxy screen appears below. Consideration may also be given to creating adversarial models that attempt to predict the protected-class biases of the first model. Such checks permit model adjustments as needed and reduce bias. Lenders should use risk management processes to review these data input processes and suppress any protected-class variables and variable interactions. They may also want to establish internal thresholds in the absence of explicit regulatory guidance.
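
The sketch below illustrates the kind of proxy screen just described: train an auxiliary model to predict a protected-class attribute from the candidate credit features, and treat strong predictive power as a sign that some feature or interaction is acting as a proxy. The data, feature names and the 0.6 AUC threshold are assumptions for illustration; as noted, lenders would set their own internal thresholds:

```python
# A minimal proxy screen on synthetic data: if credit features predict a
# protected-class attribute well, some feature is likely acting as a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, size=n)  # e.g., from HMDA data or BISG estimates

# Candidate underwriting features; "geolocation_score" is deliberately
# constructed to correlate with the protected attribute (a hidden proxy).
X = np.column_stack([
    rng.normal(size=n),                    # utilization
    rng.normal(size=n),                    # income_stability
    protected * 1.2 + rng.normal(size=n),  # geolocation_score (proxy)
])

X_tr, X_te, p_tr, p_te = train_test_split(X, protected, random_state=0)
aux = LogisticRegression().fit(X_tr, p_tr)
auc = roc_auc_score(p_te, aux.predict_proba(X_te)[:, 1])

print(f"Protected-class AUC from credit features: {auc:.2f}")
if auc > 0.6:  # illustrative internal threshold set by the lender
    print("Possible proxy variable present; review per-feature coefficients.")
```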

 

Develop a proactive model risk management program with actionable plans and controls as part of the institution’s overarching compliance plan. Strong policies, controls and governance are fundamental to the effective use of AI in the fair lending context. A robust governance framework supports and structures risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for evaluating whether policies and procedures are being carried out as specified. Importantly, regulators will expect the extent and sophistication of a bank’s governance function to align with the extent and sophistication of its model use. Unless anti-discrimination measures are actively built in, fair lending pitfalls should be expected.

 

Keep abreast of AI legal and regulatory developments and stay in close contact with regulators. Although techniques for assessing algorithmic bias continue to improve, the bigger picture remains one of incomplete information. Until regulators issue more definitive, detailed guidance on this subject, even the most advanced modeling and testing techniques could receive regulatory scrutiny if they are not applied or implemented properly in the fair lending context. Through interagency guidance, the agencies provide specific contact information and offer that “firms may choose to consult with appropriate regulators when planning for the use of alternative data.” This implies an expectation of continuing, direct contact with regulators; further, a continuing dialogue may inform regulators’ understanding and policy and ultimately have a direct impact on rulemaking.

 

Reach out to professionals for help and updates. Involve professionals who specialize in compliance management systems for emerging technologies within the fair lending framework, and have them assist with model development, implementation and use. Employ techniques to evaluate data inputs both before and after model processing to assess potential biases and discrimination. Where sensible, pursue a no-action letter to mitigate the regulatory uncertainty surrounding emerging technologies.

 

Gain the benefits, manage the risks

 

AI creates numerous, substantial benefits for both lenders and consumers: it streamlines operational processing, potentially produces better outcomes through more accurate data and invites into the credit process individuals who were credit-invisible under the old regime of manual underwriting and credit bureau reporting. But AI also poses risks due to its complexity, its lack of accountability, its facilitation of covert discrimination via “unprotected inferences” and its potential for perverse results based on flawed input data.

 

Although the most recent regulatory guidance indicates a priority shift from fostering innovation toward regulatory-compliant technological processes, the regulatory agencies (especially the CFPB) have not yet struck the ideal balance between the innovative opportunities of AI and its potential as a tool for malfeasance. Despite the risks and uncertainty AI presents, however, lenders can make AI work well for them long into the future by recognizing the pitfalls of alternative data, understanding the true functionality of their own AI, validating their AI models for bias, developing a robust AI risk management program, keeping abreast of changes from regulators and reaching out to third-party professionals for help and updates.
