In its proposed disparate impact rule published in the Federal Register, HUD sets forth a framework for making (and defending against) claims of disparate impact under the Fair Housing Act. One novel aspect of the proposed rule, its treatment of mathematical models (such as the risk-scoring models used in the credit industry), warrants a closer look.
In proposed section 100.500(c)(2), HUD provides three avenues of defense when a disparate impact claim is based on the use of a “model … such as a risk assessment algorithm.” Under the proposed rule, a defendant prevails if it can establish any one of the following:
- If the defendant “[p]rovides the material factors which make up the inputs used in the challenged model and shows that these factors do not rely in any material part on factors which are substitutes or close proxies for protected classes under the Fair Housing Act and that the model is predictive of credit risk or other similar valid objective.”
- If the defendant “[s]hows that the challenged model is produced, maintained, or distributed by a recognized third party that determines industry standards, the inputs and methods within the model are not determined by the defendant, and the defendant is using the model as intended by the third party.”
- If the defendant “[s]hows that the model has been subjected to critical review and has been validated by an objective and unbiased neutral third party which has analyzed the challenged model and found that the model was empirically derived and is a demonstrably and statistically sound algorithm which accurately predicts risk or other valid objectives, and that none of the factors used in the algorithm rely in any material part on factors which are substitutes or close proxies for protected classes under the Fair Housing Act.”
Overall, the proposed rule appears to provide a relatively straightforward path for review of FHA disparate impact claims involving scoring models. For a model developed by the defendant itself (or for which it has information on attributes and performance), the first and third defenses would require only a showing that the defendant (or an objective and neutral third party) examined the variables in the model, that none of them is a “substitute” or “close proxy” for a protected characteristic, and that the model as a whole “is predictive of credit risk or other similar valid objective.” Note that the assessment of predictiveness applies to the model as a whole, not to any individual variable. That would make fair lending testing considerably easier than a requirement that each attribute be shown to be predictive.
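To make that distinction concrete, the sketch below shows one way an analyst might structure these two showings. It is purely illustrative: the input names, the proxy threshold, and the synthetic data are our own assumptions, and the proposed rule neither defines “close proxy” nor prescribes any particular statistical test.

```python
# Illustrative sketch only. One plausible structure for the two showings under
# the first defense; nothing here is prescribed by the proposed rule.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant data: model inputs plus a protected-class indicator
# (used here only for testing, never as a model input).
df = pd.DataFrame({
    "debt_to_income":  rng.normal(0.35, 0.10, n),
    "payment_history": rng.integers(0, 10, n),
    "loan_to_value":   rng.normal(0.80, 0.15, n),
    "protected_class": rng.integers(0, 2, n),
})

# Showing 1: no input is a "substitute or close proxy" for the protected class.
# One simple (and debatable) operationalization: flag any input whose
# correlation with the protected-class indicator exceeds a chosen threshold.
PROXY_THRESHOLD = 0.30  # hypothetical; the rule sets no numeric standard
for col in ["debt_to_income", "payment_history", "loan_to_value"]:
    r = df[col].corr(df["protected_class"])
    flag = "POSSIBLE PROXY" if abs(r) > PROXY_THRESHOLD else "ok"
    print(f"{col:16s} corr with protected class = {r:+.3f}  [{flag}]")

# Showing 2: the model as a whole "is predictive of credit risk."
# Synthetic outcomes loosely tied to debt-to-income so the sketch has signal.
p_default = 1 / (1 + np.exp(-10 * (df["debt_to_income"] - 0.35)))
default = (rng.random(n) < 0.2 * p_default).astype(int)

# The test runs on the model's overall output (a hypothetical linear score),
# not on each input separately, matching the rule's model-level phrasing.
score = df["debt_to_income"] + 0.5 * df["loan_to_value"] - 0.02 * df["payment_history"]
print(f"Model-level AUC vs. observed defaults: {roc_auc_score(default, score):.3f}")
```

The structure mirrors the rule's two-part language: each input is screened individually only for proxy status, while predictiveness is assessed once, on the model's output against observed outcomes.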
The second defense is notable because it appears aimed at situations in which the defendant uses a model provided by a third party and may not have access to information about the model's attributes or performance. If use of the model is an “industry standard,” the defendant is relieved of liability so long as it uses the model “as intended by the third party” that created it. This defense could apply in a variety of situations, including when a mortgage lender uses the automated underwriting systems of Fannie Mae or Freddie Mac.
Although the thrust of these provisions is evidently to provide a streamlined path for defendants to address disparate impact claims based on algorithmic models, their phrasing still leaves room for questions (and later interpretation). For example, the proposed rule does not define a “substitute” or “close proxy” for a protected characteristic, which invites divergent views on whether a particular model input is permissible. In addition, whether a model is provided by a “recognized third party that determines industry standards” is ambiguous. Would any scoring product offered by a large, national provider qualify? Or is something more required to show that a model is an “industry standard”?
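As a hypothetical illustration of why the missing definition matters, the sketch below constructs an input whose relationship to a protected class is nonlinear: a simple correlation test would pass it, while a separation test on the same data would flag it. Which test a court or regulator would credit is exactly the question the proposed rule leaves open. The data and both tests are our own assumptions, not anything drawn from the rule.

```python
# Hypothetical illustration: two plausible "close proxy" tests can disagree
# about the same input. Thresholds and metrics are ours, not HUD's.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 10_000
protected = rng.integers(0, 2, n)

# Equal group means but very different spread: a purely nonlinear relationship.
x = np.where(protected == 1, rng.normal(0.0, 3.0, n), rng.normal(0.0, 0.5, n))

# Test A: linear correlation is near zero, so the input looks harmless.
print(f"Pearson correlation with protected class: {np.corrcoef(x, protected)[0, 1]:+.3f}")

# Test B: |x| alone separates the two groups well, which looks like a proxy.
print(f"AUC of |x| predicting protected class:    {roc_auc_score(protected, np.abs(x)):.3f}")
```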
We believe the treatment of models under the proposed HUD rule is a step in the right direction, but the final rule would be clearer if these ambiguities were resolved. Regardless, the proposed rule would make it difficult for plaintiffs to advance disparate impact claims based on models, because it puts the focus where it should be: on whether the models directly discriminate on the basis of a protected characteristic and whether they are predictive of credit risk or serve another valid business objective.