- On July 18, 2024, the House Committee on Financial Services' Bipartisan AI Working Group published a report on AI in the financial services and housing markets.
- The report is the culmination of six roundtables with stakeholders from federal agencies and industry, most of whom are eager to continue embracing AI but are concerned about AI’s risks, particularly around discrimination and data privacy.
- The Committee appears focused on increasing oversight of AI in the financial services and housing markets but may be slow to consider legislation, making passage of any AI-related bill by the Committee this Congress very unlikely.
On July 18, 2024, the House Committee on Financial Services’ Bipartisan AI Working Group released “AI Innovation Explored: Insights into AI Applications in Financial Services and Housing.” As AI becomes increasingly commonplace in the financial services and housing markets, the staff report aims to position the Committee to ensure the safe, effective, and efficient use of AI in these markets.
The report describes the findings from the six roundtables that the AI Working Group conducted with relevant stakeholders from federal agencies and industry. Formed in January 2024, the Working Group follows in the footsteps of the Bipartisan Senate AI Working Group, whose AI policy roadmap and proposed legislation we have previously covered. The House Committee’s AI Working Group’s purpose is to “explore how [AI] is impacting the financial services and housing industries,” and how existing and new regulations “consider both the potential benefits and risks associated with AI.”
The report is divided into two sections. The first half details the opportunities, challenges, and risks that relevant stakeholders identified during the six roundtables. The second half details the Committee’s six main takeaways, which include possible legislative and oversight activity.
Industry Stakeholders Identify AI-Related Opportunities, Challenges, and Risks
In early 2024, the AI Working Group convened six roundtables with key stakeholders representing both federal agencies and industry. Across the board, participants were enthusiastic about AI’s potential to drive innovation in their industries but expressed concerns about AI’s risks relating to discrimination and data privacy.
- Federal Regulators – The Working Group held two roundtables with participants from various agencies, including the Federal Reserve Board of Governors (Fed), the Federal Deposit Insurance Corporation (FDIC), the Securities and Exchange Commission (SEC), the Department of Housing and Urban Development (HUD), and the Consumer Financial Protection Bureau (CFPB).
- Opportunities – Participants from federal agencies explained how AI is used to enforce and ensure compliance with their rules, as well as how the businesses that they regulate use AI for compliance purposes.
- Risks – First, regulators from several agencies expressed “concerns that AI could lead to bias and discrimination and make it harder to deter such outcomes.” During the session, the CFPB “clarified that the use of AI is considered a violation of the Equal Credit Opportunity Act (ECOA) if a lender is unable to explain an adverse outcome using AI.” Second, participants expressed concerns about data privacy and the “quality of input data of Gen AI.” Finally, participants from the Fed “[highlighted] the problematic nature of having a ‘monoculture of models,’ whereby financial institutions all use the same third-party providers.”
- Capital Markets – In March, the Working Group held a roundtable with stakeholders in capital markets, who generally “stated they are taking a measured approach to implementing AI technology in certain aspects of their businesses” and are “still in the early stages.”
- Opportunities – While many participants use AI for “internal, nonpublic-facing aspects of their business,” many “are beginning to deploy the technology in other use cases, including public-facing use cases.”
- Risks – First, for many participants, the popularity of certain AI models is a source of concern because certain models, when employed by many players, may make “the same or similar decisions at the same time,” leading to a domino effect or “herd-like behavior in capital markets” that may result in abrupt changes to the stock market. Second, participants identified “AI washing” – when a company “[makes] unfounded claims exaggerating the capabilities of a product or service that is sold as ‘AI’” – as an increasingly common occurrence.
- Housing and Insurance – The Working Group also held a roundtable with stakeholders in the housing and insurance sectors, which have already seen a “major shift in housing and insurance products and services” due to AI.
- Opportunities – For many participants, AI is viewed as an innovative tool for expanding access to credit for more diverse groups of people. AI and advancements in technology also make it possible to examine larger amounts of data, including data that is not traditionally considered by insurance brokers, mortgage lenders, and credit underwriters.
- Risks – First, “in light of historical segregation and discrimination in the housing sector,” many participants discussed concerns that AI may “exacerbate biased or discriminatory outcomes.” Second, many participants expressed concerns around “inadequate, improperly sourced data and consumer privacy,” as well as concerns that AI models may produce “nonsensical or erroneous outputs.”
- Financial Institutions and Nonbank Firms – In May, the Working Group hosted a roundtable with stakeholders from depository institutions and nonbank financial firms who use AI, focusing on specific use cases and applications at different financial institutions.
- Opportunities – According to many participants, AI has the potential to improve every aspect of banking services, “from loan origination to customer service.” AI may also help prevent discrimination and expand credit opportunities for a more diverse set of borrowers.
- Risks – Many participants expressed concerns about applying AI models in a compliant manner, while avoiding any discriminatory practices and upholding privacy and cybersecurity standards.
- Challenges – The participants identified compliance with risk management guidance as a key challenge they face. Smaller institutions, which often rely on third-party service providers of AI models, may face additional risk and compliance problems, which may be more difficult for these institutions to resolve given their limited resources and bandwidth.
- National Security and Illicit Finance – Lastly, the Working Group held a roundtable “to explore the ways in which AI can impact national security through the financial system,” with participants from a cloud-native cybersecurity firm, a core infrastructure provider for banks, and an AI-powered risk and compliance firm, among others.
- Opportunities – AI is already being used to “detect unusual or suspicious activities in transactions” using larger data sets and analysis that may be “impossible for humans to perform without such augmentation.” AI models, according to some participants, could also create a transaction monitoring system that continuously learns from prior transactions.
- Risks – AI “has armed criminals with a new tool,” which has led to more sophisticated attacks in the financial services sector, according to the participants.
- Challenges – Participants noted the challenge of governing internal AI systems. Many small institutions, in particular, may lack the “bandwidth or resources to meaningfully incorporate AI into their operations.”
Committee Takeaways
Based on the roundtables, the Working Group articulated six takeaways, outlining potential future legislative and oversight activity:
- The Committee should “[oversee] the adoption of AI in the Financial Services and Housing Industries,” with a focus on “[ensuring] integral consumer and investor protections” and preventing bias and discrimination in AI models and decisions made based on such models.
- The Committee should ensure that “regulators apply and enforce existing laws, including anti-discrimination laws, and assess regulatory gaps as market participants adopt AI.”
- The Committee should examine whether “financial regulators have the appropriate focus and tools to oversee new AI products and services.”
- The Committee should “consider how to reform data privacy laws given the importance of data, especially consumer data, to AI.” The report specifically calls out the Gramm-Leach-Bliley Act and the Fair Credit Reporting Act as two laws that may be updated to strengthen data privacy.
- The Committee should examine how AI impacts the workforce, focusing on cases where AI models “can address certain tasks” and allow workers to “focus on other priority projects.”
- The Committee should “ensure US global leadership on AI development and use,” “in light of efforts by authoritarian governments like China to use AI” to curb democracy.
Conclusion
On July 23, the Committee held a hearing on the report with leaders in the financial and housing industries. During the hearing, many members indicated their intent to increase oversight of AI in the financial services and housing markets.
However, with the House out of session until after Labor Day in this election year, the prospect of any legislative activity by the Committee this Congress is remote. While members of the Committee have introduced or supported four AI-related bills for the financial and housing markets, time may have already run out for this Congress to pass any AI legislation, and the Committee appears intent on taking its time to craft appropriate, high-quality legislation. “We should be leery of rushing legislation,” Committee Chairman Patrick McHenry (R-NC) said in his opening remarks at the hearing. “It’s far better we get this right, rather than be first. In other words, policymakers should measure twice and cut once.” His remarks signal that, at the earliest, AI-related legislation is unlikely to move toward passage before the next Congress.
Matthew Tikhonovsky, a Mintz Senior Project Analyst based in Washington, DC, also contributed to this article.