Artificial Intelligence, the SEC, and What the Future May Hold
Monday, November 13, 2023

With the growing use of artificial intelligence (AI) in financial markets, broker-dealers and investment advisers need to pay attention to the risks AI poses to firms’ compliance with the federal securities laws. Machine learning is already integrated into today’s financial ecosystem, including call centers, compliance systems, robo-advisers, and algorithmic trading, and AI systems, generative AI systems in particular, are developing rapidly.

The combination of AI’s development and integration into our financial markets presents a variety of unique opportunities and challenges for market participants and regulators. With regulatory interest in AI high, broker-dealers and investment advisers should examine whether their existing compliance programs address AI-related risks adequately.

What is Artificial Intelligence?

With so much coverage of AI in 2023, it may seem strange to ask: What is AI? AI is an amorphous term that has been around since the mid-20th century. At its core, AI is a machine or computer program that mimics human intelligence and cognition. Most of us are familiar with the “expert system” form of AI that rose to prominence in the 1980s. Expert systems are programs that solve problems or answer questions by relying on a system of rules coded into the program by experts. While these systems emulate human decision-making, they are confined by the limits of their programmed knowledge base. Expert systems are used in medical diagnosis, computer gaming, accounting software, and more; they are useful for specialized tasks but not suited to adaptive problem solving.

Machine and deep learning are more recent forms of AI. Machine learning is AI in which the machine learns to solve certain problems on its own, with little or no reliance on human-developed rules. This has an obvious advantage over expert systems: machine learning does not depend on a programmed knowledge base, and it does not require the laborious hand-coding of rules by human programmers. Machine learning-based AI, however, may require human intervention to learn to differentiate between data inputs, much as a website may prompt you to pick out the bikes in a series of photos to verify you are human. These models follow three primary forms of learning: supervised, unsupervised, and reinforcement. Under the supervised learning model, human operators train the model with pre-classified datasets to teach it to classify data or predict outcomes. Under the unsupervised learning model, the machine analyzes unclassified datasets to discover patterns or outcomes without human intervention. Finally, in reinforcement learning, the machine is trained through trial and error, being rewarded for reaching a pre-determined action or outcome.
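To make the supervised learning model concrete, the following minimal Python sketch “trains” a toy classifier on pre-classified examples and then predicts the label of a new data point. The features and labels are hypothetical; real systems use far richer data and models.

```python
# Minimal illustration of supervised learning: a nearest-centroid
# classifier "trained" on pre-labeled examples (hypothetical data).

def train(examples):
    """Compute the average (centroid) of the examples for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new data point."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: distance(centroids[label]))

# Pre-classified training data: (daily return %, volatility %) -> label
training_data = [
    ((1.2, 0.5), "growth"), ((0.9, 0.4), "growth"),
    ((0.1, 2.1), "speculative"), ((-0.2, 2.5), "speculative"),
]
model = train(training_data)
print(predict(model, (1.0, 0.6)))   # -> "growth"
```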

Deep learning is a more advanced form of machine learning. It relies on neural networks, composed of layers of nodes that emulate neurons sending signals in a brain. A network has at least three layers: an input layer, one or more hidden layers, and an output layer. The nodes receive data, analyze it, and formulate an output, allowing deep learning AI to answer more complex questions than prior forms of AI. Deep learning can digest raw data without human intervention to help it differentiate between data inputs, which gives it an advantage over ordinary machine learning, especially when working with large datasets. Users can also combine deep learning with reinforcement learning to analyze large datasets and optimize outcomes from that data. As AI improves, so-called deep reinforcement learning systems can more closely emulate the reward-seeking behaviors of the human brain. This promises both exciting and frightening possibilities for regulated entities and the financial markets.
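The node-layer structure described above can be sketched in a few lines of Python. The weights below are random placeholders; in a real deep learning system they would be learned from training data.

```python
# Minimal sketch of a feed-forward neural network: an input layer,
# one hidden layer, and an output layer. Weights are illustrative
# placeholders; in practice they are learned from training data.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(seed=0)
W_hidden = rng.normal(size=(3, 4))   # 3 inputs -> 4 hidden nodes
W_output = rng.normal(size=(4, 1))   # 4 hidden nodes -> 1 output node

def forward(inputs):
    """Pass data through the layers: each node combines its inputs,
    applies a nonlinearity, and sends the signal onward."""
    hidden = relu(inputs @ W_hidden)    # hidden layer activations
    return sigmoid(hidden @ W_output)   # output layer: a score in (0, 1)

print(forward(np.array([0.2, -1.0, 0.5])))
```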

Current AI technology is limited to artificial narrow intelligence (ANI), which is designed to perform a single task or a narrow set of tasks. Although we have witnessed tremendous progress in machine and deep learning, AI is still limited to performing certain tasks with increasing proficiency. Take, for example, OpenAI’s ChatGPT. ChatGPT is a generative, multimodal model, but it is limited to producing natural-language responses. And, as lawyers should know, ChatGPT can occasionally produce unreliable results.

Research organizations like OpenAI are working toward true artificial general intelligence: systems designed to accomplish any intellectual task a human can perform. Beyond this, AI may eventually surpass human intelligence and achieve artificial superintelligence. In 20 years, today’s generative AI models will likely look as primitive as an AIM chatbot.

Artificial Intelligence in Financial Markets

Broker-dealers and investment advisers have long used AI tools in the financial markets. AI-based applications have proliferated across operational, compliance, and administrative functions, customer outreach, and portfolio management. Chatbots, for example, provide efficient and easily accessible assistance to clients, and robo-advisers analyze markets and provide investment recommendations to investors.

Trading models are another popular area where firms deploy AI technology in the financial markets. Quantitative traders have used algorithmic models to identify investments or trade securities since the 1970s. Increasingly, however, broker-dealers and investment advisers utilize more advanced machine learning for these purposes. Several firms have introduced AI-powered robo-advisers to investors. For instance, JPMorgan has publicly announced its development of “IndexGPT,” an AI adviser that analyzes and selects securities for individual investors’ portfolios.
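For a sense of what a hand-coded trading model looks like, here is a minimal Python sketch of a classic moving-average crossover rule, with hypothetical prices. Machine learning-based systems replace fixed rules like this with patterns learned from data.

```python
# A classic hand-coded trading rule: signal "buy" when the short-term
# moving average of prices rises above the long-term average. Prices
# below are hypothetical.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' based on the moving-average spread."""
    if len(prices) < long:
        return "hold"   # not enough price history yet
    spread = moving_average(prices, short) - moving_average(prices, long)
    if spread > 0:
        return "buy"
    if spread < 0:
        return "sell"
    return "hold"

prices = [100.0, 101.5, 102.0, 101.0, 103.5, 105.0]
print(crossover_signal(prices))   # -> "buy" for this rising series
```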

Risks of Artificial Intelligence

Recent advances in AI raise novel risks for broker-dealers and investment advisers. These risks include, but are not limited to, the following:

  • Conflicts of Interest. The risk of new or previously identified conflicts of interest may increase with the use of AI, especially robo-advisers and chatbots. AI programs could make investment recommendations that are more profitable to the firm than to the investor. Firms may not even understand how or why an AI program makes those recommendations: given the sophistication of modern AI, users and programmers alike may not fully understand the program’s decision-making process. The risk of a conflicted output rises when the basis for a recommendation is not readily transparent and explainable.
  • Market Manipulation. AI trading programs powered by machine and deep learning may learn how to manipulate markets. A program designed to achieve profits without limitations, or with faulty limitations, may learn to manipulate markets, for example, by “spoofing” the market or executing “wash sales.” Trading algorithms already contributed to the 2010 “flash crash.” It is not far-fetched for AI to begin manipulating markets in the near future.
  • Deception. Generative AI is already notorious for creating “deepfakes”: realistic images, videos, or audio based on real people. Bad actors can use these digital forgeries to wreak havoc in the financial markets, for example, by imitating market leaders and delivering fake news. Deepfakes can also target specific persons, imitating superiors or customers to seemingly authorize actions regarding a customer’s account.
  • Fraud. As AI becomes more integrated into firms’ investment recommendations and trading decisions, there is a greater risk that bad actors will use confidential customer trading data for their own ends. For example, bad actors might build a proprietary AI trading program that uses customer trading data to front-run, or trade ahead of, potentially market-moving trades.
  • Data Privacy. AI programs have access to wide swaths of data, potentially including personal customer data, which they analyze to learn, make decisions, or determine outcomes. The collection and analysis of so much personal data raise concerns about how that data is used and who has access to it.
  • Discrimination. The potential for unfair treatment, bias, and discrimination is ripe when AI learns from human data. When learning from data saturated with historical biases or racism, AI is likely to pick up biases of its own. This phenomenon is already well documented: a 2018 study, for instance, found that facial recognition programs performed poorly on people of color.1 Whether in chatbot interactions, hiring decisions, or investment recommendations, an AI’s own bias could cloud its judgment and skew outcomes in unlawful or undesirable ways. (A simple disparity check of the kind used in that study is sketched after this list.)
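To make the discrimination risk concrete, the following minimal Python sketch performs the kind of check the 2018 study formalized: comparing a model’s error rate across demographic groups. All records below are hypothetical.

```python
# Simple fairness check: compare a model's error rate across groups.
# A large gap between groups is a red flag worth investigating.
# The predictions and labels below are hypothetical.

def error_rate_by_group(records):
    """records: (group, predicted_label, true_label) tuples."""
    errors, totals = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {group: errors.get(group, 0) / totals[group] for group in totals}

records = [
    ("group_a", "approve", "approve"), ("group_a", "deny", "deny"),
    ("group_a", "approve", "approve"), ("group_a", "approve", "deny"),
    ("group_b", "deny", "approve"), ("group_b", "deny", "approve"),
    ("group_b", "approve", "approve"), ("group_b", "deny", "deny"),
]
print(error_rate_by_group(records))   # e.g. group_a: 0.25, group_b: 0.50
```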

Safeguarding Against AI-Related Risks

Broker-dealers and investment advisers are subject to a variety of regulations implicated by the use of AI. Their advice and recommendations must be in the best interests of their clients, and they cannot place their own interests ahead of investors’ interests.2 They also have overarching obligations to adopt and implement written policies and procedures reasonably designed to prevent violations of the federal securities laws,3 as well as an obligation to safeguard customer records and data.4 Further, the federal securities laws prohibit fraudulent conduct by broker-dealers and investment advisers.

SEC Actions

The U.S. Securities and Exchange Commission (SEC) has already taken note of the risks posed by AI. On July 26, 2023, the SEC proposed new rules to address the risk that AI utilizing predictive data analytics will place a firm’s interests ahead of investors’ interests.5 Under the proposed rule, broker-dealers and investment advisers would have to evaluate any use, or reasonably foreseeable potential use, of “covered technologies” in any investor interaction; identify conflicts of interest where the use of the covered technology would place the firm’s interests ahead of investors’; and eliminate or neutralize the effect of those conflicts. The SEC defined a covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes.”6 This covers a wide swath of AI technologies already in use, such as tools that analyze investor behavior to provide curated research reports or AI that provides tailored investment recommendations.

The SEC’s Division of Examinations is not waiting for this proposed rule to be finalized. As discussed in its 2024 Examination Priorities, the Division has already established specialized teams to address emerging issues and risks in this area. The 2024 Examination Priorities specifically advise SEC registrants, such as broker-dealers and investment advisers, as follows:

We also established specialized teams within our different examination programs, allowing us to better address emerging issues and risks associated with crypto assets, financial technology, such as artificial intelligence, and cybersecurity, among others. Finally, we continued to strengthen our leadership team by bringing onboard a number of key senior and advisory positions and building additional capacity in various examination programs to keep pace with the rapidly developing market ecosystem consistent with Congress’ fiscal year 2023 appropriation. (Emphasis added.)7

The Division of Examinations continued:

The Division remains focused on certain services, including automated investment tools, artificial intelligence, and trading algorithms or platforms, and the risks associated with the use of emerging technologies and alternative sources of data.

SEC Chair Gary Gensler has pursued aggressive examination, enforcement, and regulatory agendas. By all indications, he intends to treat the intersection of AI and federal securities laws no differently. His speech nine days before the proposed rule was released revealed that he is as well-versed in this area (and in technology generally) as any leader of a U.S. financial regulator.8

Looking Forward

Given the current and proposed regulatory framework, it is vital for broker-dealers and investment advisers to have a firm understanding of the AI tools they use and to implement appropriate policies and procedures for those tools. Firms should not wait to assess their use of AI, including future use, and to put guardrails in place to ensure that customers are protected and that regulatory expectations are satisfied.

Firms should begin by assessing what AI technology they actually use or plan to use. Once that assessment is complete, they should evaluate whether such use presents any conflicts of interest, potential customer harm, or violations of applicable rules and regulations. Firms should also consider keeping an inventory of all the AI applications they use, the risks posed by each application, and the mitigating controls that address each risk.
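One way to maintain such an inventory, sketched here in Python with hypothetical entries, is a structured record for each AI application listing its risks and the controls that mitigate them.

```python
# Minimal sketch of an AI-application inventory: each entry records
# the tool, the risks it poses, and the mitigating controls.
# All entries below are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class AIApplication:
    name: str
    purpose: str
    risks: list = field(default_factory=list)
    controls: list = field(default_factory=list)

inventory = [
    AIApplication(
        name="client-chatbot",
        purpose="Answer routine customer questions",
        risks=["inaccurate responses", "data privacy"],
        controls=["human escalation path", "no access to account data"],
    ),
    AIApplication(
        name="robo-adviser",
        purpose="Generate investment recommendations",
        risks=["conflicts of interest", "opaque decision-making"],
        controls=["periodic testing"],
    ),
]

# Flag any application whose risks outnumber its mitigating controls.
for app in inventory:
    if len(app.risks) > len(app.controls):
        print(f"Review needed: {app.name}")
```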

Next, firms should implement, and periodically review, written policies and procedures that address AI governance and the regulatory risks posed by AI. Existing policies and procedures may be similarly enhanced to address AI-related conflicts of interest, potential customer harm, and potential regulatory violations. For example, firms may decide to be deliberate and intentional about adopting new AI systems, explicitly requiring review and assessment of such systems before personnel are permitted to use them. Supervision by cross-functional teams and periodic testing are also helpful for understanding how AI systems are performing.

Firms should also consider reviewing their contracts with customers to assess whether the firm or its vendors have the requisite rights to use or share customer data. Separately, firms may want to evaluate their vendor contracts to see what protections they have with respect to the vendor’s use of data shared by the firm and the services received from the vendor.

For robo-advisers and AI tools that provide investment recommendations or advice, firms should pay particular attention to the “explainability” of the AI’s recommendations. As AI becomes more advanced, its decision-making process may become correspondingly more opaque. As evidenced by the Biden Administration’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, auditability, transparency, and explainability of AI systems (whether or not developed by the user) are all critical aspects of appropriate AI development and use.9 Accordingly, it is important to try to understand the AI decision-making process and to implement appropriate guardrails where needed. This could include periodic testing of robo-advisers, human oversight of recommendations, or limitations on the recommendations they may make. It is similarly important for firms to ensure that AI technology is not placing the firm’s interests ahead of investors’ interests. Testing and review will help broker-dealers and investment advisers maintain compliance with federal securities laws and stave off the risks of significant examination findings, referrals to the SEC’s Division of Enforcement, costly litigation, and the corresponding reputational damage to firms and their stakeholders.
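To illustrate what “explainability” can mean in practice, the following minimal sketch uses a simple linear scoring model (with hypothetical features and weights) in which each input’s contribution to a recommendation can be read off directly. Deep learning models generally do not offer this transparency, which is why testing and human oversight matter.

```python
# Explainability sketch: for a simple linear scoring model, each
# feature's contribution to a recommendation can be read off directly.
# Features and weights below are hypothetical.

weights = {"momentum": 0.6, "valuation": -0.3, "volatility": -0.1}

def explain_recommendation(features):
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, parts = explain_recommendation(
    {"momentum": 1.2, "valuation": 0.4, "volatility": 2.0}
)
print(f"score={score:.2f}")
for name, contribution in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
```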

Firms should also prioritize safeguarding and monitoring how AI systems use customer data: adopt policies and procedures to ensure that AI tools do not misappropriate customer data for the firm’s own use in trading, and restrict who has access to that data. Organizations should also consider updating their written policies and procedures to reflect what customer data they collect, how that data is used and shared, and whether appropriate customer consent has been obtained. Finally, regardless of AI use, firms must safeguard against cybersecurity breaches.
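A minimal sketch of one such access-restriction control, with hypothetical system names and data: a gatekeeper function that releases customer data only to approved systems and logs every request.

```python
# Minimal sketch of restricting which systems can read customer data:
# a gatekeeper that checks an allow-list and logs every access.
# System names and data below are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
ALLOWED_READERS = {"compliance-review", "client-chatbot"}

def read_customer_data(system_name, customer_id, store):
    """Return customer data only to approved systems, logging each request."""
    logging.info("access request: system=%s customer=%s", system_name, customer_id)
    if system_name not in ALLOWED_READERS:
        raise PermissionError(f"{system_name} may not read customer data")
    return store[customer_id]

store = {"cust-001": {"holdings": ["ABC", "XYZ"]}}
print(read_customer_data("client-chatbot", "cust-001", store))
# read_customer_data("prop-trading-model", "cust-001", store)  # raises PermissionError
```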

The growth of AI in the financial markets will draw ever more regulatory attention to its use. Accordingly, broker-dealers and investment advisers should begin assessing their use of AI, including future use, and put guardrails in place to ensure that their customers are protected.

 


1 See Buolamwini, J. and Gebru, T., “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research, 2018.

2 See Regulation Best Interest: The Broker-Dealer Standard of Conduct, Exchange Act Release No. 86031, 84 FR 33318 (June 5, 2019); Commission Interpretation Regarding Standard of Conduct for Investment Advisers, Investment Advisers Act Release No. 5248, 84 FR 33669 (June 5, 2019).

3 See Compliance Programs of Investment Companies and Investment Advisers, Release No. IA-2204, 68 FR 74713 (Dec. 24, 2003); FINRA Rule 3110.

4 See Privacy of Consumer Financial Information (Regulation S-P), Exchange Act Release No. 42974, 65 FR 40333 (June 22, 2000).

5 Proposed Rule, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, Exchange Act Release No. 97990 (July 26, 2023).

6 Id. at 42.

7 See “2024 Examination Priorities,” U.S. Securities and Exchange Commission, Division of Examinations (Oct. 15, 2023).

8 See Gensler, G., “‘Isaac Newton to AI’: Remarks Before the National Press Club,” U.S. Securities and Exchange Commission (July 17, 2023).

9 Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).
