New York State Department of Financial Services Releases Guidance on Combating Cybersecurity Risks Associated With AI
Tuesday, November 12, 2024

On October 16, 2024, the New York State Department of Financial Services (NYDFS) released guidance highlighting the cybersecurity risks associated with artificial intelligence (AI) and how covered entities regulated by NYDFS can mitigate those risks.

Quick Hits

  • The New York State Department of Financial Services (NYDFS) issued guidance explaining how covered entities should use the existing cybersecurity regulations to assess and address the cybersecurity risks associated with AI.
  • The guidance presents four “concerning threats” associated with AI: two stem from threat actors’ use of AI, and two from covered entities’ own use of AI.
  • The guidance discusses how covered entities can use the framework in the NYDFS cybersecurity regulations to combat the enhanced risks created by AI, including with respect to risk assessments, vendor management, access controls, employee training, monitoring AI usage, and data management.

In response to inquiries about how AI affects cybersecurity risk, the NYDFS released guidance to address the application of the cybersecurity regulations to risks associated with AI. Although the guidance is directed to financial institutions and other covered entities regulated by the NYDFS, it serves as a useful guide for companies in any industry.

Cybersecurity Risks Associated With AI

The guidance presents two primary ways that threat actors have leveraged AI to increase the efficacy of cyberattacks:

  • AI-Enabled Social Engineering. Threat actors are relying on AI to create deepfakes (i.e., artificial or manipulated audio, video, images, and text) that target authorized users via email, telephone, or text message. These AI-enhanced deepfakes have successfully convinced authorized users to divulge nonpublic information about themselves and their businesses and to wire funds to fraudulent accounts.
  • AI-Enhanced Cyberattacks. Because AI can scan and analyze vast amounts of information much faster than humans, threat actors are using AI to amplify the potency, scale, and speed of cyberattacks. With the help of AI, threat actors can (a) access more systems at a faster rate; (b) deploy malware and access and exfiltrate sensitive information more effectively; and (c) accelerate the development of new malware variants and ransomware to avoid detection within information systems.

The guidance also presents two risks caused by a covered entity’s use of AI:

  • Exposure or Theft of Vast Amounts of Nonpublic Information. AI products used by businesses collect and process substantial amounts of data, which may include nonpublic information and biometric data. As a result, threat actors have a greater incentive to target covered entities using AI because these entities maintain this larger data pool.
  • Increased Vulnerabilities Due to Third-Party, Vendor, and Other Supply Chain Dependencies. Covered entities often work with outside vendors and third-party service providers for AI products, and may permit these third-party vendors to collect vast amounts of company information. Each third party in this information supply chain is a potential vulnerability and could expose an entity’s nonpublic information in the event of a cybersecurity incident.

Suggested Actions for Protecting Against AI Threats

The NYDFS guidance outlines how covered entities can apply the existing cybersecurity regulations to address key threats associated with AI. These actions are not required, but they are meant to help covered entities both comply with the cybersecurity regulations and mitigate the heightened cybersecurity risks posed by AI:

  • Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans. When covered entities design their risk assessments, they should address AI-related risks in the following areas: the organization’s own AI use; the AI technologies used by third-party service providers and vendors; and any potential vulnerabilities stemming from AI applications that could pose a risk to the confidentiality, integrity, or availability of the entities’ systems or nonpublic information.
  • Third-Party Service Provider and Vendor Management. When conducting due diligence on third-party service providers, the NYDFS recommends that covered entities consider the threats facing third-party service providers from AI and AI-enabled products or services; how those threats, if exploited, could impact the covered entity; and how the third-party service providers protect themselves from such threats. Entities should consider (i) requiring third-party service providers to notify the covered entity of any AI-related cybersecurity event; and (ii) incorporating appropriate representations or warranties in agreements with third-party service providers.
  • Access Controls. The NYDFS cautions that some multifactor authentication (MFA) methods are more vulnerable to AI deepfakes and other AI-enhanced attacks. The NYDFS discourages covered entities from using certain common MFA methods and encourages covered entities to consider (i) employing technology with liveness detection or texture analysis to verify that a print or other biometric factor comes from a live person; or (ii) using more than one biometric modality at the same time, such as a fingerprint in combination with iris recognition, or a fingerprint in combination with user keystroke and navigational patterns.
  • Cybersecurity Training. The guidance presents numerous ways to improve cybersecurity training, including training on how threat actors are using AI in social engineering attacks, how AI is being used to facilitate and enhance cyberattacks, and how AI can be used to improve cybersecurity. If the company is deploying AI tools, the NYDFS encourages covered entities to train relevant personnel on how to secure and defend AI systems from cybersecurity attacks, how to design and develop AI systems securely, and how to use AI-powered applications without disclosing nonpublic information.
  • Monitoring. Covered entities that use AI-enabled products or services, or that allow personnel to use AI applications such as ChatGPT, should consider monitoring for unusual search behaviors that might indicate an attempt to extract nonpublic information, and blocking searches from personnel that might expose nonpublic information to a public (or open) AI product or system (an illustrative sketch of such a control follows this list).
  • Data Management. Covered entities should implement data minimization practices that include data used for AI purposes. If a covered entity uses AI or relies on a product that uses AI, it should identify all information systems that use or rely on AI, implement appropriate controls, maintain an inventory of all such systems, and prioritize mitigation measures for those systems that are critical to ongoing business operations (a minimal inventory sketch also appears after this list).
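
As a rough illustration of the monitoring-and-blocking control described above (and not anything prescribed by the NYDFS guidance), the following Python sketch screens outbound prompts bound for a public AI service against a few hypothetical nonpublic-information patterns, blocking and logging anything that matches. The pattern list and the screen_prompt and submit_to_public_ai helpers are assumptions made for illustration; a real deployment would draw on the organization's own data classification rules and dedicated data loss prevention tooling.

    import re

    # Hypothetical patterns for nonpublic information; real deployments
    # would use the organization's own data classification rules and a
    # dedicated data loss prevention tool, not a handful of regexes.
    NPI_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "account_number": re.compile(r"\b\d{10,16}\b"),
        "credentials": re.compile(r"(?i)\b(password|api[_ ]?key)\s*[:=]"),
    }

    def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
        """Return (allowed, names of matched patterns) for an outbound prompt."""
        matches = [name for name, pattern in NPI_PATTERNS.items()
                   if pattern.search(prompt)]
        return (not matches, matches)

    def submit_to_public_ai(prompt: str) -> None:
        allowed, matches = screen_prompt(prompt)
        if not allowed:
            # Block the request and record the event for security review,
            # consistent with the "monitor and block" approach above.
            print(f"Blocked and logged: prompt matched {matches}.")
            return
        print("Prompt passed screening; forwarding to the AI service.")

    submit_to_public_ai("Summarize client 123-45-6789's account history.")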
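
The AI system inventory contemplated by the Data Management item can likewise start small. In the sketch below, the record fields and system names are illustrative assumptions rather than NYDFS requirements; the point is that a simple inventory lets a covered entity sort business-critical AI systems to the front of the mitigation queue.

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        # Illustrative fields: which nonpublic data the system touches,
        # which controls already apply, and whether it is business-critical.
        name: str
        uses_ai: bool
        data_categories: list[str]
        controls: list[str]
        business_critical: bool

    inventory = [
        AISystemRecord("fraud-scoring", True, ["account data"], ["MFA"], True),
        AISystemRecord("chat-assistant", True, ["customer PII"], [], False),
    ]

    # Surface business-critical AI systems first when planning mitigations,
    # mirroring the guidance's emphasis on systems critical to operations.
    for record in sorted(inventory, key=lambda r: not r.business_critical):
        label = "PRIORITIZE" if record.business_critical else "schedule"
        print(f"{label}: {record.name} (controls: {record.controls or 'none'})")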

Next Steps

Although the recent guidance does not impose any new requirements, it is likely that NYDFS will consider amending the cybersecurity regulations to address AI-related risks. Further, while the guidance is directed to financial institutions and other covered entities regulated by NYDFS, all companies may want to consider AI-related risks to their information systems both from threat actors using AI and the companies’ use of AI or AI-enhanced products.
