Why Executive Teams Should Prepare for the Cybersecurity and Fraud Risks of Deepfakes
Friday, September 20, 2024

The widespread availability of Artificial Intelligence (AI) tools has enabled the growing use of “deepfakes,” whereby the human voice and likeness can be replicated so seamlessly that impersonations can be nearly impossible to detect with the naked eye (or ear).

These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information.

In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks deepfakes pose to organizations and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Thus, deepfake perpetrators could be mere amateur mischief makers or savvy, experienced cybercriminals.

Recognizing these emerging threats, certain states already have enacted laws, or have introduced bills, aimed at regulating the use of deepfakes leading up to an election or in connection with the spread of nonconsensual intimate images. At the federal level, a bill regulating deepfake pornography has been advanced, and the Federal Communications Commission has proposed rules regulating deepfakes in political advertising. These pieces of legislation, however, are limited in scope and application and do not address the myriad other ways deepfakes can wreak havoc upon an organization.

The Joint CSI highlighted two recent examples of reported deepfake threats. In one instance, an unknown actor used synthetic visual and audio media to impersonate a company’s CEO, inviting a product line manager to an interactive call via WhatsApp and mimicking the CEO’s voice on the call. In another, a threat actor impersonated a company executive’s voice on WhatsApp and suggested a Teams meeting, during which the screen appeared to show the executive, in an attempt to trick an employee into sending a wire transfer. In a similar case reported by CNN after the Joint CSI was published, a finance worker at a multinational firm paid out $25 million to a fraudster who had used deepfake technology to impersonate the company’s CFO.

These examples demonstrate how deepfake technology opens new avenues for malicious actors to exploit organizations. It is reasonable to expect that, as these technologies become more readily accessible, organizations will increasingly be targeted with deepfakes to commit fraud, launch “denial of service” attacks that prevent access to their services or products, or damage their reputations and products. These attacks will likely target an organization’s executive and financial teams.

Executive teams should therefore prepare for deepfakes just as they would for any other cybersecurity or fraud attack, including through monitoring, workforce training and the implementation of incident response plans. In April 2024, the National Institute of Standards and Technology (NIST) published draft guidance, NIST AI 100-4, entitled “Reducing Risks Posed by Synthetic Content.” The draft guidance highlights that synthetic content—such as deepfakes—can “produce concentrated fraud and social engineering, and impose financial costs on victims of these schemes” and sets forth steps that can be taken to mitigate the risk of such an attack. Although still in draft form, the guidance provides a helpful summary of the current state of the technology available to protect and defend against deepfakes, including synthetic image, video and audio detection tools. Organizations should also consider implementing a strategy to protect the authenticity and integrity of their own content (e.g., images, text, audio, video), such as digital watermarking and fingerprinting/cryptographic hashing (using the file’s underlying metadata). Among other things, as NIST points out, these technologies may offer the significant benefit of enabling organizations to quickly debunk claims that synthetically generated content is authentic.
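To make the fingerprinting idea concrete, the following is a minimal sketch in Python of content fingerprinting via cryptographic hashing. It hashes the raw bytes of each published media file; the Joint CSI and the NIST draft guidance do not prescribe a specific scheme, and the file and directory names here are hypothetical.

import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Compute a SHA-256 digest of a media file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large video/audio files do not exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(media_dir: Path, manifest_path: Path) -> None:
    """Record a digest for every file in a directory of official media."""
    manifest = {p.name: fingerprint(p)
                for p in sorted(media_dir.iterdir()) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify(path: Path, manifest_path: Path) -> bool:
    """Return True if the file still matches its recorded fingerprint."""
    manifest = json.loads(manifest_path.read_text())
    return manifest.get(path.name) == fingerprint(path)

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    build_manifest(Path("official_media"), Path("manifest.json"))
    print(verify(Path("official_media/ceo_statement.mp4"), Path("manifest.json")))

A digest recorded at publication time allows an organization to later confirm that a questioned file matches the original, or to demonstrate that a circulating file has been altered, supporting the rapid debunking NIST describes.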

Similarly, the Joint CSI advises organizations to consider implementing a number of existing technologies (including commercially available tools) to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high-priority officers and their communications.

In addition to the technology solutions that will assist an organization in detecting and debunking deepfake attacks, it is important for an organization to prepare for the reputational impact a deepfake may have on its key managers and products, even if the attack is ultimately debunked. In our view, it is essential that an organization have a public relations and communications plan for responding to deepfakes. Such a plan would involve not only communications with impacted stakeholders, perhaps including existing and potential customers, shareholders and employees, but also the initial reporting of the incident to relevant law enforcement authorities. Indeed, we strongly recommend that this response be planned ahead of time and practiced by the organization’s response team.

A good communication plan can help limit confusion (both publicly and internally) in the attack’s aftermath, as questions and conjecture arise over whether the deepfake content is genuine, and can increase responsiveness across the organization by sharing action plans, updating stakeholders, and providing transparency throughout the response. The plan should identify who is authorized to speak about the incident, the range of potential communication channels, the schedule of communications, and procedures for notifying external organizations (e.g., partners, customers, consumers) that are directly involved in or impacted by the incident.

Finally, preparing for deepfake attacks and instituting mitigation measures may also be required under cybersecurity and privacy laws and regulations that obligate organizations to safeguard protected information.

Organizations should ensure, therefore, that legal and operational strategies and plans are in place and tested to respond to a variety of deepfake techniques.
