Cyber executive fraud scams have been rampant for years. These scams trick an employee into transferring large sums of money into the fraudster’s bank account. In the past, these often involved using a high-level executive’s hacked email account (or an email appearing to be from them) to ask the employee to quickly and secretly transfer money for a ‘special project’ that no one else should know about. They play on an employee’s desire to please the requesting executive and on the employee’s unique position to act quickly. The average value of these scams used to be around US$100,000, but they have been steadily growing more sophisticated and costly, often involving the hackers conducting a detailed inspection of the executive’s email to identify information that makes the request sound more believable (such as identifying current projects, confirming when the executive is likely to be unavailable for a call, and even crafting the email to sound more like the executive).
Recently, the risk grew far greater when it was reported that a deepfake videocall, showing AI-generated images of a multi-national company’s CFO and other co-workers, was used to convince a Hong Kong branch employee to make 15 transfers totaling HK$200 million (approximately US$25 million) into 5 local Hong Kong bank accounts. Reports indicate that the initial email request seemed suspicious to the employee, but she was then invited to a videochat, purportedly over a common personal communications app, where the deepfake of the CFO, and apparently of other employees, was used to instruct her to make the transfers. The deepfakes were apparently AI-generated videos created from past videochat recordings of the individuals. From the reports, the deepfakes behaved more like a recording: they could not interact and respond to questions, and may have had somewhat limited head movement. It appears that at least one of the hackers was a live participant orchestrating the call, so that, after allowing the Hong Kong employee to introduce herself, the deepfake images instructed her to make the transfers. It was only after the 15 transfers were made that the employee contacted the UK headquarters, only to be informed that no such instruction had been given.
Gen-AI also appears to be involved in other incidents in which deepfake images of individuals are used to contact their loved ones to request ‘urgent funds’, including claims that the individual has been kidnapped or is otherwise in dire need. Further, we are increasingly seeing deepfake images of celebrities and even public officials. Moreover, it appears hackers are using AI to sift large volumes of digital data, both to identify more convincing approaches for their scams and to find weaknesses in software coding or network security.
So, what can a company do to avoid such a loss? Here are some things that can help:
- Awareness and Training: It is essential to make employees, especially those with the ability to transfer money, aware that such sophisticated fraud exists. Just as with phishing emails, where we train employees to be suspicious when the email does not look right (misspellings, an incorrect email address, a message sent from outside the organization, etc.), financial staff should strongly suspect any urgent, secret request for a transfer, especially a large one. In the case above, the platform used for the video was likely not the usual internal company communications platform, but instead a personal communications platform. In addition, the employee could have asked questions on the call that would have fallen outside the deepfake’s recorded content.
- Separately Verify the Instructions: Without using links provided in the email, the employee should contact the executive and/or others on the call directly — if not in person at the office, then perhaps by calling the number listed in the corporate directory.
- Implement Protocols to Prevent: Companies should implement procedures prohibiting large financial transactions from occurring without approval from multiple executives. Companies may also issue signed encryption keys to appropriate employees, to be used before such transactions are approved. Just as it is now common to require two-factor authentication for employee computer access, why not also require it before a large financial transaction can occur? Also, ensure robust passwords are being used: many hacks currently occur through the use of lists of leaked passwords that individuals have re-used across multiple accounts.
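To make the dual-approval and signed-key idea concrete, here is a minimal Python sketch (the executive roles, keys, and transfer format are hypothetical, and real keys would be held in a hardware security module or secrets manager, not in code) in which a transfer is authorized only when at least two distinct executives have each signed the exact transfer details:

```python
import hmac
import hashlib

# Hypothetical executive signing keys; in practice these would be stored in
# an HSM or secrets manager, never in source code.
EXEC_KEYS = {
    "cfo": b"cfo-secret-key",
    "treasurer": b"treasurer-secret-key",
}

MIN_APPROVALS = 2  # dual control: two distinct executives must approve

def sign_transfer(exec_id: str, transfer_details: str) -> str:
    """An executive signs the exact transfer details with their own key."""
    return hmac.new(EXEC_KEYS[exec_id], transfer_details.encode(),
                    hashlib.sha256).hexdigest()

def transfer_authorized(transfer_details: str, approvals: dict) -> bool:
    """Allow the transfer only if MIN_APPROVALS distinct executives
    each produced a valid signature over these exact details."""
    valid = 0
    for exec_id, signature in approvals.items():
        key = EXEC_KEYS.get(exec_id)
        if key is None:
            continue  # unknown approver: ignore
        expected = hmac.new(key, transfer_details.encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            valid += 1
    return valid >= MIN_APPROVALS

details = "pay HK$200,000,000 to account 123-456"
one_approval = {"cfo": sign_transfer("cfo", details)}
two_approvals = {
    "cfo": sign_transfer("cfo", details),
    "treasurer": sign_transfer("treasurer", details),
}
print(transfer_authorized(details, one_approval))   # False: a lone request fails
print(transfer_authorized(details, two_approvals))  # True: dual control satisfied
```

Because each signature covers the exact transfer details, a fraudster who pressures one employee on a videocall still cannot complete the payment: altering the amount or destination invalidates the signatures, and a single approval never meets the threshold.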
- Broaden the Scope of Concern: This fraud involved financial assets. However, it would not be difficult for fraudsters to seek key business secrets and/or key customer data instead. For example, the ‘executive’ could as easily claim to be away and unable to access the network, asking the employee to send an attachment listing all key clients, the latest business plan, or similar. Businesses should take steps to identify such key data and secure it from being disclosed through such fraud, such as by limiting access, restricting export and encrypting the contents.
- Act Quickly: Especially when financial assets are involved, as soon as possible after determining you have been scammed, reach out to your bank and to the bank the funds were transferred to, asking them to halt the transaction and freeze the funds while you seek a judicial order to have them returned. On several occasions, our firm has worked with defrauded individuals in Hong Kong and elsewhere to recover funds left in the bank accounts to which the money was initially transferred. In one case, although we were only brought into the matter a week after the transfer of US$1 million, we were able to recover nearly half of the funds. This is most likely because the onward transfer of large sums of money, especially abroad, often raises red flags in the banking system, forcing the hackers to move the money more slowly.
- Don’t Quit Too Soon: When improper access to email and/or the network has been identified, make sure you conduct a detailed review, likely utilizing a cyber investigations expert, to determine where the hacker went in the system and whether they built in backdoors, stole other information, or otherwise affected the system. Simply changing the affected email password is very likely insufficient to protect your system. Further, in many countries, data privacy and other laws will require a detailed assessment of what happened in order to determine whether any data privacy or other notifications are necessary.
In the end, although the increasing use of AI to enhance cyberfraud is clearly troubling, the steps above can go a long way toward preventing such frauds from impacting your company. Should you have any questions, please feel free to reach out to your Squire Patton Boggs contact or one of our global AI contacts listed here.