A Deepfake of a Baltimore High School Principal Raises Significant Employment Issues
Thursday, May 9, 2024

As reported by CNN, a high school principal in Pikesville, Maryland, found his life and career turned upside down when, in January, a recording suggesting he had made racially insensitive and antisemitic remarks went viral. The school faced a flood of calls from concerned members of the community, security was tightened, and the principal was placed on administrative leave. It was a challenging situation for any human resources executive, and one made far more difficult because of AI.

An investigation ensued, and all the while the principal maintained that he did not make the statements in the recording – it was not his voice, he claimed. Even so, the “recording” was convincing enough to put the school district on edge.

It was not until months later, in late April, that a Baltimore County Police Department investigation concluded that the recording was a fake, a “deepfake,” generated by artificial intelligence (AI) technology. As reported by CNN, Baltimore County Executive Johnny Olszewski observed:

“Today, we are relieved to have some closure on the origins of this audio…However, it is clear that we are also entering a new, deeply concerning frontier.”

Deepfake AI is a type of artificial intelligence used to create convincing images, audio and video hoaxes. Although deepfakes might have some utility, such as for entertainment purposes, they blur the lines between reality and fiction, making it increasingly difficult to discern truth from falsehood. As in the case of the Baltimore school principal, misuse raises significant concerns, particularly in the workplace. It turns out that the deepfake recording may have arisen from an employment dispute that the principal was having with the high school’s athletic director.

The US Department of Homeland Security and other agencies have recognized the threat deepfakes present. At the same time, the technology is becoming easier to use and its output harder to identify. In this case, it took the Baltimore County Police Department three months to investigate and make a determination about the recording.

The World Economic Forum’s “4 ways to future-proof against deepfakes in 2024 and beyond” offers a sobering suggestion for dealing with deepfakes – zero-trust.

This mindset aligns with mindfulness practices that encourage individuals to pause before reacting to emotionally triggering content and engage with digital content intentionally and thoughtfully.

This may not be the mindset most HR professionals prefer to have at or near the top of their lists. But in this context, when presented with electronic material or even a photograph from an unknown source, however real it might appear, intentionality and thoughtfulness should prevail.

Consider being presented, as here, with a video, a recording, or some other image, photograph, or transcribed conversation containing insensitive remarks purportedly made by an employee about another’s race, religion, gender, or other protected characteristic. An organization might not have a police department willing and able to assist, yet it might face just as much pressure in the workplace from people reacting to the content, believing it is authentic when it may be nothing more than a fake. Having an internal plan outlining a process for investigation, with resources (internal or external) lined up to evaluate the material, would help ensure that intentionality and thoughtfulness prevail. Such a plan might also guide the various employment decisions to be made along the way, including internal and external communications, should the investigation carry on.

This is only the beginning.
