DOJ Confronts the Opportunities and Obstacles of AI
Tuesday, February 27, 2024

In remarks given at the University of Oxford on February 14, 2024, Deputy Attorney General (“DAG”) Lisa O. Monaco announced the Department of Justice’s (“DOJ”) latest approach to artificial intelligence: naming the DOJ’s first Chief Artificial Intelligence (“AI”) Officer. 

The inaugural Chief AI Officer will advise Attorney General Merrick Garland and the rest of the DOJ on fast-evolving scientific developments in AI by leading an Emerging Technology Board composed of law enforcement and civil rights officials, as well as other experts in AI-related emerging technologies. 

In her remarks, Monaco acknowledged the current “inflection point with AI” and described AI as a “double-edged sword” that “may be the sharpest blade yet.” Monaco noted, on one hand, the immense threat that AI poses to our security. She explained how AI can facilitate and perpetuate cybercrime, amplify existing biases and discriminatory practices, and spread disinformation and repression. 

On the other hand, Monaco described how AI presents new opportunities to “identify, disrupt, and deter” criminals and terrorists. The DOJ has already implemented AI technology to accomplish various objectives, including stemming the opioid crisis, responding to the more than one million tips submitted to the FBI every year, and synthesizing evidence in cases such as the investigation into the January 6th attack on the Capitol. Now, the DOJ will also pursue harsher penalties in cases where the threat of misconduct was greater because of the misuse of AI. Monaco explained this latest objective by comparing these harsher penalties to those imposed for criminal offenses committed with firearms. Although Monaco’s preview provided little guidance as to how exactly the government will pursue these penalties, she did suggest amending the sentencing guidelines if those currently in place do not sufficiently address the harms caused by crimes involving AI misuse. 

The DOJ’s step towards ethically embracing AI comes after President Joe Biden signed an executive order (“E.O.”) last October on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The E.O. establishes a government-wide effort to guide responsible AI development and deployment through federal agency leadership, regulation of industry, and engagement with international partners. According to the White House Fact Sheet, the executive order requires that “companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.” The E.O. is intended to ensure that AI systems are safe, secure, and trustworthy before companies make them public.

Final Thoughts

AI is here to stay, and so is government oversight of AI. Although the DOJ has “just scratched the surface” of implementing AI technology, it is clear that the DOJ intends to stay involved in the future of AI. As scientists rush to develop AI and legislators rush to regulate it, organizations should regularly evaluate their use of and reliance on AI.

Bianca Bernardi is a Law Clerk acting under the supervision and guidance of Members of the New York office. 
