On July 23, 2025, President Donald Trump issued Executive Order 14319, “Preventing Woke AI in the Federal Government,” whose stated purpose is to prevent the federal government from procuring A.I. “models that sacrifice truthfulness and accuracy to ideological agendas.” The order specifically targets models that incorporate principles of diversity, equity, and inclusion (DEI), asserting that such frameworks may compromise factual accuracy and reliability.
Under the executive order, federal agencies can procure large language models (LLMs) only if they: (1) are “truthful in responding to user prompts seeking factual information or analysis,” “prioritiz[ing] historical accuracy, scientific inquiry, and objectivity, and acknowledg[ing] uncertainty where reliable information is incomplete or contradictory,” and (2) are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.”
Although the executive order applies only to federal procurement, A.I. companies may be incentivized to apply its requirements to their products more broadly to avoid missing out on sizable federal contracts. These changes could also have unintended effects: as recent high-profile failed updates to some of the most widely used chatbots and image generators demonstrate, changes “under the hood” of these systems can produce unexpected results as they are rolled out.
To the extent this executive order or similar efforts affect the operation of these platforms, they could impact organizations whose employees use A.I. for people management, with or without approval. As we discussed here, managers in many organizations may be turning to general-purpose A.I. tools like LLMs to assist with consequential personnel decisions, including promotions, compensation adjustments, layoffs, and terminations.
In light of the uncertainty surrounding how the administration’s A.I. policy might affect LLMs and other A.I. tools, it remains as important as ever to limit the use of A.I. for people management to preapproved uses. Employers are also well-advised to vet potential use cases with counsel, establish clear policies, and provide training to mitigate the risk that these tools will cause unintended disparate impacts on protected classes.