NIST Publishes Artificial Intelligence Risk Management Framework
Wednesday, February 1, 2023

On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.

NIST broadly defines an “AI system” as an “engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This broad definition covers many of the commonly used AI-based hiring and recruitment products, such as resume screening software and gamified assessment or selection tests.

The AI RMF is divided into two parts. Part One provides foundational information about AI systems, including seven characteristics of trustworthy AI systems:

  • Valid and reliable – AI systems can be assessed by ongoing testing or monitoring to confirm that the system is performing as intended.

  • Safe – AI systems should not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.

  • Secure and resilient – AI systems and their ecosystems are resilient when they are able to withstand unexpected adverse events or changes in their environment.

  • Accountable and transparent – Information about an AI system and its outputs increases confidence in the system and enables organizational practices and governing structures for harm reduction.  

  • Explainable and interpretable – The representation of the mechanism underlying AI systems’ operation (explainability), and the meaning of an AI system’s output (interpretability), can assist those operating and overseeing AI systems.

  • Privacy-enhanced – Anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment.

  • Fair with harmful bias managed – NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive bias.

Part Two details the “core” of the AI RMF, which is structured around four functions—each containing categories and subcategories—designed to “enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems.” The four core functions are summarized as follows:

  • Govern – Cultivating and implementing a culture of risk management and outlining processes and organizational schemes to identify and manage risk, as well as understanding, managing, and documenting legal and regulatory requirements involving the AI system.

  • Map – Understanding and documenting the intended purposes and impacts of the AI system, as well as the specific tasks and methods used to implement the AI system.

  • Measure – Evaluating the AI system and demonstrating it to be valid, reliable, and safe. 

  • Manage – Determining whether the AI system achieves its intended purpose, determining whether it should proceed, and ensuring that mechanisms are in place to sustain the value of the AI system.

Part Two also suggests preparing “AI RMF Profiles,” which describe how the core functions are implemented in specific settings:

  • Use case profiles – Applying core functions to a specific use case, such as an “AI RMF hiring profile” or an “AI RMF fair housing profile.”

  • Temporal profiles – Comparing the current state of an AI risk management activity to a desired target state, revealing gaps that need to be addressed to meet risk management objectives.

  • Cross-sectoral profiles – Covering risks of models or applications that can be used across different use cases or sectors.

Although the AI RMF does not include model templates, organizations should consider preparing AI RMF Profiles to streamline the process of operationalizing and documenting compliance with AI RMF guidance.
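Because NIST does not prescribe a template, organizations that want to operationalize profiles have latitude in how they record them. The sketch below is a minimal, purely illustrative example of documenting a use-case profile as structured data so that current and target states can be compared programmatically; every class, field name, and example value is hypothetical and is not drawn from NIST materials.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI RMF use-case profile record.
# NIST publishes no schema; all names and values here are illustrative only.

@dataclass
class FunctionEntry:
    function: str        # one of the four core functions: Govern, Map, Measure, Manage
    current_state: str   # what the organization does today ("current" profile)
    target_state: str    # what it intends to do ("target" profile)

    def has_gap(self) -> bool:
        return self.current_state.strip() != self.target_state.strip()

@dataclass
class UseCaseProfile:
    name: str                                # e.g., "AI RMF hiring profile"
    system_description: str                  # the AI system and its intended purpose
    entries: list[FunctionEntry] = field(default_factory=list)

    def gap_report(self) -> list[str]:
        """List the core functions where current practice lags the target state."""
        return [e.function for e in self.entries if e.has_gap()]

# Example: documenting a resume-screening tool against the four core functions.
profile = UseCaseProfile(
    name="AI RMF hiring profile (illustrative)",
    system_description="Vendor-supplied resume screening software",
    entries=[
        FunctionEntry("Govern", "No written AI risk policy",
                      "Board-approved AI risk policy with assigned owners"),
        FunctionEntry("Map", "Intended purpose documented",
                      "Intended purpose documented"),
        FunctionEntry("Measure", "No bias testing",
                      "Annual independent bias audit"),
        FunctionEntry("Manage", "Ad hoc vendor reviews",
                      "Contractual monitoring and decommissioning criteria"),
    ],
)

print("Functions with gaps:", profile.gap_report())
# -> Functions with gaps: ['Govern', 'Measure', 'Manage']
```

Recording current and target states side by side in this way mirrors the temporal-profile concept described above, and the resulting gap list can feed directly into the documentation an organization keeps to show how it is applying the AI RMF.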
