Artificial Intelligence: NIST Risk Management Framework and Guidance Addressing Bias in AI
Friday, March 25, 2022

As more companies develop and deploy artificial intelligence (AI), it is important to consider risk management and best practices for addressing issues such as bias in AI. The National Institute of Standards and Technology (NIST) recently released a draft of its AI Risk Management Framework (Framework) and guidance for addressing bias in AI (Guidance). The voluntary Framework addresses risks in the design, development, use, and evaluation of AI systems. The Guidance offers considerations for trustworthy and responsible development and use of AI, notably including suggested governance processes to address bias.

Who Should Pay Attention?

The Framework and Guidance will be useful for those who design, develop, use, or evaluate AI technologies. The language is drafted to be understandable by a broad audience, including senior executives and those who are not AI professionals. At the same time, NIST includes technical depth, so the Framework and Guidance will be useful to practitioners. The Framework is designed to be scalable to organizations of all sizes, public or private, across various sectors, and to national and international organizations.

What is NIST and What is This Publication?

NIST is part of the US Department of Commerce and was founded in 1901. Congress directed NIST to develop an AI Risk Management Framework in 2020. Elham Tabassi, chief of staff of the NIST Information Technology Laboratory and coordinator of the agency’s AI work, says, “We have developed this draft with extensive input from the private and public sectors, knowing full well how quickly AI technologies are being developed and put to use and how much there is to be learned about related benefits and risks.” The Framework considers approaches for cultivating characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security, and mitigation of unintended and/or harmful uses.

As an overview, the Framework covers the following points:

  1. Technical characteristics, socio-technical characteristics, and guiding principles;

  2. Governance, including mapping, measuring, and managing risks; and 

  3. A practical guide.

The Guidance addresses three high-level points:

  1. Describes the stakes and challenge of bias in artificial intelligence and provides examples of how and why it can chip away at public trust;

  2. Identifies three categories of bias in AI — systemic, statistical, and human — and describes how and where they contribute to harms; and

  3. Describes three broad challenges for mitigating bias — datasets, testing and evaluation, and human factors — and introduces preliminary guidance for addressing them.

Why is AI Governance Important?

Governance processes affect nearly every aspect of managing AI. Governance includes administrative procedures and standard operating policies, but it is also part of the organizational processes and cultural competencies that directly affect the individuals involved in training, deploying, and monitoring AI systems. Monitoring systems and recourse channels help end users flag incorrect or potentially harmful results and provide accountability, as in the sketch below. It is also critical to ensure that written policies and procedures address key roles, responsibilities, and processes at different stages of the AI lifecycle. Clear documentation helps an organization implement its policies and procedures systematically and standardizes how its bias management is carried out.
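To make this concrete, here is a minimal, purely illustrative Python sketch of the kind of monitoring record and recourse channel described above. The names (DecisionRecord, flag_decision) and fields are hypothetical assumptions for illustration, not drawn from NIST's materials:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: DecisionRecord and flag_decision are illustrative
# names, not part of any NIST artifact.

@dataclass
class DecisionRecord:
    """One logged AI decision, retained for accountability and review."""
    decision_id: str
    model_version: str    # which model/policy produced the output
    inputs_summary: str   # redacted description of the inputs used
    output: str           # the decision or score that was returned
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    flags: list = field(default_factory=list)

def flag_decision(record: DecisionRecord, reporter: str, reason: str) -> None:
    """Recourse channel: let an end user contest a result for human review."""
    record.flags.append({"reporter": reporter, "reason": reason})

# Usage: log a decision, then record a user's challenge to it.
rec = DecisionRecord("loan-00042", "credit-model-v3",
                     "income band, zip redacted", "denied")
flag_decision(rec, reporter="applicant", reason="income figure is outdated")
print(rec.flags)
```

The point of such a record is less the code than the practice it supports: every flagged decision leaves a documented trail that a designated role can review, which is the accountability loop the Framework's governance discussion contemplates.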

AI and Bias

Harmful impacts from AI are felt not just at the individual or organizational level; they can quickly ripple out to a much broader scope. The scale and speed of damage from AI make it a unique risk. NIST highlights that machine learning processes and the data used for training AI software are prone to bias, both human and systemic, and that bias influences the development and deployment of AI. Systemic biases can come from institutions operating in ways that disadvantage certain groups, such as discriminating against individuals based on their race. Human biases can come from people drawing biased inferences or conclusions from data. When human, systemic, and computational biases combine, they can compound one another and result in crippling consequences. To address these issues, the NIST authors make a case for a “socio-technical” approach to bias in AI. This approach combines sociology and technology, recognizing that AI operates in a larger social context, so efforts to address bias must go beyond technical measures.
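Of the categories the Guidance identifies, statistical bias is the one most amenable to simple technical checks. As one narrow, illustrative example (this is not NIST's method, and the socio-technical approach deliberately goes further), the Python sketch below compares selection rates across groups, a common first-pass screen for disparate outcomes:

```python
from collections import defaultdict

# Illustrative only: a selection-rate comparison across groups, one
# narrow technical check among the many a socio-technical review needs.

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

# Toy decisions tagged with a hypothetical demographic group.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
disparity = max(rates.values()) - min(rates.values())
print(rates)                          # {'A': 0.667..., 'B': 0.333...}
print(f"disparity: {disparity:.2f}")  # 0.33; large gaps warrant review
```

A low disparity on a check like this does not mean a system is unbiased: systemic and human biases can hide in how the underlying data were collected and labeled, which is exactly why NIST argues technical testing alone is insufficient.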

AI and State Privacy Laws 

Upcoming state privacy laws also address AI activities. Starting in 2023, AI, profiling, and other forms of automated decision-making will become regulated under comprehensive privacy laws in California, Virginia, and Colorado, including rights for consumers to opt out of certain processing of their personal information by AI and similar processes. Organizations should also prepare to respond to access requests seeking information about the logic involved in automated decision-making processes.
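For illustration only, here is a minimal Python sketch of how an opt-out preference might gate automated profiling. The names (opt_out_registry, run_profiling) and the routing behavior are hypothetical assumptions, not a mechanism any of these statutes prescribes:

```python
# Hypothetical sketch: opt_out_registry and run_profiling are
# illustrative placeholders, not a statutory requirement.

opt_out_registry = {"consumer-123"}  # consumers who opted out of profiling

def run_profiling(consumer_id: str, features: dict) -> str:
    if consumer_id in opt_out_registry:
        # Honor the opt-out: route to a non-automated (human) process.
        return "routed_to_manual_review"
    score = sum(features.values())   # stand-in for a real model
    return "approved" if score > 1.0 else "denied"

print(run_profiling("consumer-123", {"x": 0.9, "y": 0.4}))  # manual review
print(run_profiling("consumer-456", {"x": 0.9, "y": 0.4}))  # approved
```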

What’s Next?

NIST will accept public comments on the Framework through April 29. In addition, NIST is holding a public workshop from March 29-31. NIST’s post seeking comments on its Framework, along with more information about the Guidance and the Guidance itself, is available on NIST’s website. NIST is also planning a series of public workshops over the next few months aimed at drafting a technical report on addressing AI bias and connecting the Guidance with the Framework; more details on those workshops are coming soon. A second draft of the Framework, incorporating the comments received by April 29, will be released this summer or fall.
