Recently, the National Institute of Standards and Technology (NIST) released its second public draft of Digital Identity Guidelines (Draft Guidelines). The Draft Guidelines focus on online identity verification, but several provisions have implications for government contractors’ cybersecurity programs, as well as contractors’ use of artificial intelligence (AI) and machine learning (ML).
Government Contractor Cybersecurity Requirements
Many government contractors have become familiar with personal identity verification standards through NIST’s 2022 FIPS PUB 201-3, “Standard for Personal Identity Verification (PIV) of Federal Employees and Contractors,” which established standards for contractors’ PIV systems used to access federally controlled facilities and information systems. Among other things, FIPS PUB 201-3 incorporated biometrics, cryptography, and public key infrastructure (PKI) to authenticate users, and it outlined the protection of identity data, infrastructure, and credentials.
Whereas FIPS PUB 201-3 set the foundational standard for PIV credentialing of government contractors, the Draft Guidelines expand upon these requirements by introducing provisions regarding identity proofing, authentication, and management. These additional requirements include:
Expanded Identity Proofing Models. The Draft Guidelines offer a new taxonomy and structure for the requirements at each assurance level, organized by the means of proofing: remote unattended, remote attended (e.g., videoconferencing), onsite unattended (e.g., kiosks), or onsite attended proofing.
Continuous Evaluation and Monitoring. NIST’s December 2022 Initial Public Draft (IPD) of the guidelines required “continuous improvement” of contractors’ security systems. Building upon this requirement, the Draft Guidelines introduce requirements for continuous evaluation metrics for the identity management systems contractors use. The Draft Guidelines direct organizations to implement a continuous evaluation and improvement program that leverages input from end users interacting with the identity management system and performance metrics for the online service. Under the Draft Guidelines, organizations must document this program, including the metrics collected, the data sources, and the processes in place for taking timely action based on the results of the continuous evaluation.
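The documentation requirement above can be illustrated with a minimal sketch. The metric names, data sources, and thresholds below are hypothetical examples, not values drawn from the Draft Guidelines; the point is that each metric, its source, and its action trigger are recorded in one place:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationMetric:
    """One documented metric in a continuous evaluation program."""
    name: str
    data_source: str   # e.g., end-user surveys or service logs
    value: float
    threshold: float   # documented level at which timely action is required

@dataclass
class ContinuousEvaluationProgram:
    metrics: list = field(default_factory=list)

    def add_metric(self, metric: EvaluationMetric) -> None:
        self.metrics.append(metric)

    def metrics_requiring_action(self) -> list:
        """Return metrics that have fallen below their documented thresholds."""
        return [m for m in self.metrics if m.value < m.threshold]

# Hypothetical metrics: one below threshold (flagged), one above (not flagged).
program = ContinuousEvaluationProgram()
program.add_metric(EvaluationMetric("proofing_pass_rate", "service logs", 0.88, 0.90))
program.add_metric(EvaluationMetric("user_satisfaction", "end-user surveys", 4.2, 4.0))
flagged = program.metrics_requiring_action()
```

In practice, the "timely action" process the Draft Guidelines require would be triggered by whatever `metrics_requiring_action` surfaces, with the review cadence itself documented alongside the metrics.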
Fraud Detection and Mitigation Requirements. The Draft Guidelines add programmatic fraud requirements for credential service providers (CSPs) and government agencies. Organizations must monitor the evolving threat landscape to stay informed of the latest threats and fraud tactics, and must regularly assess the effectiveness of their current security measures and fraud detection capabilities against those threats and tactics.
Syncable Authenticators and Digital Wallets. In April 2024, NIST published interim guidance for syncable authenticators. The Draft Guidelines integrate this guidance and thus allow the use of syncable authenticators and digital wallets (previously described as attribute bundles) as valid mechanisms to store and manage digital credentials. Relatedly, the Draft Guidelines support user-controlled wallets and attribute bundles, allowing contractors to manage their identity attributes (e.g., digital certificates or credentials) and present them securely to different federal systems.
Risk-Based Authentication. The Draft Guidelines outline risk-based authentication mechanisms, whereby the required authentication level can vary based on the risk of the transaction or system being accessed. This allows government agencies to assign appropriate authentication methods for contractors based on the sensitivity of the information or systems they are accessing.
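The risk-based model can be sketched as a simple policy check. The mapping below from resource sensitivity to a required Authenticator Assurance Level (AAL, as defined in NIST SP 800-63B) is a hypothetical illustration; agencies would define their own categories and levels:

```python
# Hypothetical mapping from resource sensitivity to the minimum
# Authenticator Assurance Level (AAL) required to access it.
REQUIRED_AAL = {
    "public": 1,       # e.g., publicly released documents
    "sensitive": 2,    # e.g., controlled unclassified information
    "high_impact": 3,  # e.g., privileged administrative systems
}

def access_permitted(session_aal: int, resource_sensitivity: str) -> bool:
    """Allow access only if the session's AAL meets the resource's requirement."""
    return session_aal >= REQUIRED_AAL[resource_sensitivity]

# A contractor authenticated at AAL1 (single factor) can reach public
# resources but not resources that require multi-factor authentication.
assert access_permitted(1, "public")
assert not access_permitted(1, "sensitive")
```

The design choice this reflects is that authentication strength is evaluated per resource at access time, rather than fixed once per contractor.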
Privacy, Equity, and Usability Considerations. The Draft Guidelines emphasize privacy, equity, and usability as core requirements for digital identity systems. Under the Guidelines, “[O]nline services must be designed with equity, usability, and flexibility to ensure broad and enduring participation and access to digital devices and services.” This includes ensuring that contractors with disabilities or other accessibility needs are provided with workable identity solutions. The Draft Guidelines’ emphasis on equity complements NIST’s previous statements on bias in AI.
Authentication via Biometrics and Multi-Factor Authentication (MFA). The Draft Guidelines emphasize the use of MFA, including biometrics, as an authentication mechanism for contractors. The Draft Guidelines complement FIPS PUB 201-3, which already requires biometrics for physical and logical access, by updating the associated authentication guidance.
AI and ML in Identity Systems
The Draft Guidelines recognize that identity solutions have used and will continue to use AI and ML for multiple purposes, such as improving the performance of biometric matching systems, authenticating identity documents, detecting fraud, and even assisting users (e.g., chatbots). However, these uses introduce distinct risks and potential issues. The Draft Guidelines focus specifically on the risks of disparate outcomes, biased outputs, and the exacerbation of existing inequities and access issues. To ameliorate these risks, the Draft Guidelines set forth the following requirements, which apply to all uses of AI and ML in identity systems:
- All uses of AI and ML must be documented and communicated to organizations that rely on these systems. The use of integrated technologies that leverage AI and ML by CSPs, identity providers (IdPs), or verifiers must be disclosed to all relying parties that make access decisions based on information from these systems.
- All organizations using AI and ML must provide any entities that use their technology with information on the methods and techniques used to train their models, a description of the data sets used in training, information on the frequency of model updates, and the results of all testing completed on their algorithms.
- All organizations using AI and ML systems (or which rely on services that use these systems) must implement the NIST AI Risk Management Framework to evaluate the risks that the use of AI and ML may introduce. Further, all organizations that use AI and ML must consult SP1270, “Towards a Standard for Managing Bias in Artificial Intelligence.”
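The disclosure requirements above amount to a structured record that a CSP, IdP, or verifier shares with relying parties. The sketch below is one hypothetical way to capture that record; the field names and example values are illustrative, not taken from the Draft Guidelines:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelDisclosure:
    """Hypothetical record of the AI/ML information the Draft Guidelines
    require providers to disclose to relying parties."""
    component: str                   # the AI/ML component being disclosed
    training_methods: str            # methods and techniques used to train the model
    training_data_description: str   # description of the data sets used in training
    update_frequency: str            # how often the model is updated
    test_results_summary: str        # results of testing completed on the algorithm

# Example disclosure for a hypothetical biometric matching component.
disclosure = ModelDisclosure(
    component="biometric face matcher",
    training_methods="supervised deep metric learning",
    training_data_description="vendor-collected face images with documented demographics",
    update_frequency="quarterly",
    test_results_summary="accuracy and demographic differential testing results",
)
```

Making the record immutable (`frozen=True`) reflects the idea that a disclosure, once made to relying parties, should be versioned rather than silently edited.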
NIST is seeking public comments on the Draft Guidelines through October 7, 2024. Because the upcoming Cybersecurity Maturity Model Certification (CMMC) 2.0 program and current DFARS cybersecurity clauses either specifically incorporate NIST standards or look to NIST for guidance, government contractors – especially defense contractors – should start formulating robust cybersecurity and AI governance plans to meet these guidelines. Weighing in on the Draft Guidelines, and preparing for implementation should they take effect, will be essential for anticipating the final guidelines and ensuring compliance with them. Between President Biden’s Executive Order on AI, DFARS, CMMC 2.0, and the various NIST standards, government contractors face a legally complex cybersecurity and AI landscape, and navigating it will require specialized legal and technical expertise. By assessing AI risks, identifying potential vulnerabilities, and formulating a governance plan to manage all of the above, contractors and bidders can develop a winning contract bid and performance strategy.