Natural Intelligence: NIST Releases Draft Guidelines for Government Contractor Artificial Intelligence Disclosures
Client Alert | 3 min read | 08.28.24
On August 21, 2024, the National Institute of Standards and Technology (NIST) released the Second Public Draft of its Digital Identity Guidelines (hereinafter, “Draft Guidelines”) for final review. The Draft Guidelines introduce potentially notable requirements for government contractors using artificial intelligence (AI) systems. Among the most significant draft requirements are those related to the disclosure and transparency of AI and machine learning (ML). With these requirements, NIST underscores its commitment to fostering secure, trustworthy, and transparent AI, while also addressing broader implications of bias and accountability. For government contractors, the Draft Guidelines are not just a set of recommendations but a blueprint for future AI standards and regulations.
In identifying concerns for digital identity risk management, NIST focuses on three main areas: identity proofing, authentication, and federation. Failures in each of these “can result in the wrong subject successfully accessing an online service, system, or data.” See Draft Guidelines, Section 3. The Draft Guidelines note that AI and ML are used in identity systems for multiple purposes (from biometrics to chatbots) and that potential applications are extensive, but that AI and ML also introduce distinct risks, such as disparate outcomes, biased outputs, and the exacerbation of existing inequities. See Draft Guidelines, Section 3.8.
As a result, Section 3.8 of the Draft Guidelines has been updated to require that, in any identity system:
- All uses of AI and ML must be documented and communicated to organizations relying on these systems; credential service providers (CSPs), identity providers (IdPs), and verifiers using AI and ML must disclose this use to all responsible persons making access decisions based on these systems.
- Organizations using AI and ML must provide information to entities using their technology, including methods and techniques for training models, descriptions of training data sets, frequency of model updates, and testing results.
- Organizations using AI and ML systems must implement the NIST AI Risk Management Framework to evaluate risks and must consult SP1270 for managing bias in AI.
In other words, NIST’s updated Draft Guidelines call for detailed disclosures that explain how AI systems operate, the data they rely on, and the algorithms that drive their decisions. Clear disclosures will help government clients understand how AI systems work, which can improve decision-making in areas where AI decisions have significant consequences, such as healthcare, law enforcement, and public policy. At the same time, accountability and ethical considerations help foster trust in AI solutions.
As AI continues to revolutionize various industries, its integration into government projects brings opportunities and challenges. NIST’s role in developing and promoting standards that ensure security, privacy, transparency, and reliability with new technology will be crucial in shaping how AI systems are designed, implemented, and disclosed. Government contractors who embrace the Draft Guidelines may be better positioned to lead in this evolving landscape, shaping new requirements and delivering AI solutions aligned to the highest standards.
NIST is seeking public comments on the Draft Guidelines through October 7, 2024. Stakeholders should engage with NIST through public comments now and begin planning for adherence to these guidelines. Weighing in on the Draft Guidelines, and preparing for implementation should they go into effect, will be essential for anticipating the final guidelines and ensuring compliance.
To prepare, government contractors should begin reevaluating contract provisions and developing AI governance programs in line with the Draft Guidelines. Contractors need to be positioned to comply seamlessly with requirements already placed on government agencies through President Biden’s Executive Order on AI and OMB guidance, which will necessarily be passed down to them.
In navigating this legal landscape, Crowell & Moring LLP can help clients understand the unique legal implications of the NIST Draft Guidelines, assess legal risks associated with AI disclosures, and identify areas where a client may be vulnerable to potential litigation. Crowell can also advise on where the Draft Guidelines intersect with existing statutes and regulations, such as the Federal Acquisition Regulation (FAR) and the False Claims Act (FCA), conduct trainings, and help develop new strategies to mitigate risk from a comprehensive legal perspective.
As NIST begins to collect public comments on the Draft Guidelines, Crowell will continue to monitor legal and policy developments regulating the use of artificial intelligence. We are prepared to help clients submit comments and engage with regulators, as well as consider their potential next steps.