JAIC Has More Work To Do in Developing Artificial Intelligence Standards, while DoD Components and Contractors Must Implement Security Controls Around Artificial Intelligence, Says DoD OIG
Client Alert | 1 min read | 07.09.20
On July 1, 2020, the Department of Defense (DoD) Office of Inspector General (OIG) published its audit report assessing the DoD Joint Artificial Intelligence Center’s (JAIC) progress in developing an Artificial Intelligence (AI) governance framework and standards, as well as DoD components’ implementation of security controls to protect AI data and technologies from internal and external cyber threats.

The DoD OIG concluded that the JAIC must do more to ensure consistency with DoD’s adoption of ethical principles for AI (as we previously reported), including the following: (1) include a standard definition of AI and review and, if needed, update that definition at least annually; (2) develop a security classification guide to ensure the consistent protection of AI data; (3) develop a process to accurately account for AI projects; (4) develop capabilities for sharing data; (5) include standards for legal and privacy considerations; and (6) develop a formal strategy for collaboration between the Military Services and DoD Components on similar AI projects.

In addition, the DoD OIG found that four DoD components (Army, Marine Corps, Navy, and Air Force) and two contractors failed to implement security controls to protect data used in AI projects and technologies from threats. The DoD OIG therefore directed these DoD components and contractors to: (1) configure their systems to enforce the use of strong passwords, generate system activity reports, and lock after periods of inactivity; (2) review networks and systems for malicious or unusual activity; (3) scan networks for viruses and vulnerabilities; and (4) implement physical security controls to protect AI data.

Following this report, contractors should expect to see a biannual AI portfolio review of all DoD components’ AI projects and guidance on legal and privacy standard operating procedures.