DOJ Announces Stiffer Penalties for Crimes Committed with the Use of AI
Client Alert | 3 min read | 02.15.24
On February 14, 2024, Deputy Attorney General (“DAG”) Lisa Monaco, the second in command at the U.S. Department of Justice (“DOJ”), announced to an audience at Oxford University a key development in how the DOJ and its prosecutors plan to address the dangers posed by AI technology. DAG Monaco likened the use of AI in the commission of a crime to the use of a weapon, calling it a “sword” and characterizing its misuse as “dangerous.” She stated, “Like a firearm, AI can also enhance the danger of a crime.”
Characterizing AI as a “sharp[] blade” that criminals can wield to commit crimes ranging from election fraud to cyber warfare, DAG Monaco announced that DOJ prosecutors may now seek sentencing enhancements for crimes committed using AI technology. Federal prosecutors have long been required to consult the United States Sentencing Commission Guidelines Manual (“USSG”) as the first step in recommending a sentence for a convicted defendant. The USSG contains certain enhancements that can be added to the base offense level for each federal crime. DAG Monaco pointed to the enhanced penalties available when a gun is used in the commission of a crime in explaining that prosecutors may now seek similar enhancements when AI is used to commit a crime.
While the USSG does not currently contain any enhancement specifically addressing the use of AI, prosecutors could seek an enhancement under USSG § 2B1.1(b)(10)(C) for use of “sophisticated means.” In addition, prosecutors could recommend that a court impose a more severe sentence on an AI-using defendant who also:
- “may have misused special training or education to facilitate criminal activity,” USSG § 5H1.2; or
- may have used a “special skill” that is not possessed by members of the general public, USSG § 3B1.3.
DAG Monaco also stated that if existing advisory sentencing enhancements are deemed inadequate to address the harms caused by AI, the DOJ will “seek reforms to those enhancements to close that gap.”
In addition to the stiffer penalties prosecutors will seek for crimes committed through the misuse of AI, DAG Monaco referenced other DOJ initiatives undertaken in accordance with President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, announced in October 2023.
Domestically, the DOJ is partnering with other federal agencies to create guidance and controls regarding the use of AI in the U.S., to ensure that the use of AI does not threaten the safety or legal rights of U.S. residents. On the international front, DAG Monaco highlighted the Hiroshima AI Process, an international initiative launched in May 2023 to discuss the opportunities and risks of AI technology. The Hiroshima AI Process issued its “Comprehensive Policy Framework,” the “first international framework with guiding principles and a code of conduct” designed to promote “safe, secure and trustworthy advanced AI systems.” Additionally, DAG Monaco noted that AI technology will be a top priority of the Disruptive Technology Strike Force, which was launched in 2023 to use export control laws to ensure that international adversaries are not able to misappropriate cutting-edge American technology.
Finally, DAG Monaco announced that in January 2024, the DOJ had appointed its first “Chief AI Officer,” who will lead an initiative referred to as “Justice AI” to solicit opinions from within the DOJ, foreign counterparts, and private experts on the responsible and ethical uses of AI and how to guard against the risks associated with the technology. The Justice AI initiative will culminate with a report to President Biden at the end of 2024.