
Antitrust: Artificial Intelligence Moves Into the Realm of Antitrust Litigation

Publication | 01.10.24

Throughout history, businesses have used new technologies to make themselves more efficient and competitive. Some of these technologies—operating software, internet search, social media—have led to antitrust litigation in varying contexts.

The same fate is likely for artificial intelligence, says Sima Namiri-Kalantari, a partner in Crowell & Moring’s Antitrust and Competition Group. “AI, particularly generative AI, is exposing its providers and users to potential antitrust violations,” she says. “There are a lot of questions about AI in an antitrust context and no bright-line answers. Litigation is just getting started, and the courts will have to sort it all out.”

Price fixing: The most usual suspect

Generative AI uses machine learning models to produce new content, including text, images, and other media. The models learn the patterns and structure of their training data and then generate new data with similar characteristics.

Perhaps the most usual suspect for AI antitrust enforcement is price fixing. Companies that use generative AI pricing software could potentially be on the hook if that software draws on a database pooling their data to set prices in their market.

That’s precisely the conduct alleged in several class action suits in district courts across the United States against AI software companies and their users in a number of industries.

For example, class actions have been filed against hotel operators in Las Vegas and Atlantic City concerning their use of algorithms that generate room-specific pricing recommendations. A district court judge in Nevada recently dismissed one of the complaints (albeit with leave to amend) because the plaintiffs failed to allege an agreement.

Namiri-Kalantari notes that some companies’ use of generative AI has prompted antitrust investigations by the Justice Department and the Federal Trade Commission. “Between the class action lawsuits and the government investigations,” she says, “there will be more clarity about how AI may create antitrust liability and who could be on the hook for violations. The antitrust bar is watching closely.”

The class actions underscore how plaintiffs may accuse businesses of using AI as a tool for collusion. But collusion isn’t always overt, says Namiri-Kalantari. Programmers and users of AI might be accused of colluding where they allow their data to be used in a pooled database encompassing many users—even if all data is anonymous. The key unresolved question is whether use of the database by multiple parties constitutes information sharing or an agreement among the parties to collude in violation of antitrust laws.

AI as a tool for monopolization

Individual companies also could potentially use generative AI to create or perpetuate a monopoly in violation of antitrust laws, since it could allow them to engage in hyper-targeted predatory or exclusionary conduct to obtain or maintain a monopoly. For example, an AI tool could use data to target less brand-loyal customers with predatory pricing offers or rebates.

While the manner in which businesses may use AI in an exclusionary or predatory manner remains to be seen, it’s top of mind for enforcement officials.


Be proactive and preventive

AI-related antitrust litigation is likely to grow. The volume of cases is rising, driven by scant legal precedent, an exploding generative AI market, and increasing scrutiny from federal authorities and the plaintiffs’ bar.

Companies that use AI should take proactive and preventive steps to minimize antitrust exposure. The key is to know exactly what the AI is being used for, who has access to your data, and how the AI works. Without this fundamental knowledge, prevention becomes much tougher.

Namiri-Kalantari urges companies to take a number of measures. The first seems self-evident, yet many companies don’t do it: Use all of the tools reasonably at your disposal to comply with antitrust laws. More specifically, companies should closely review contracts for AI tools to determine potential exposure and revise them accordingly, and thoroughly cover AI in annual antitrust compliance training for employees.
