Companies in various industries are increasingly incorporating artificial intelligence (AI) into critical aspects of their businesses, and for good reason, as AI has the potential to confer many substantial benefits: It can help companies identify and create operational efficiencies, reduce costs, streamline job processes, improve customer experiences, optimize business strategies, and increase profitability.
However, widespread adoption of AI also has the potential to create substantial antitrust risks for businesses. US federal antitrust enforcement agencies have grown increasingly leery of companies’ use of AI in certain circumstances. Companies and executives should be aware of the antitrust risks posed by incorporating AI into their businesses, and they should take steps to ensure that their use of AI complies with the antitrust laws. In this article, we:
- Summarize the antitrust framework within which AI is assessed in the United States.
- Highlight US regulators’ concerns regarding how AI may be used for anticompetitive purposes.
- Offer practical guidance and best practices for avoiding antitrust allegations and violations when utilizing AI.
US FEDERAL ANTITRUST LAWS ARE APPLICABLE TO AI
The federal antitrust laws govern companies’ interactions with each other and with consumers, including their use of AI. The purpose of the federal antitrust laws is to promote a competitive marketplace and protect consumers and workers from business practices that are likely to reduce competition and lead to higher prices, reduced quality or levels of service, less innovation, increased barriers to entry, or lower wages. These practices are often grouped into two categories:
- Agreements between or among market participants that reduce or facilitate a reduction in competition, such as agreements to fix prices, allocate customers or markets, rig bids, restrict output, or exchange competitively sensitive information. As applied to companies’ use of AI, antitrust regulators have voiced concerns, and private plaintiffs have filed claims for damages alleging, that competitors’ common adoption of AI-powered pricing algorithms fed with commonly shared industry data could constitute an anticompetitive agreement to set artificially inflated prices.
- Single-firm conduct, which includes actual and attempted monopolization and monopsonization through exclusionary conduct. For example, antitrust regulators have voiced concerns that an incumbent firm could stifle innovation in AI by bundling its AI products with proprietary software that favors the firm itself and its preferred business partners over new entrants, which could constitute a form of exclusionary conduct in violation of the antitrust laws.
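The first concern above, competitors commonly adopting a pricing algorithm fed with shared industry data, can be made concrete with a deliberately simplified sketch. Everything here is hypothetical and invented for illustration: the firm names, the data, and the toy pricing rule are not drawn from any actual case or product.

```python
# Hypothetical illustration of the mechanism regulators describe: when rival
# firms each run the same pricing tool on the same pooled industry data, they
# can arrive at the same elevated price with no explicit agreement.

def algorithmic_price(shared_industry_data, markup=0.25):
    """Toy pricing rule: mark up the average price observed in the shared data."""
    avg = sum(shared_industry_data) / len(shared_industry_data)
    return round(avg * (1 + markup), 2)

# Prices pooled through a common (hypothetical) industry data provider.
shared_data = [100.0, 102.0, 98.0]

# Each competitor independently runs the same tool on the same inputs...
competitor_prices = {firm: algorithmic_price(shared_data) for firm in ("A", "B", "C")}

# ...and every firm lands on an identical, inflated price -- the parallel
# outcome that can invite allegations of algorithmic collusion.
print(competitor_prices)
```

The point of the sketch is not that any particular algorithm is unlawful, but that a common tool plus common data can produce the parallel pricing behavior enforcers scrutinize, even absent any communication among the firms.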
Violations of the antitrust laws can result in serious consequences for companies and individuals. Such consequences include:
- Civil enforcement actions by federal antitrust enforcement agencies, which can result in civil monetary penalties as well as broad-ranging injunctions governing future conduct.
- Private civil actions by individuals or businesses injured by the violation. Such lawsuits can be extremely costly to defend, both in monetary terms and in the lost time of officers and employees, and can result in treble damages (three times the losses suffered by the complaining party) and substantial attorneys’ fees.
- In certain scenarios, criminal prosecution on felony charges for both the corporation and culpable individuals (e.g., internal management, employees, or third parties). Corporations found guilty of criminal violations of the antitrust laws face significant fines (up to US$100 million, and even more in certain circumstances), while individuals may be subject to imprisonment (up to 10 years) and significant fines (up to US$1 million).
US FEDERAL ANTITRUST ENFORCEMENT AGENCIES HAVE PUBLICLY EXPRESSED CONCERNS ABOUT AI
The Federal Trade Commission (FTC) and the Department of Justice (DOJ), the two agencies responsible for enforcing the federal antitrust laws, have issued numerous public statements this year addressing the anticompetitive potential of AI.
In February 2023, the DOJ’s Principal Deputy Assistant Attorney General, Doha Mekki, discussed the antitrust risks associated with companies’ use of AI to aggregate data and collusively make decisions affecting pricing and output. Mekki noted that “several studies have shown that [] algorithms can lead to tacit or express collusion in the marketplace, potentially resulting in higher prices, or at a minimum, a softening of competition”1 and explained that, while the DOJ is committed to ensuring that businesses can use AI to innovate and compete, the DOJ will also take action to prevent AI from being used to harm competition.
The FTC has similarly cautioned against potential antitrust risks associated with AI. FTC Chair Lina Khan warned that state and federal enforcers “need to be vigilant early” as AI develops to ensure businesses are complying with existing laws:
“[T]he A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices . . . The FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion . . . and unfair methods of competition. . . . Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.”
In June 2023, the FTC published an article on its Technology Blog discussing various potential single-firm competition concerns raised by generative AI. Because generative AI depends on a set of necessary inputs (including data, talent, and computational resources), control over one or more of those inputs may create unlawful barriers to entry, slow innovation, or create opportunities for unlawful product bundling, tying, and exclusive dealing. The FTC also raised concerns about open-source innovation, warning that firms can deploy “open first, closed later” tactics that undermine long-term competition: a firm may initially use open source to attract business, establish steady streams of data, and accrue scale advantages, then close off its ecosystem to lock in customers and lock out competitors.
As competition issues surrounding the use of AI continue to develop, both the FTC and the DOJ say they will support a marketplace where businesses can compete and continue to innovate, but they caution companies that the agencies will use their full range of tools to identify and address circumstances in which AI can be used to harm competition. In keeping with that goal, both the DOJ and FTC announced earlier this year at the American Bar Association Antitrust Law Spring Meeting that they are taking steps to address these issues, including (1) hiring data scientists, computer scientists, and economists to help them better understand AI and detect anticompetitive conduct; (2) conducting outreach to industry experts and technologists to learn more about how algorithms work; and (3) developing new guidance on the antitrust risks associated with AI.
PRACTICAL GUIDANCE
AI is quickly evolving, and enforcement of the antitrust laws will likely only increase as AI becomes even more integrated into companies’ day-to-day operations. Companies using AI should consider the following measures to help identify, manage, and reduce antitrust risk:
- Conduct an antitrust risk assessment of the company’s use of AI. Relevant questions include:
  - Does the AI make competitive decisions for the company, particularly related to pricing?
  - Does the AI rely on competitor data to inform decisions, particularly competitively sensitive data, such as pricing and production or supply volumes?
  - What is the size of the dataset that feeds into the AI tool?
  - Is the AI the same as or similar to AI adopted by others in the industry?
  - Does the company rely entirely on the AI to make decisions, or is human judgment still part of the decision-making process?
- Exercise caution when disclosing the company’s use of AI to make competitive decisions related to pricing, production, wages, and similar matters, or when encouraging others to do the same, as this could be construed as an invitation to collude. Similarly, disclosing to competitors the inputs the AI relies on to make competitive decisions, or encouraging competitors to adopt AI that relies on the same industry data to make competitive decisions, can create antitrust risk.
- Scrutinize the company’s use of aggregated third-party data sources, pricing algorithms, and revenue management tools that rely on AI to ensure the data is appropriately sourced, accurate, and unbiased, and that the tools are not at risk of being misused or susceptible to allegations of collusive behavior.
- Avoid imprecise or inaccurate language in marketing materials that could be misconstrued to imply that the company’s or industry’s use of AI is facilitating or could facilitate collusive behavior or coordinated effects.
- Update antitrust compliance policies and training so that employees across all relevant departments (e.g., sales, marketing, operations, engineering, product development) are aware of the ways in which the use of AI can create antitrust risks.
- Integrate antitrust counsel into the company’s development, implementation, and marketing of AI, including if the company links AI applications with other products or intends to shift from an open-source to a proprietary ecosystem. Antitrust counsel can assist in assessing the risks of such technology and provide guidance on additional safeguards that may help mitigate those risks.