AI is now integral to modern life, curating our news, driving autonomous vehicles, and aiding medical diagnoses. While these advancements bring convenience and efficiency, they also present legal and ethical challenges that existing laws did not anticipate. Deepfake technology exemplifies the problem: hyper-realistic fabricated videos show individuals doing or saying things they never did. Scammers, for instance, have used AI to create deepfake videos of popular TV doctors to promote counterfeit health products on social media. This raises a new but pivotal question: Can existing laws handle AI-related issues, or do AI's complexities demand new legislation?
1. Existing Laws: Stretching to Fit AI
Are our current legal frameworks robust enough to manage the challenges posed by AI? The Federal Trade Commission Act, for instance, empowers the Federal Trade Commission (FTC) to combat deceptive advertising and unfair business practices, and that authority extends to AI developers and to companies that incorporate AI into their products and services. AI companies are held to the same standards of truthfulness and transparency as traditional businesses, and misrepresenting an AI product's capabilities can, and does, lead to FTC enforcement actions.
On September 25, 2024, as part of its Operation AI Comply, the FTC announced five cases targeting AI-related deception. One of these complaints was against DoNotPay, which had marketed itself as “the world's first robot lawyer” and an “AI lawyer.” The FTC alleged that DoNotPay's services did not live up to these claims, thereby misleading consumers. DoNotPay agreed to a proposed settlement that requires it to cease its deceptive claims, pay $193,000, and inform certain subscribers about the case.
Also in September 2024, the Federal Election Commission voted 5-1 to issue a notice clarifying that the existing Federal Election Campaign Act’s prohibition against “fraudulent misrepresentation” applies to AI-generated content and is “technology neutral.” The notice specifically addressed the use of AI to generate misleading campaign ads that appear to be authorized by opponents when they are not, and it was explicitly aimed at candidates running in the 2024 election cycle.
2. New Legislation: Addressing AI's Unique Challenges
While existing laws cover certain aspects of AI, in some areas they are stretched thin, and new legislation is needed. Deepfakes are a prime example of where existing laws fall short: using deepfake technology, a bad actor can create realistic videos depicting individuals in explicit situations without their consent. The problem is that existing laws contemplate the nonconsensual publication of real explicit videos, so-called “revenge porn,” not synthetic fabrications.
California Penal Code Section 647(j)(4) criminalizes the intentional distribution of nonconsensual explicit images: “Any person who intentionally distributes the image of the intimate body part or parts of another identifiable person, or an image of the person depicted [without consent]...” This language focuses on the distribution of actual images of an identifiable person, indicating that the statute applies to real photographs or videos. It does not explicitly address digitally fabricated or AI-generated content, in which the depicted individual's likeness is synthetically created.
California's Assembly Bill No. 602, signed into law in October 2019, gave individuals a cause of action against creators and distributors of nonconsensual sexually explicit material produced using digital or electronic technology. The law defined a “depicted individual” as someone who appears, as a result of digitization, to be engaging in sexually explicit acts they did not actually perform. This legislation allows victims to seek damages and obtain injunctions against those responsible for creating or distributing such deepfake content. As of September 2024, 23 states have passed some form of nonconsensual deepfake law.
3. Case Law: Reshaping the Application of Existing Laws to AI
Large language models (LLMs) have raised complex copyright questions. LLMs are created by training on vast amounts of text, some of it copyrighted. In December 2023, the New York Times sued OpenAI and Microsoft, alleging that the companies used millions of its articles without permission to train their AI models; numerous authors have filed similar lawsuits. OpenAI contends that ingesting copyrighted works to create LLMs falls under the fair use doctrine, is transformative, and does not produce substitutes for the original works. Whether these arguments will prevail is unclear, but the case will set a precedent when it is decided.
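For readers curious what “training on text” means at a mechanical level, the toy sketch below (in Python, using an invented one-line corpus) illustrates the general idea behind the transformativeness argument: training distills statistical patterns from text rather than storing or republishing the articles themselves. This is a drastic simplification for illustration only; production LLMs learn far richer patterns with neural networks and next-token prediction at vastly greater scale, and this sketch does not describe how OpenAI's models are actually built.

    # A toy stand-in for language-model training: count which word tends
    # to follow which. The corpus below is a hypothetical placeholder.
    from collections import Counter, defaultdict

    corpus = "the court held that the use was fair and the use was transformative"

    transitions = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word][next_word] += 1

    # What remains after "training" is a table of learned frequencies,
    # not a verbatim copy of the source text.
    for word, followers in transitions.items():
        total = sum(followers.values())
        print(word, "->", {w: round(n / total, 2) for w, n in followers.items()})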
The current copyright framework in the United States protects original works of authorship, fixed in a tangible medium of expression, that are created by human authors. AI-generated works raise questions about who holds the copyright: the programmer, the user, or perhaps no one at all. In March 2023, the U.S. Copyright Office provided this guidance: “When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.”
In October 2024, artist Jason M. Allen appealed the U.S. Copyright Office's denial of copyright registration for his AI-generated artwork, “Théâtre D'opéra Spatial.” The Office had refused registration on the ground that works created solely by AI, without human authorship, are not eligible for copyright protection. Allen's appeal challenges that stance, arguing for recognition of the human creativity involved in using AI tools. The case is still pending, leaving open the question of whether courts will stretch existing copyright law or Congress will step in with new legislation.
Closing Thoughts
The rapid integration of AI into our daily lives presents immense opportunities alongside significant challenges. Existing legal frameworks like the Federal Trade Commission Act can sometimes be stretched to address AI-related issues, as the FTC's action against DoNotPay shows, but there are clear instances where these laws fall short. Technologies such as deepfakes pose risks that traditional statutes were not designed to mitigate and that can cause serious personal and societal harm.
These gaps highlight the necessity of updating our legal system to keep pace with technological advancements. The evolving landscape of intellectual property law, particularly concerning AI-generated content, challenges our conventional understanding of authorship and ownership. Ongoing legal disputes over copyright protections for AI-created works underscore a pressing need for clarity, whether through judicial precedent or new legislation.
Ultimately, safeguarding society from the potential harms of AI while fostering innovation requires a dynamic legal framework that is both flexible and robust. Collaboration among lawmakers, technologists, legal experts, and the public is crucial to achieving this balance. By proactively addressing the legal challenges posed by AI, we can harness its benefits and mitigate its risks, ensuring that technological progress contributes positively to society.