Current discourse surrounding AI is increasingly dominated by a sense of urgency, a feeling that we stand at a dangerous precipice requiring immediate, decisive regulatory action. This reflex to govern a technology still in its dynamic infancy has led lawmakers across the country to propose “sea walls” of hasty regulations, laws, and guidance that often create more confusion than clarity. Yet just as sea walls block the natural flow of water, these regulatory barriers risk blocking the natural pace, and the benefits, of AI innovation.
We aim to provide a more sober accounting of our actual knowledge of AI versus our speculative fears, in an effort to offer practical guidance in the AI regulatory space. Ultimately, we believe that this accounting supports a harmonized federal approach to AI regulation rather than the creation of a state-by-state patchwork.
What We Know: The Current Landscape
First, we know that fears surrounding AI, while sometimes valid, are often amplified by misinformation and speculative anxieties. For example, the narrative of an uncontrollable superintelligence or ubiquitous, job-destroying robots, while compelling in fiction, overshadows the more nuanced realities of AI development. While AI has caused and will continue to cause labor market shifts, more attention should be devoted to training workers to adapt to new economic realities. AI presents the latest instance of technologically induced job displacement, a challenge we’ve encountered before and have the capacity to tackle if lawmakers recognize the importance of this issue and embrace evidence-based policies. However, centering AI policy on doomsday estimates of immediate waves of widespread joblessness distracts from larger, long-term policy considerations and potential economic retooling.
The same is true of discourse around AI-generated mis- and disinformation. AI has helped bad actors spread false content rapidly and at low cost. However, empirical evidence suggests that these new capabilities have not poisoned our information ecosystem. This atmosphere of heightened fear creates fertile ground for reactive policymaking, pushing for swift action before a thorough understanding of the technology's trajectory or its specific societal impacts is achieved.
Second, we know that numerous stakeholders are actively pushing for the rapid adoption of model AI legislation, often at the state level, with mixed motives. While some proponents are driven by genuine safety concerns, others may see an opportunity to shape the regulatory landscape to their advantage, potentially creating barriers to entry for smaller innovators or entrenching the positions of established players. This rush can lead to the widespread adoption of model laws that trap states in the technological past. One thing we know all too well: once a law is on the books, it becomes very hard to amend or repeal, thanks to special interests defending the status quo and legislators seeking to defend their reputations.
One such example comes from the “likeness legislation” space. Here, groups with deep interests in specific aspects of the entertainment sector have worked to create increasingly restrictive rules around the use of a person’s likeness in the media. Although addressing serious threats like non-consensual pornographic deepfakes is critical, legislation that restricts First Amendment-protected expression and speech could have disastrous consequences. In one example, California recently passed legislation that would allow record labels to obtain, assign, and enforce an individual’s right to their likeness. The recently signed “TAKE IT DOWN Act,” which generally criminalizes nonconsensual pornographic deepfakes at the federal level, has also come under criticism as potentially being used to stifle political and dissident speech.
Third, we know that many jurisdictions, particularly at the state level, currently lack the deep institutional capacity to design, implement, and enforce complex technology-related laws in a fair, predictable, and adaptable manner. The history of technology regulation is replete with examples of well-intentioned rules leading to unintended consequences. Several economic studies on tort liability provide cautionary tales. For instance, researchers have empirically documented that changes in liability rules can have significant, and not always straightforward, dynamic effects on innovation incentives. Eric Helland and his colleagues demonstrated that increasing liability on an upstream producer, intended to enhance consumer safety, can paradoxically decrease the downstream distributor's caution, potentially leading to increased sales of a risky product if liability is shared. Another study by Alberto Galasso and Hong Luo found that a surge in liability risk for polymer suppliers had a “large and negative impact on downstream innovation in medical implants,” showing how regulatory pressures can “percolate throughout a vertical chain and may have a significant chilling effect on downstream innovation.” If states struggle with the nuances of product liability, a far more established field, their ability to effectively regulate the rapidly evolving and deeply technical domain of AI without causing similar chilling effects is a serious concern.
Fourth, we know that technological progress “builds upon itself.” Innovation is an interconnected web; advancements in one area often unlock progress in others. AI is not just another technology; it is a foundational platform, an invention that can propel progress in related domains. Prematurely restricting AI's development through broad or poorly conceived regulations could, therefore, have cascading negative effects, slowing innovation not just in AI itself but across countless dependent sectors.
Fifth, we know that defining "Artificial Intelligence" and related technologies for regulatory purposes is itself a monumental challenge. Is AI a sophisticated statistical model? Is it a complex rules-based system? Overly broad or static definitions risk capturing technologies far beyond the intended scope or becoming rapidly obsolete. This definitional ambiguity makes crafting precise, effective, and future-proof regulation exceedingly difficult.
Sixth, we know that AI development is a global phenomenon, and innovation is not confined by national borders. While one nation might impose prematurely restrictive regulations, others will continue to advance. This creates a real risk of an uneven playing field, shifting the locus of AI leadership and its associated economic and strategic benefits. Purely domestic regulatory strategies that fail to account for this global interconnectedness may prove ineffective or even counterproductive to national interests.
Seventh, we know that many existing legal frameworks—from intellectual property law and data privacy regulations to product liability and contract law—already apply to AI systems and their outputs, albeit with new questions and complexities. Before layering entirely new AI-specific regulatory regimes, a thorough assessment of how current laws can be adapted, clarified, or more robustly enforced is essential. A rush to create bespoke AI laws without this foundational analysis risks creating duplicative, conflicting, or unnecessarily burdensome rules that could hinder rather than help.
What We Don't Know: The Uncharted Territory
Against these knowns, we must weigh the significant unknowns that counsel caution.
First, we don't know how early regulations will inadvertently lock in current technological paradigms. AI is evolving at a breathtaking pace. Rules drafted based on today’s models and capabilities could quickly become obsolete, or worse, actively hinder the development of newer, safer, or more efficient approaches. By favoring incumbent technologies or specific architectures, such regulations could stifle the very experimentation that leads to breakthroughs.
Second, we don't know how our global adversaries will develop and deploy AI. If the United States imposes overly burdensome domestic regulations while others press ahead with fewer constraints, it risks ceding technological leadership. This has implications not only for economic competitiveness but also for national security, as AI becomes increasingly integral to defense and intelligence capabilities.
Third, we don't fully know which AI use cases will prove truly transformative and which will be incremental. We are still in the nascent stages of exploring AI's vast potential. Many of the most profound applications may be unforeseen today. Overly broad or preemptive restrictions on certain types of AI research or application areas—based on current fears rather than demonstrated harm—could prevent us from discovering these transformative uses. Imagine if early aviation fears had led to regulations that stifled the development of jet engines before their potential was understood.
Fourth, we don't know the full spectrum of unintended consequences that premature AI regulations might unleash. Economic research on tort reform underscores that policy interventions can have “dynamic effects on innovation incentives that go beyond their short-term impact.” Just as changes in liability altered the rate of technological change in healthcare, AI regulations could create unforeseen economic distortions, shift research priorities in undesirable ways, or create new avenues for regulatory arbitrage. The complexity of the AI ecosystem means that interventions in one area can have unexpected and potentially detrimental ripple effects elsewhere.
Navigating the Path Forward: Prudence Over Panic
Given these knowns and unknowns, the path forward demands prudence over panic. Instead of rushing to implement top-down regulations at the state level, we should adopt a more agile, evidence-based, and iterative approach led by the federal government. This involves fostering robust research into AI development and deployment, establishing flexible regulatory frameworks that can adapt to technological advancements, and focusing on specific, demonstrable harms rather than speculative risks.
All of the views and opinions expressed in this article are those of the authors and not necessarily those of The National Law Review.