Tennessee may be on the verge of enacting one of the most punitive artificial intelligence laws in the country. Under this proposal, the state would transform routine design features of modern chatbots into Class A felonies.
On December 18, 2025, Tennessee State Senator Becky Massey introduced SB 1493, which imposes Class A felony liability for developers who train AI models that engage in certain prohibited conduct. Some of the targeted conduct is uncontroversial, including AI systems that encourage suicide or homicide.
But the bill goes much further, criminalizing core, mainstream features of virtually every major AI chatbot currently in use.
Section 39-17-2002(8) of the bill makes it a Class A felony for a developer to “knowingly train artificial intelligence to . . . [s]imulate a human being, including in appearance, voice, or other mannerisms.” That language describes the fundamental design of modern conversational AI chatbots.
Anyone who has interacted with ChatGPT, Gemini, Claude, or similar AI systems understands that AI chatbots are explicitly designed to simulate human conversation. The simulation is not incidental to the LLM-based product. It is the product.
That section of the bill also treats the deployment of voice-based AI chatbots as criminal conduct. Yet voice modes have been standard, clearly disclosed features of leading AI systems for more than two years and are used daily by millions of people.
Under this framework, developers could face felony exposure merely for training AI models capable of ordinary chat or voice interaction. It creates immediate risk for foundation-model developers whose systems are routinely adapted by downstream actors. Once an AI model enters a commercial ecosystem, its developers rarely control how it is fine-tuned or deployed. Imposing felony exposure at the training stage, therefore, represents a significant departure from traditional regulatory frameworks.
In practical terms, the law would require companies to cease training such models altogether and halt their deployment in the marketplace, forcing them to withdraw widely used products or operate under the threat of severe criminal penalties.
The bill compounds this problem by also criminalizing the training of AI systems that “act as a companion to an individual” and that “provide emotional support.” These terms are inherently subjective and overbroad. Modern LLMs are designed to communicate in ways users perceive as empathetic.
In fact, seeking advice, guidance, or conversational support is among the most common uses of AI chatbots. Surveys indicate that approximately 72 percent of teenagers have used AI companions at least once, and more than half report using them multiple times per month. These are not fringe applications. They are mainstream consumer uses.
Rather than focusing on demonstrable harms or meaningful disclosure obligations, the bill criminalizes entire categories of lawful, socially accepted behavior and effectively prohibits the public from accessing tools they already use responsibly.
The proposal also reflects a fundamental misunderstanding of how AI systems are trained and how they are used by the public. Developers cannot meaningfully train models to be conversational while selectively excluding abstract use cases such as companionship or advice seeking. Training models to communicate naturally is precisely what the market demands. Attempting to outlaw that functionality would not merely affect niche applications. It would undermine the entire category of conversational AI.
To be clear, there are real and serious concerns at the margins. There are high-profile cases in which minors have allegedly engaged in extended conversations with AI chatbots and were encouraged to engage in self-harm or suicide. At least six such cases have been filed. In response, states such as California and New York have already taken targeted legislative action to address these risks, particularly as they relate to children and disclosure obligations.
But sweeping proposals like SB 1493, which criminalize the training of an entire category of AI companion chatbots and ordinary conversational interaction, completely miss the mark. Rather than addressing discrete, demonstrable harms, they outlaw the core functionality of widely used tools.
The practical reach of the statute would extend far beyond Tennessee’s borders. AI models are trained and then deployed nationally, often globally. A single state criminalizing foundational design choices could force developers to remove their AI products from entire markets or lobotomize products for the entire country rather than a single jurisdiction.
This Tennessee proposal emerges amid a broader national trend. In the absence of comprehensive federal regulation, states have increasingly moved to regulate AI themselves. More than 1,000 AI-related bills are currently pending across the country. This rapid legislative activity reflects both the urgency policymakers feel and the difficulty of regulating a technology whose capabilities continue to evolve faster than traditional statutory frameworks.
But SB 1493 should serve as a cautionary example of how not to regulate AI. When ordinary product design choices carry felony exposure, the law is no longer guiding innovation. It is deterring it.
