Trade Secrets: Now Even Your Dog Knows Them (Thanks, Remote Work & AI!)
Tuesday, August 12, 2025

Mike Tyson once said "[e]veryone has a plan until they get punched in the face." The quote captures the confidence that organizations may have in their existing trade secret plans, until those plans collide with the evolving complexities of trade secret protection in an era of remote work and artificial intelligence (AI).

In today’s fast-paced world, where remote work has become the norm and AI has been revolutionizing industries since the launch of OpenAI’s ChatGPT in the fall of 2022, safeguarding your trade secrets has never been more critical. As organizations adapt to this new landscape, their trade secrets face new and more complex vulnerabilities, making it essential to have a robust and continuously adaptable strategy in place to address these trade secret protection challenges.

The digitization of trade secret records is a particular challenge. Digitization has made it easier for employees to access trade secrets in unsecured locations and for partners and former employees to abscond with confidentially shared information unbeknownst to the trade secret owner. The digitization of information, and the ease of transferring it, has led some to act as if information cannot be owned. In addition, old ways of securing information are now vulnerable to autonomous AI training that scans vast swaths of digitized data while circumventing traditional secrecy measures. Even user prompts to AI models are being hijacked to disclose or destroy trade secrets. It is mission critical for companies to augment their trade secret recording and storage practices with updated confidentiality agreements, company policies, employee training, and restrictions on AI interactions.

This blog explores these modern challenges to trade secret protection, highlights relevant legal cases that illustrate these risks, and provides a summary of best practices for organizations looking to protect their valuable trade secrets.

THE MODERN TRADE SECRET LANDSCAPE

Before the COVID-19 pandemic, organizations could largely safeguard their trade secrets within physical office settings. Today, as remote work becomes standard and AI is integrated into our routines, the protection of sensitive information has become increasingly challenging. Critical data can slip through the cracks of home offices and digital platforms. This calls for innovative strategies and proactive measures to protect what truly matters, and it is essential to prioritize the security of your valuable information in this new landscape.

The federal Defend Trade Secrets Act (DTSA), enacted in 2016, provides a federal cause of action for trade secret misappropriation. Before the DTSA, such claims were governed primarily by state law, most commonly under the Uniform Trade Secrets Act (UTSA), which most U.S. states have adopted. Now, a trade secret owner can bring a civil trade secret misappropriation action in federal court if the trade secret is related to a product or service used in, or intended for use in, interstate or foreign commerce.

The DTSA defines a “trade secret” as any information that derives independent economic value from not being generally known and is subject to reasonable efforts to maintain its secrecy (see 18 U.S.C. § 1839(3)). Examples of trade secrets include formulas, algorithms, processes, designs, customer lists, supplier lists, business plans, budgets, software code, and other types of commercially valuable but not publicly known data. There are also AI-related examples, including AI algorithms, AI training datasets, and AI system architectures.

POTENTIAL RISKS INTRODUCED BY REMOTE WORK

Remote and hybrid work have become the new normal for many companies, which are issuing company laptops or other portable devices to their personnel, allowing work to be done anywhere and at any time. Remote work in some capacity is likely here to stay, and so too are the associated security risks, some of which are highlighted below.

Increased Data Exposure. Accessing an organization’s network from outside the office, whether on company or personal equipment, increases the risk of exposing sensitive data to unintended parties. When employees connect through unsecured Wi-Fi networks (including some home networks) or rely on various cloud services, the chances of data leaks—whether accidental or deliberate—soar. It is essential to remain vigilant and prioritize security to safeguard sensitive information in this increasingly remote work environment. For example, in TileBar v. Glazzio Tiles (E.D.N.Y. 2024), several former TileBar employees allegedly misappropriated TileBar’s confidential customer lists and pricing data after joining its competitor Glazzio Tiles, by accessing, downloading, and transferring this information to their personal cloud storage and USB storage devices. The court denied the defendants’ motion to dismiss, allowing the case to proceed, highlighting as part of its decision the steps that TileBar took “to protect the confidentiality of the purported trade secrets, from requiring employees to abide by the confidentiality policies in the Employee Handbook and the Addenda to the Handbook, to using dual-password protection and multi-factor authentication.” Id. at 192, 211. This ruling highlights the need for strong internal safeguards to protect a company’s proprietary and confidential information.

Weakened Supervision and Oversight. Remote work environments present unique challenges for ensuring data security, as they reduce an organization’s ability to monitor its remote personnel, which can increase the risk of undetected data breaches and consequential information theft (i.e., the theft of sensitive data that leads to significant, indirect losses for the victim, beyond the immediate loss of the data itself). Traditional security protocols—like keycard access, locked file cabinets, and direct IT supervision—simply can’t be applied in the same way. This shift underscores the importance of organizations rethinking their security strategies to protect sensitive information in a digital age. The security issues can extend to other types of data misuse. In Epic Sys. Corp. v. Tata Consultancy Servs. Ltd. (W.D. Wis. 2017, affirmed by the 7th Cir. in 2020), Epic sued Tata for trade secret misappropriation after Tata employees allegedly accessed Epic’s confidential information using stolen credentials and downloaded thousands of proprietary documents. The case demonstrates the risks of failing to monitor user behavior and enforce remote access limitations, especially for outsourced or offshore teams. See also Waymo LLC v. Uber Techs., Inc. (N.D. Cal. 2017), where Levandowski, a former Waymo employee, used his company credentials and resources to improperly download and steal “‘9.7 GBs of sensitive, secret, and valuable internal Waymo information,’ including ‘confidential information regarding Waymo’s LiDAR systems and other technology,’” before resigning and founding Ottomotto and Otto Trucking, which Uber subsequently acquired for $680 million.

Dispersed Collaboration Tools. Collaboration platforms like Slack, Zoom, Dropbox, AirDrop, Google Drive, and Microsoft Teams have truly transformed the way we communicate and work together. However, this evolution comes with its own set of challenges, particularly when it comes to data security. When sharing settings are misconfigured, files are shared with the wrong people, or the wrong people are included in group communications, the risk of data leakage escalates significantly. It is crucial to use these tools in accordance with an employer’s formal and informal policies on protecting its confidential information, so as to protect sensitive information while enjoying the benefits of seamless collaboration. In DraftKings Inc. v. Hermalyn (D. Mass., affirmed by the 1st Cir. in 2024), DraftKings sued former executive Hermalyn for allegedly misappropriating DraftKings trade secrets by transferring over eighteen proprietary DraftKings documents to his personal devices, using unauthorized devices and programs such as Dropbox, AirDrop, and Slack, and violating his non-compete and non-solicitation agreements with DraftKings.

Departing Employees and “Digital Briefcases.” The sense of flexibility and comfort that remote work provides can also create a sense of detachment, which can lead employees to become less vigilant about their trade secret obligations. In USA v. Umetsu (D. Haw. 2022), Umetsu, an information technology professional, pled guilty to sabotaging his former employer’s domain registrar account after his employment ended in 2019. Using his old credentials, he altered DNS settings, rerouting the company’s website and email to unauthorized servers and locking the company out, disabling communications for days. Umetsu then prolonged the outage by taking a variety of steps to keep the former employer’s IT staff locked out of the website. The company should have invalidated Umetsu’s credentials immediately upon his termination. Indeed, in DM Trans, LLC v. Scott (7th Cir. 2022), the court found that an employer was not entitled to injunctive relief where it neither requested nor took steps to ensure that departing employees deleted software program data from personal devices.

POTENTIAL RISKS INTRODUCED BY AI

Due to their ubiquitous nature, AI systems create potential risks for both remote and in-office personnel. Traditionally, trade secrets have been safeguarded by their very secrecy and reasonable efforts taken by their owners to maintain that secrecy. However, AI’s unique characteristics are blurring these lines, forcing companies and courts to redefine what constitutes a protectable secret and what secrecy measures are reasonable. Moreover, the autonomous and evolving nature of AI systems further complicates matters. Unlike static software, AI models learn and adapt, generating new knowledge that was not explicitly programmed. This raises the critical question of whether the output of an AI system, or even its internal “thought process” and parameters, can be considered a trade secret. Courts are grappling with these novel issues.

Data Aggregation and Model Training. AI tools, especially large language models (LLMs), are transforming how we process information. These powerful tools train on extensive datasets, and each training run ingests vast quantities of data. Misusing internal datasets—whether accidentally, from misunderstanding, or deliberately—poses a significant risk of trade secret disclosure. In Intercept Media, Inc. v. OpenAI, Inc. (S.D.N.Y. 2025), the court noted that the training process embedded proprietary insights into the AI model, demonstrating that misappropriation during AI training can permanently encode proprietary information and make reversal difficult. Thus, once a misappropriated trade secret is used to train a model, the damage is effectively irreversible due to model memory, complicating injunctive relief or restitution. Similarly, in Financial Information Technologies, LLC v. iControl Systems, USA, LLC (11th Cir. 2021), the court recognized that the misappropriation of trade secrets in software systems can result in permanent encoding of proprietary methods and processes, making it difficult to disentangle the misappropriated information. The jury awarded significant damages, reflecting the harm caused by the irreversible integration of trade secrets into the defendant’s products.

As shown in these cases, one of the most significant challenges stems from the very nature of AI itself. Machine learning models, particularly generative AI, are trained on vast datasets, often ingesting immense quantities of information, including potentially sensitive proprietary data. This creates a considerable risk of unintentional disclosure during development, testing, or deployment. For instance, an AI system trained on confidential customer data could inadvertently reveal patterns or insights that, if reverse-engineered or analyzed by a competitor, could compromise a trade secret. The sheer volume and complexity of data involved make it difficult to trace the origin of every piece of information within an AI model, posing a significant hurdle in proving misappropriation.
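One practical mitigation is to screen documents for sensitivity markers before they ever enter a training corpus. The sketch below is a minimal illustration, not a prescribed implementation: the patterns and the `ACME-` document-ID format are hypothetical, and a production system would pair this screen with dedicated data-loss-prevention tooling.

```python
import re

# Hypothetical sensitivity markers for illustration only; real deployments
# would use organization-specific DLP rules maintained with counsel.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\btrade\s+secret\b"),
    re.compile(r"\bACME-\d{6}\b"),  # e.g., an internal document-ID scheme
]

def is_safe_for_training(document: str) -> bool:
    """Return True only if no sensitivity marker appears in the document."""
    return not any(p.search(document) for p in SENSITIVE_PATTERNS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass the screen; everything else is held back."""
    return [d for d in documents if is_safe_for_training(d)]
```

A screen like this is deliberately conservative: a false positive merely delays a document's use, while a false negative risks the irreversible encoding the cases above describe.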

Prompt Engineering. AI systems, particularly generative AI, can be tricked into revealing proprietary information through carefully crafted prompts. In OpenEvidence, Inc. v. Pathway Medical, Inc. (D. Mass. 2025), a central issue was whether “prompt injection” attacks, designed to extract “system prompts” from a generative AI model, constitute improper means of acquiring trade secrets. This case underscores the difficulty in defining what exactly constitutes a trade secret within a dynamic AI system and whether interacting with an AI in a novel way can be deemed misappropriation. Similarly, Neural Magic, Inc. v. Meta Platforms, Inc., et al. (D. Mass. 2020) involved allegations of stolen algorithms that enhance computer efficiency and enable advanced machine learning, showcasing the high stakes involved in protecting AI-related intellectual property in competitive environments.
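Technical defenses against such extraction attempts often begin with screening user prompts before they reach the model. The sketch below is purely illustrative: the patterns are made up for this example and `model_call` is a placeholder, and real guardrails layer classifiers, output filtering, and strict separation of system and user messages on top of any pattern matching.

```python
import re

# Illustrative injection signatures only; attackers routinely evade
# simple pattern lists, so this is a first layer, not a complete defense.
INJECTION_PATTERNS = [
    re.compile(r"(?i)ignore (all |any )?(previous|prior) instructions"),
    re.compile(r"(?i)(reveal|repeat|print).{0,40}(system prompt|initial instructions)"),
]

def looks_like_prompt_injection(user_prompt: str) -> bool:
    """Heuristic check for system-prompt extraction attempts."""
    return any(p.search(user_prompt) for p in INJECTION_PATTERNS)

def guarded_query(user_prompt: str, model_call) -> str:
    """Refuse suspicious prompts before they ever reach the model."""
    if looks_like_prompt_injection(user_prompt):
        return "Request refused: possible prompt-injection attempt."
    return model_call(user_prompt)
```

Logging refused prompts (rather than silently dropping them) also builds the kind of evidentiary record of “improper means” that disputes like OpenEvidence turn on.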

Reverse Engineering. AI has supercharged the ability of malicious actors to reverse-engineer proprietary software or systems. Tools like OpenAI’s Codex and DeepMind’s AlphaCode can analyze existing proprietary source code and generate functionally equivalent alternatives. When combined with data from APIs or compiled binaries, these tools facilitate the reconstruction of proprietary systems, blurring the line between legitimate analysis and misappropriation. The OpenEvidence, Inc. v. Pathway Medical, Inc. case, discussed above, illustrates how courts are grappling with the intersection of AI-assisted reverse engineering and trade secret law.

“Readily Ascertainable” Standard. Another area of concern is the “readily ascertainable” standard for trade secrets. The Defend Trade Secrets Act (DTSA) and state UTSA statutes require that information not be “generally known or readily ascertainable” to qualify as a trade secret. With the proliferation of AI tools capable of analyzing public-facing outputs and deducing proprietary algorithms or data patterns, what was once considered “not readily ascertainable” could become easily discoverable. However, as seen in cases like Compulife Software, Inc. v. Newman (11th Cir. 2024, cert. denied 2025), courts have shown a propensity to side with trade secret owners when new technologies present novel “improper means” of theft (e.g., a scraping attack on Compulife’s website), even if the owner’s secrecy measures might seem less stringent in the face of such advanced attacks.

Employee Mobility and Generative AI. Company personnel now use AI tools (e.g., GitHub Copilot, ChatGPT) that may unintentionally absorb, store, or reproduce proprietary information, which can then resurface at a competitor. In West Technology Group LLC v. Sundstrom (D. Conn. 2024), a former salesman used Otter.ai to transcribe and extract confidential information both before and after his termination. The court underscored that feeding proprietary information into third-party AI tools without authorization can constitute misappropriation.

Cloud Collaboration Tools & Data Leakage. With decentralized AI development and cloud-based workflows, trade secrets are exposed to a wider attack surface. Courts expect companies to take “reasonable measures” to maintain secrecy, and using AI in unsecured environments may fail that standard. In WeRide Corp. v. Kun Huang (N.D. Cal. 2019), former WeRide employees who had previously signed WeRide’s Proprietary Information and Inventions Agreement allegedly downloaded and transferred over 800 MB of proprietary autonomous driving technology to a USB device, used an enterprise-grade ephemeral messaging application to conceal relevant communications from discovery, and used other collaboration tools to build a rival startup based on the misappropriated technology.

LEGAL CHALLENGES AND GAPS

Robust trade secret programs can be particularly challenging to implement in the AI industry due to the complexity and integration of AI technologies.

Identifying Misappropriation. In remote and AI-enabled environments, proving access and intent becomes harder. Digital footprints may be dispersed across cloud platforms and personal devices, complicating forensic investigations. In Legend Biotech USA Inc. v. Liu (D.N.J. 2024), Legend alleged that Liu emailed its trade secrets and confidential information to his personal Gmail account, and Legend terminated his employment shortly after these transfers. The court ordered Liu to deliver all of his devices, online accounts, and hard copy documents for forensic inspection.

Jurisdictional Complexities. Remote workers may operate across multiple states or countries. As a result, enforcing non-disclosure agreements or pursuing DTSA claims can involve complex jurisdictional analysis and conflict-of-law questions. In Millennium Grp. of Delaware, Inc. v. Mikkola (D.N.J. 2024), Mikkola, a remote employee based in Texas, allegedly accessed the confidential information of his former employer, TMG, headquartered in New Jersey, via external storage devices on several occasions after his employment with TMG ended. The court found that New Jersey had personal jurisdiction over Mikkola.

Limits of Confidentiality Agreements. Many confidentiality agreements were drafted before the rise of AI tools or widespread remote work. Such agreements may not clearly prohibit uploading sensitive data to generative AI platforms or account for BYOD (bring your own device) policies. MGA Home Healthcare Colorado, LLC v. Thun (D. Colo. 2023) involved MGA suing a former employee who had signed a confidentiality agreement, Thun, under the DTSA and related Colorado laws, for allegedly saving MGA confidential information to his personal cellphone in violation of MGA’s BYOD policy. When MGA terminated Thun’s employment, Thun confirmed that he did not possess or control any of MGA’s property, “including, but not limited to [MGA’s] documents, materials, computer disks and other records.” Thun moved to dismiss the case, but the court found that MGA had taken reasonable measures to preserve its trade secrets and other confidential information by storing the information on a “password protected, internal network” with access given only to select employees; requiring employees to agree to maintain the confidentiality of the information, including after termination; and maintaining a BYOD policy that allowed access to confidential information but prohibited downloading the data onto the accessing device. MGA therefore plausibly alleged that Thun breached his confidentiality agreement by downloading and saving confidential information from MGA’s secure network to his personal cell phone.

Over-reliance on Non-Compete Provisions. In jurisdictions that restrict non-compete agreements (like California), companies must rely primarily on trade secret law to prevent unfair competition. Courts have also increasingly scrutinized non-compete agreements since the FTC’s 2024 final rule banning most non-competes, even though federal courts have since blocked that rule from taking effect.

BEST PRACTICES

There is no universal approach for companies to address the risks described above. Trade secret owners should consider a multifaceted approach, which may include some or all of the following:

  • Update Policies and Agreements: Revise existing intellectual property policies, non-disclosure agreements (NDAs), and employment agreements to explicitly prohibit unauthorized AI usage and clarify remote work data handling expectations. These agreements should also explicitly prohibit reverse engineering, including AI-assisted methods.
  • Increase Awareness, e.g., with Employee Training: Educate employees on what constitutes a trade secret and the consequences of misappropriation. Train staff about the risks of uploading sensitive data to AI tools and collaboration platforms. Ensure that your technology teams understand how AI coding tools (e.g., GitHub Copilot) can unintentionally leak confidential code.
  • Secure your Remote Workspace. Remind your personnel to take work calls with headphones; store confidential or privileged documents securely, for example, in a locked filing cabinet; and talk with anyone sharing the remote workspace to make sure they are aware of their duties of trust and confidence.
  • Use Technology Controls. Implement endpoint protection, VPNs, multi-factor authentication (MFA), and device encryption. Use data loss prevention (DLP) tools to monitor data movement and flag unusual behavior.
  • Restrict AI Interactions. Limit or sandbox employee access to AI tools when handling sensitive data. Use enterprise AI platforms with strict privacy guarantees and audit trails.
  • Implement Consistent Exit Protocols for Departing Employees. Conduct thorough offboarding, including a review of devices and access logs. Use forensic tools to check for recent downloads or transfers of confidential data. Monitor departing employees through exit interviews and by tracking subsequent employment.
  • Use Access Control and Segmentation. Enforce the principle of least privilege: only provide access to trade secrets on a need-to-know basis. Monitor access logs and perform regular audits of sensitive document access. Create visitor logs when meetings are held with people outside an organization and memorialize what was discussed including any trade secrets shared.
  • Document Your Trade Secrets and Access to Them. Maintain detailed trade secret access logs and associated documentation to aid in any future DTSA or state trade secret litigation. Regularly review your trade secret strategy with counsel to ensure swift action when misappropriation is suspected.
  • Use Technical Safeguards for AI Systems. Control the data used for training and inference, employ advanced encryption, and audit AI model outputs for inadvertent disclosures. As evidenced by cases like West Technology Group LLC et al. v. Sundstrom (D. Conn. 2024), where an employee allegedly used an unauthorized AI program to record confidential meetings, clear policies on AI usage and comprehensive employee training are paramount.
  • Watermark Proprietary Datasets and Models. Watermarking helps trace unauthorized use of proprietary datasets and models.
  • Audit Third Party Vendors. Keep tabs on company vendors, especially those that are using AI tools that could absorb confidential inputs and leak them elsewhere.
  • Monitor Public Releases. Consider using automated tools to detect when competitors release functionality that closely mirrors your proprietary system or software.
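The access control, least privilege, and audit logging practices above can be sketched in a few lines of code. The example below is a minimal illustration under stated assumptions: the users, document names, and access list are hypothetical, and a real deployment would tie into an identity provider and a tamper-evident log rather than in-memory structures.

```python
from datetime import datetime, timezone

# Hypothetical need-to-know access list: each document maps to the only
# users permitted to open it. Names and documents are illustrative.
ACCESS_LIST = {
    "formula-x": {"alice", "bob"},                     # trade secret: restricted
    "press-kit": {"alice", "bob", "carol", "dave"},    # public-facing material
}

# Every attempt is recorded, granted or not, to support later forensics.
audit_log: list[dict] = []

def request_access(user: str, document: str) -> bool:
    """Grant access only on a need-to-know basis and log every attempt."""
    granted = user in ACCESS_LIST.get(document, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "document": document,
        "granted": granted,
    })
    return granted
```

Recording denied attempts alongside granted ones is deliberate: as cases like TileBar and Legend Biotech illustrate, access records are often the evidence that both establishes “reasonable measures” and identifies the misappropriation itself.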

CONCLUSION

Protecting trade secrets in the age of remote work and artificial intelligence demands a proactive and evolving strategy. The very tools that enhance innovation—virtual collaboration and AI—can become liabilities if misused or poorly managed. By modernizing legal agreements, leveraging technology solutions, and fostering a culture of security awareness, companies can mitigate these risks.

Case law continues to evolve in this space, and courts are increasingly sympathetic to plaintiffs that demonstrate both reasonable protective measures and prompt enforcement actions. As work continues to decentralize and AI tools grow in sophistication, the burden is on organizations to ensure that their most valuable information assets remain truly secret.
