With increasing generative AI adoption across the legal profession, prioritizing robust security and privacy measures is critical. Before using any generative AI tool, lawyers must fully understand the underlying technology, beginning with thorough due diligence of legal tech vendors.
In July 2024, the American Bar Association issued Formal Opinion 512, which provides guidance on the proper review and use of generative AI in legal practice. The opinion underscores several ABA Model Rules of Professional Conduct implicated by lawyers’ use of generative AI tools, including the duties to provide competent representation, keep client information confidential, communicate generative AI use to clients, properly supervise subordinates in their use of generative AI, and charge only reasonable fees.
Even before deploying generative AI tools, however, lawyers must understand a vendor’s practices. This includes verifying vendor credentials and fully reviewing policies related to data storage and confidentiality.
According to Formal Opinion 512, “all lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms and policies of any GAI tool they use to learn who has access to the information that the lawyer inputs into the tool or consult with a colleague or external expert who has read and analyzed those terms and policies.” Lawyers may also need to consult IT and cybersecurity professionals to understand terminology and assess any potential risks.
In practice, this means carefully reviewing vendor contract terms related to a vendor’s limitation of liability, understanding if a vendor’s tool “trains” on your client’s data, assessing data retention policies (before, during, and after using the tool), and identifying appropriate notification requirements in the event of a data breach.
To further explore these ethical guidelines in practice, we spoke with legal technology executives about the security and privacy measures they implement, as well as best practices for lawyers when evaluating and vetting legal tech vendors.
What security measures do you take to protect client data?
- Troy Doucet, Founder @ AI.Law
We take the security measures enterprises expect, including SOC 2 compliance, HIPAA compliance, and robust encryption of data at rest and in transit. We also follow ABA guidance on AI, including maintaining confidentiality, not training our models on our users’ data, and making clear that we do not own the data users input.
- Jordan Domash, Founder & CEO @ Responsiv
The foundation must be the traditional security and privacy controls that have always been important in enterprise software. On top of that, we’ve built a de-identification process to strip out PII and corporate-identifiable content before processing by an LLM. We have also committed to never access or train on client questions and content.
- Michael Grupp, CEO & Co-founder @ BRYTER
We have an entire team focused on security and compliance, so the answer is, of course, all of them: SOC 2 Type II, ISO 27001, GDPR, CCPA, the EU AI Act, etc. And BRYTER does not use client data to develop, train, or fine-tune the AI models we use.
- Gil Banyas, Co-Founder & COO @ Chamelio
Chamelio safeguards client data through industry-standard encryption, SOC 2 Type II certified security controls, and strict access management with multi-factor authentication. We maintain zero data retention arrangements with third-party LLMs and employ continuous security monitoring with ML-based anomaly detection. Our comprehensive security framework ensures data remains protected throughout its entire lifecycle.
- Khalil Zlaoui, Founder & CEO @ CaseBlink
Client data is encrypted in transit and at rest, and is not used to train AI models. We enforce a strict zero data retention policy: no user data is stored after processing. A SOC 2 audit is nearing completion to certify that our security and data handling practices meet industry standards, and customers can request permanent deletion of their data at any time.
- Dorna Moini, CEO & Founder @ Gavel
Gavel was built for legal documents, so our security standards exceed those typical of software platforms. We use end-to-end encryption, private AI environments, and enterprise-grade access controls—backed by SOC 2 compliance and third-party audits. Client data is never used for training, and our retention policies give firms full control, ensuring compliance and peace of mind.
- Ted Theodoropoulos, CEO @ Infodash
Infodash is built on Microsoft 365 and Azure and deployed directly into each customer's own tenant, which means we host no client data whatsoever. This unique architecture ensures that law firms always maintain full control over their data. Microsoft’s enterprise-grade security includes encryption at rest and in transit, identity management via Azure Active Directory, and compliance with certifications like ISO/IEC 27001 and SOC 2.
- Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Wisedocs uses services that implement strict access controls, including role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to prevent unauthorized access to your data. Our organization employs configurable data retention policies as agreed upon with our clients. Wisedocs has achieved SOC 2 Type II attestation and maintains an established information security and privacy program in accordance with SOC 2, HIPAA, PIPEDA, and PHIPA, supported by annual risk assessments and continual vulnerability scans.
- Daniel Lewis, CEO @ LegalOn
Security and privacy are top priorities for us. We are SOC 2 Type II, GDPR, and CCPA compliant, follow industry-standard encryption protocols, and use state-of-the-art infrastructure and practices to ensure customer data is secure and private.
- Gila Hayat, CTO & Co-Founder @ Darrow
Darrow works mostly in the open-web realm, utilizing as much publicly available data as possible to surface potential matters. Our clients’ confidentiality and privacy are a must, so we adhere to security standards and regulations and collect as little data as possible to maintain trust. We take client confidentiality and privacy very seriously.
- Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
We exclusively use reputable, secure providers and AI models that never store or log data, with no human review or monitoring permitted. All vendors are contractually bound to ensure data is never retained or used for training in any form. This, in combination with ISMS certifications and adherence to industry standards, ensures robust data security and privacy.
- Gary Sangha, CEO @ LexCheck Inc.
We are SOC 2 compliant and follow rigorous cybersecurity standards to ensure client data is protected. Our AI tools do not retain any personally identifiable information (PII), and all data processing is handled securely within Microsoft Word, leveraging Azure’s built-in data protection. This ensures client data remains encrypted, confidential, and under the highest level of enterprise-grade security.
- Tom Martin, CEO & Founder @ Lawdroid
As a lawyer myself, I understand the fiduciary responsibility we have to handle our client data responsibly. At LawDroid, we use bank-grade data encryption, do not train on your data, and provide you with fine-grained control over how long your data is retained. We also just implemented browser-side masking of personally identifiable information to prevent it from ever being seen.
Lawyers are very concerned about data privacy. What would you tell a lawyer who doesn't use legal-specific AI tools due to privacy concerns?
- Troy Doucet, Founder @ AI.Law
You have control over what you input into AI, so do not input data that you are not comfortable sharing. AI products also vary in functionality, which means different levels of concern. For example, asking AI about the difference between issue and claim preclusion is a low-risk event, versus mentioning where Jonny buried mom and dad in the woods.
- Jordan Domash, Founder & CEO @ Responsiv
You're right to be skeptical and to consider a vendor critically before giving them confidential or privileged information! The risk is vendor-specific, not categorical. The right vendor designs the platform with robust data privacy measures in mind.
- Michael Grupp, CEO & Co-founder @ BRYTER
We have been working with the biggest law firms and corporates for years, and we know that trust is earned, not given. So first, we aim to be over-compliant, which means agreements with providers to protect attorney-client privilege. Second, we make compliance transparent. Third, we provide references from those who are already advanced in the journey.
- Gil Banyas, Co-Founder & COO @ Chamelio
Adopting new technology inevitably involves some privacy trade-offs compared to staying offline, but this calculated risk enables lawyers to leverage significant competitive advantages that AI offers to legal practice. Finding the right risk-reward balance means embracing innovation responsibly by selecting vendors who prioritize security, maintain zero data retention policies, and understand legal confidentiality requirements. Success comes from implementing AI tools strategically with appropriate safeguards rather than avoiding valuable technology that competitors are already using to enhance client service.
- Khalil Zlaoui, Founder & CEO @ CaseBlink
Not all AI tools treat data the same, and legal-specific platforms like ours are built with strict safeguards and guardrails. Data is never used to train models, and everything is encrypted, access-controlled, and siloed. Only clients can access their own data. They retain full ownership and control at all times, with the ability to keep information private even across internal teams.
- Dorna Moini, CEO & Founder @ Gavel
With consumer AI tools, your data may be stored, analyzed, or even used to train models—often without clear safeguards. Professional-grade and legal-specific tools like Gavel are built with attorney-client confidentiality at the core: no data sharing, no training on your client data inputs, and full control over retention. Avoiding AI entirely isn’t safer—it’s just riskier with the wrong tools (and that's not specific to AI!).
- Ted Theodoropoulos, CEO @ Infodash
Legal-specific platforms like Infodash are purpose-built with confidentiality at the core, unlike general-purpose consumer AI tools. These solutions are built with the privacy requirements of legal teams in mind. With new competitors like KPMG entering the market, delaying AI adoption poses a real competitive risk for firms.
- Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Legal-specific AI tools are designed to be both secure and transparent, helping legal professionals understand and trust how AI processes their data while maintaining strict privacy controls. With human-in-the-loop (HITL) oversight, AI becomes a tool for efficiency rather than a risk, ensuring that outputs are accurate and reliable. By adopting AI solutions that follow strict security protocols such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance standards, legal teams can confidently leverage technology while maintaining control over their data through role-based access control (RBAC), multi-factor authentication (MFA), and configurable data retention policies.
- Daniel Lewis, CEO @ LegalOn
Ask questions about how your data may be used: will it touch generative AI (where, without the right protections, your content could be displayed to others) or non-generative AI? If it’s being processed by LLMs like OpenAI’s, understand whether your data is being used to train those models; if it’s being used in non-generative AI use cases, understand how. The use of your data might make the product you use better, so consider the risk/benefit trade-offs.
- Gila Hayat, CTO & Co-Founder @ Darrow
Pro-tip for privacy preservation and worry-free experimentation with various AI tools: have a non-sensitive or redacted document or use case ready, one where you already know the expected answers, and benchmark the various tools against it. You get a fair comparison with no stress over leaking real work documents.
- Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
Make sure to use a trusted vendor where no model training or fine-tuning is happening on client input.
- Gary Sangha, CEO @ LexCheck Inc.
Lawyers should first understand what information they are actually sharing when using legal-specific AI tools; often it is not personally identifiable information or sensitive client data. In many cases, you are not disclosing anything subject to confidentiality, especially when working with redlined drafts or standard contract language. That said, if you are sharing sensitive information, it is important to review your firm’s protocols, though depending on what you are sharing, it may not be a concern.
- Tom Martin, CEO & Founder @ Lawdroid
Lawyers should be concerned about data privacy. But, steering away from legal-specific AI tools due to privacy concerns would be a mistake. If anything, legal AI vendors take greater security precautions than consumer-facing tools, given our exacting customer base: lawyers.
For security and privacy purposes, what should lawyers and law firms know about a legal AI vendor before using their product?
- Troy Doucet, Founder @ AI.Law
It is smart to know what a vendor does to protect data, how they use your data, which certifications they hold, and what encryption they employ. However, knowing your own privacy and security needs before using the product is probably the best first step.
- Jordan Domash, Founder & CEO @ Responsiv
I’d start with a traditional security and privacy review process like you’d run for any enterprise software platform. On top of that, I’d ask: Do they train on your data? Do they have access to your data? What is your data retention policy?
- Michael Grupp, CEO & Co-founder @ BRYTER
Even the early-adopters and fast-paced firms ask their vendors three questions: Where is the client data stored? Do you use the firm’s data, or client data, to train or fine-tune your models? How is legal privilege protected?
- Gil Banyas, Co-Founder & COO @ Chamelio
Before adopting legal AI tools, lawyers should verify the vendor has strong data encryption, clear retention policies, and SOC 2 compliance or similar third-party security certifications. They should understand how client data flows through the system, whether information is stored or used for model training, and if data sharing with third parties occurs. Additionally, they should confirm the vendor maintains appropriate legal expertise to understand attorney-client privilege implications and offers clear documentation of privacy controls that align with relevant bar association guidance.
- Dorna Moini, CEO & Founder @ Gavel
I did a post on what to ask your vendors here: https://www.instagram.com/p/C9h5jVYK5Zc/. Lawyers need clear answers on what happens to their data and how it's being used. When choosing a vendor, it's also important to understand output accuracy and the AI product roadmap as it relates to legal work - you are engaging in a marriage to a software company you know will continue to improve for your purposes.
- Ted Theodoropoulos, CEO @ Infodash
Firms should ask where and how data is stored, whether it’s isolated by client, and if it’s used for training. Look for vendors that run on secure environments like Microsoft Azure and support customer-managed encryption keys. Transparency around data flows and integration with existing infrastructure is essential.
- Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Lawyers and law firms should ensure that any legal AI vendor follows strict security protocols, such as SOC 2 Type II, HIPAA, PIPEDA, and PHIPA compliance, along with role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to protect sensitive legal data. They should ensure the AI vendor is not using third-party models or sharing data with AI model providers, and that the deployment of its AI is secure and limited. Additionally, firms should assess whether the AI system includes human-in-the-loop (HITL) oversight to mitigate hallucinations and organizational risks, ensuring accuracy and reliability in legal workflows.
- Gila Hayat, CTO & Co-Founder @ Darrow
When choosing a legal AI vendor, it’s important to make sure it follows top-tier security standards and has a solid track record of protecting data. Don’t forget the contract: make sure it includes strong confidentiality terms so your clients’ data stays protected and compliant. Finally, trust the humans and know the team: the legal tech scene is tight and personal, so hop on a call with a team member to make sure you’re doing business with a trustworthy partner.
- Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
You should understand whether a vendor’s AI models are trained on user data; this is a critical distinction. Vendors that fine-tune or improve their models using client input may pose significant privacy risks, especially if sensitive information is involved. It’s important to evaluate whether specially trained or fine-tuned models offer enough added value to justify the potential trade-off in privacy.
- Gary Sangha, CEO @ LexCheck Inc.
Lawyers and law firms should understand what information they are sharing through the AI tool, as it is often personally identifiable information or subject to confidentiality. They should confirm whether the vendor complies with frameworks like SOC 2, which ensures rigorous controls for data protection, and verify that data is encrypted and securely processed. Reviewing how the tool handles data protection helps ensure it aligns with the firm’s security and privacy policies.
- Tom Martin, CEO & Founder @ Lawdroid
Lawyers need to ask questions: 1) Do you employ encryption? 2) Do you train on data I submit to you? 3) Do you take precautions to mask PII? 4) Can I control how long the data is retained?
By carefully evaluating security credentials, vendor practices, and model usage policies, lawyers can responsibly and confidently employ generative AI tools to improve their delivery of legal services. As these technologies evolve, so will best practices for security and implementation, making it important for lawyers to keep up with industry developments.