Three Questions Every Legal Team Should Ask Their AI Vendor
- Colin Levy
AI vendor evaluations follow a predictable pattern.
Legal teams build feature comparison matrices. They attend product demos focused on use cases. They negotiate pricing and implementation timelines. The RFP asks about data security, usually in a checkbox section near the back. Then implementation starts, and the foundational questions that should have been asked up front become urgent problems.

Three questions consistently get treated as boilerplate when they should be deal-breakers. These aren't theoretical concerns. They're grounded in what's happening right now with AI adoption in legal practice.
1. Where Is Our Data Stored, and What Are Your Data Residency Options?
Geography matters in legal practice. Data subject to GDPR can't sit on servers wherever it's convenient for the vendor. State bar opinions increasingly address cross-border data flows. Some clients require data to stay in specific jurisdictions.
Yet legal teams regularly discover mid-implementation that their vendor defaults to U.S.-based cloud storage without offering alternatives. Or the vendor provides multi-region options but charges premium fees. Some genuinely can't articulate where data actually lives because their infrastructure spans multiple sub-processors.
This isn't academic. The EU AI Act, which began phased implementation in August 2025, requires providers of general-purpose AI models to disclose detailed information about training data and storage practices. California's AB 2013, effective in 2026, mandates transparency about training datasets, including whether they're stored domestically or internationally.
If your vendor can't give you a straight answer about data residency, or if their only option conflicts with your compliance requirements, you have a fundamental mismatch.
2. Is Our Data Used to Train Your Models, and How Do We Opt Out?
The business models here vary dramatically. Some vendors explicitly state they don't use customer data for training. Others do by default, requiring customers to opt out. A few offer what's called "zero data retention" (ZDR) but only for enterprise customers who specifically request it.
Here's what actually happens: OpenAI and Anthropic both retain API inputs and outputs for 30 days by default for abuse monitoring. Enterprise customers can request ZDR for eligible endpoints, but it's not automatic and typically requires sales engagement. Consumer-tier products operate differently. ChatGPT retains conversation data unless users disable chat history, and even then, data stays for 30 days before deletion.
For legal practice, this creates real ethics questions. Using client data to train a commercial AI model typically requires informed client consent under most state ethics rules. Even if your engagement letters technically permit it, you're creating risk that needs evaluation.
Ask explicitly: Does our data train your models? If yes, is opt-out automatic or do we need to request it? What's the process? Is it permanent and retroactive, or only prospective?
The answers tell you whether the vendor understands legal professional responsibility or views your data as a product development resource.
3. What Happens to Our Data When We Leave?
Vendor relationships end. Better tools emerge, budgets shift, companies get acquired. When that happens, what becomes of your data?
Vendors vary significantly here. Some provide straightforward data export in standard formats (JSON, CSV) and complete deletion within 30 days. Others retain data for extended periods, citing backup obligations or technical limitations. A few make export genuinely difficult, requiring manual processes or custom development work.
This isn't just an inconvenience. California's CCPA and similar privacy laws give individuals rights to data portability and deletion. If your vendor can't or won't facilitate a clean exit, you're potentially exposing clients to ongoing risk even after you've moved on.
Ask for specifics: What formats can we export data in? How long does deletion take after we terminate? Do you provide deletion certificates? What data, if any, persists in backups, and for how long?
If a vendor becomes evasive about exit procedures, that tells you how they view the relationship.
Why This Matters Now
Legal teams are under real pressure to adopt AI. That's appropriate. These tools deliver genuine value. But speed can't mean skipping foundational due diligence.
These three questions surface whether a vendor has thought through the practical realities of serving legal clients. They won't catch every issue. But they'll quickly reveal whether you're dealing with a vendor who understands legal practice or one treating law firms like any other customer segment.
The ABA's Formal Opinion 512 from July 2024 makes this explicit: lawyers using generative AI must understand vendor practices around data storage, retention, and confidentiality before deployment. The opinion specifically notes that competent representation requires verifying vendor credentials and reviewing data handling policies.
We remain responsible for protecting client confidentiality and maintaining data security regardless of what tools we deploy. That responsibility doesn't pause because the technology is new.
Start with these three questions. The answers will tell you most of what you need to know about whether a vendor relationship will work long-term.