Due Diligence for Clients: Questions to Ask Your Lawyer About Their Use of Generative AI


Jordan Ellis
2026-05-15
22 min read

A client checklist for vetting law firms on generative AI, confidentiality, human review, liability, and ethical safeguards.

Generative AI is no longer a niche experiment in legal services. Recent industry reporting indicates that a substantial share of law firm attorneys are already using it, and adoption continues to accelerate. For clients, that creates a practical diligence question: not whether your firm uses AI, but how it uses it, who reviews it, and what protections apply when the output touches your matter. If you are a business buyer or small company selecting counsel, your goal is straightforward: understand who can access your information, confirm that the firm works to a disciplined process, and make sure AI use does not quietly weaken confidentiality, competence, or accountability.

This guide is a client-facing checklist for law firm due diligence. It is designed to help you ask better questions about generative AI, client confidentiality, model disclosure, attorney competence, AI policy, risk allocation, and the ethical use of AI in legal work. It also shows how to evaluate the answers you receive, so you can compare firms on more than just brand name and hourly rate. In the same way a procurement team would not buy software without testing controls, a legal buyer should not retain counsel without understanding how the firm governs AI use. For a useful contrast in evaluation discipline, see how teams build an enterprise AI evaluation stack and how others approach testing AI-generated output safely.

Why AI Due Diligence Matters When You Hire Counsel

Legal AI use can improve speed, consistency, and research coverage, but it can also introduce new failure modes. A model may hallucinate citations, miss jurisdiction-specific exceptions, or summarize a document in a way that strips away a critical limitation. Those errors are especially consequential in legal work because clients often rely on counsel’s output to make decisions, sign contracts, send demands, or negotiate under time pressure. When a firm uses AI, you are not just buying legal analysis; you are also buying the firm's control environment, review standards, and quality assurance process.

Clients should understand that “we use AI” can mean very different things. At one end, a firm may use AI only for administrative drafting, formatting, and non-substantive brainstorming. At the other, a firm may use it for legal research, issue spotting, first-draft memos, contract review, or litigation strategy support. Those are not equivalent uses, and the risk profile changes materially when AI is allowed to touch privileged material or client-specific advice. For analogies on matching tools to use cases, consider how decision-makers compare certified pre-owned, private-seller, and dealer purchases: the label matters less than the actual assurances behind it.

Confidentiality is not automatic

Client confidentiality is one of the first questions to ask because AI workflows can create invisible data paths. If a lawyer pastes your contract, dispute summary, diligence notes, or internal strategy into a public or semi-public model, the firm must be prepared to explain what the provider does with the prompt, whether it trains on inputs, where data is stored, and whether access is restricted by contract and technical controls. You are not being difficult by asking; you are performing standard enterprise due diligence. A disciplined firm should be able to explain how it aligns AI use with prompt governance, data handling, and privilege protection.

This is especially important for small businesses that may have fewer internal controls than larger enterprises. If your company regularly shares NDAs, pricing data, payroll records, customer complaints, or M&A documents, even a short prompt can contain sensitive information. Ask whether the firm uses enterprise-grade accounts, whether prompts are isolated, whether retention is disabled or minimized, and whether the provider can use client data to improve the model. If the firm cannot answer in clear, non-technical language, that itself is a risk signal. Buyers who care about privacy should think as carefully about legal workflows as they do about buying a secure VPN or hardening a system through release controls.

Competence now includes supervision of AI tools

Attorney competence has always included staying current with the tools used to serve clients. In an AI environment, competence increasingly means knowing when a model is appropriate, when it is not, and how to verify the output before relying on it. A lawyer who uses AI but does not validate the results may be exposing the client to avoidable mistakes. That does not mean AI is inherently unsafe; it means the supervising attorney must treat it as a tool requiring oversight, not as a substitute for judgment.

A useful benchmark is whether the firm has translated policy into practice. Some firms may publish a sleek AI statement but fail to train lawyers on prompt hygiene, citation checking, or the handling of confidential information. Others may have robust review checklists but no documentation. Neither gap is acceptable: a policy is only worth relying on if the controls behind it are real, repeatable, and documented. For a parallel example of policy-to-practice translation, see how teams move from certification to workflow in turning CCSP concepts into developer CI gates or how organizations implement structured training and KPIs.

The Core Questions to Ask in Lawyer Selection

What exact AI tools do you use on client matters?

Start with specifics. Ask the firm to identify the generative AI products, modules, or integrated features it uses in client work. The answer should distinguish between consumer-grade tools, enterprise subscriptions, embedded features in research platforms, and internal systems built on third-party models. You are looking for a practical inventory, not marketing language. Firms that truly understand their stack should be able to say which tools are used for drafting, which are used for research, and which are prohibited from touching client information.

Follow up by asking whether the firm uses the same model across all matters or selects tools by matter type, jurisdiction, or risk level. For example, a routine corporate memo may justify a different workflow than a high-stakes investigation, regulated industry matter, or litigation strategy review. This is similar to how operations teams map technology choices to use cases rather than buying one tool for everything. For additional context on tool selection and operational governance, the comparison in warehouse automation technologies is a useful reminder that capability without workflow design creates risk.

Do you disclose model use in your engagement terms?

Model disclosure should not be buried in a vague policy nobody reads. Ask whether the firm discloses AI use in the engagement letter, outside counsel guidelines, statement of work, or vendor schedule. If disclosure exists, read it carefully. It should explain whether AI may be used for research, drafting, summarization, review, or administrative support, and whether client consent is required for any category of use. The most useful disclosures are operational: they tell you what happens, under what controls, and with what limits.

Ask whether the firm gives clients the option to opt out of certain AI uses. A sophisticated client base may want different levels of consent depending on the sensitivity of the matter. Some clients may allow AI for formatting but not for legal analysis; others may permit AI-assisted research if all citations are verified. The point is to turn “disclosure” into a genuine control, not a symbolic gesture. This is similar to the difference between broad product descriptions and credible, limited claims in a market context such as how owners market unique homes without overpromising.

What is your human-review policy?

This is one of the most important questions on the list. Ask whether a licensed attorney reviews all AI-assisted work product before it is sent to you, filed, or used in a negotiation. Ask what the review includes: citation checking, legal analysis validation, factual verification, red-flag review, and style/formatting review. If the firm answers, “Everything is reviewed by a lawyer,” ask what that review looks like in practice and how it is documented.

Human review should be measured, not assumed. A real policy will define who reviews the work, when review happens, and what triggers escalation. For example, a first-draft research memo may require line-by-line verification of authorities, while a business correspondence draft may require a lighter substantive review. If the firm cannot describe those distinctions, it likely has not operationalized the policy. In other high-accountability fields, teams use explicit evaluation layers; for a useful analogy, see how analysts use structured methods in mapping analytics types or how creators convert research into repeatable outputs in turning analyst insights into content series.

How to Evaluate Confidentiality, Security, and Data Handling

Where does your data go, and who can access it?

Ask the firm to explain the full data path from prompt to storage to deletion. If your information is entered into a cloud-based model, who is the provider? Is the provider contractually barred from training on your content? Are logs retained, and if so, for how long? Is access limited to authorized firm personnel, or can the provider’s support team inspect content in certain circumstances? These are not abstract questions; they determine whether client data remains controlled or becomes part of a broader vendor ecosystem.

You should also ask whether the firm segregates matters by client, by workspace, or by tenant. Segregation matters because a control breakdown in one matter should not contaminate another. If the firm handles sensitive disputes, M&A, trade secrets, or regulated data, you may want to ask whether it uses private instances, enterprise enclaves, or approved tools only. The logic is the same as secure orchestration in other domains, such as embedding identity into AI flows and ensuring access follows the right user, not just the right login.

Do you prohibit client confidential information from training the model?

There is a major difference between using a tool that processes your input for immediate output and using a tool that retains your input to improve future models. Ask whether the firm has a written ban on entering client confidential information into consumer systems that may reuse prompts for training. Ask whether internal rules distinguish between public information, client-sensitive information, privileged content, and highly restricted data. A credible policy should define those categories plainly.

Then ask how the firm trains lawyers on those distinctions. Written policies are only useful when staff understand them. A small-business client should not have to assume everyone at the firm knows the difference between confidential material and non-sensitive marketing copy. Better firms will have examples, approved use cases, and a process for seeking permission. This kind of layered governance mirrors the careful risk thinking used in ethical AI instruction and in systems that require measurable controls rather than assumptions.

What is your retention and deletion policy for AI inputs and outputs?

AI prompts and outputs can become shadow records. Ask whether the firm stores prompts, output drafts, and intermediate work products in the matter file, in a knowledge system, or nowhere at all. Ask who can retrieve them later and under what circumstances. If the firm uses AI to help draft advice, the retained record may matter in a dispute about what was said, who reviewed it, or how an issue was analyzed.

Deletion matters too. Does the firm delete prompts after use? If the system retains logs, can the firm request deletion? What happens on matter closure? Does retention align with the firm’s records policy, or is the AI vendor’s default governing the lifecycle? Strong answers usually sound like records management, not improvisation. For a broader perspective on managing business risks tied to hidden inputs, see how teams model changing cost structures in pricing and contract decisions.

Questions About Accuracy, Hallucinations, and Professional Responsibility

How do you verify citations and authorities?

One of the most visible risks in generative AI is fabricated or inaccurate citations. Ask whether the firm requires lawyers to verify every case, statute, regulation, or quotation generated by AI before use. Ask whether they cross-check against primary sources or trusted research databases. If the firm uses AI in legal research, the verification method should be particularly concrete: source checking, quotation checking, jurisdiction checking, and recency checking.

Clients should be wary of firms that treat citation errors as trivial because the downstream consequences can be severe. A single false authority can waste time, undermine a negotiation, or damage credibility with a court. In that sense, AI verification should be as disciplined as any other high-stakes quality control process. For practical comparison, the logic resembles the safeguard mindset behind explainable AI and the review discipline in safe testing of AI-generated SQL.

What happens if the model is wrong?

This question forces the firm to explain its escalation path. If AI produces an incorrect legal analysis, who catches it, and how? Does the firm have a second-review requirement for high-risk deliverables? Does it use issue-spotting checklists, fallback research processes, or escalation to a partner? Good firms will describe a layered process. Weak firms will say, in effect, that the lawyer “just knows what to look for,” which is not a sufficient control in an AI-assisted workflow.

Ask for examples of the kind of errors the firm has seen and how they were corrected. Real-world experience matters because it demonstrates whether the firm is learning from mistakes. You are not looking for a confession; you are looking for maturity. A firm that has updated its policy after a near miss is generally more trustworthy than one that claims it has never needed to refine anything. This is similar to the way operators learn from post-mortems in AI ambition and fiscal discipline conversations: progress comes from controls, not slogans.

How do you decide when not to use AI?

The strongest AI policies do not just say where AI may be used; they also say where it must not be used. Ask whether the firm has prohibited or restricted AI for privileged strategy, highly sensitive investigations, novel questions of law, jurisdictions with uncertain authority coverage, or matters where absolute precision is essential. Ask whether the rule changes if the matter is time-sensitive, regulated, or adversarial. A firm that can articulate these limits is usually more sophisticated than one that promotes AI as a universal accelerator.

This is a crucial diligence point for businesses that need reliable advice under pressure. If your matter involves employee discipline, trade secrets, regulatory correspondence, or a potential lawsuit, you want counsel whose restraint is matched by judgment. That same selective discipline appears in high-performing operational systems, where teams decide what to automate and what to keep manual. For a useful analogy, see hybrid workflows and performance optimization that prioritize stability over novelty.

Who bears the risk if AI-assisted advice causes harm?

Clients often focus on output quality and forget to ask who bears the downside if AI-assisted work is wrong. You should ask whether the firm’s engagement letter, outside counsel terms, or master services agreement addresses liability for AI use. Does the firm disclaim responsibility for third-party model errors? Does it stand behind the work product regardless of the tools used? The answer matters because AI does not eliminate professional responsibility; it changes how that responsibility is exercised in practice.

In negotiation, try to avoid language that shifts all model risk to the client while preserving the firm’s discretion to use AI however it wants. If the firm wants the efficiency benefits, it should also be willing to accept corresponding obligations to supervise, verify, and correct. This is the same commercial logic buyers use when assessing productization or reviewing royalty and consolidation risk: value and risk should be allocated together, not separately.

Should the engagement letter include AI-specific representations?

Yes, especially for material matters. Consider asking for written representations that the firm will: use approved tools only; keep client information out of training datasets where possible; maintain human supervision; verify cited authorities; and notify the client if its AI posture changes materially during the engagement. These promises can be tailored to the sensitivity of the matter and the sophistication of the client. Even a small company can ask for clear, plain-English protections.

You may also want a clause requiring notice before a new AI vendor is introduced into the workflow. That is particularly important if the firm relies on a mix of commercial tools and internal systems. If a new vendor changes data handling, geography, subprocessor access, or retention, your risk changes too. In procurement terms, this is similar to insisting on change control when a platform updates its operating model, as discussed in scaling AI as an operating model.

Can we negotiate indemnities or carve-outs?

Depending on the matter size and leverage, you may be able to negotiate indemnities or specific carve-outs if the firm violates agreed AI rules, mishandles confidentiality, or uses unapproved systems. In practice, some firms will resist broad indemnity language, but they may accept tailored remedies for unauthorized disclosures, material breaches of policy, or use of prohibited tools. If indemnity is not available, consider narrower contractual remedies, notice obligations, or audit rights. The key is to avoid ambiguity about what happens when AI governance fails.

For clients with concentrated risk exposure, especially startups, lenders, or operating businesses with sensitive data, contract specificity is not overkill. It is a control layer. In the same way that verified reviews reduce consumer uncertainty in service marketplaces, AI-specific contract language reduces uncertainty in legal procurement. The contract should make the firm’s promises testable.

A Practical Client Checklist for Selecting Counsel

Use this checklist before you sign

Before retaining a firm, ask for its written AI policy, client-facing disclosure language, and human-review process. Then compare those documents against what you were told in meetings. If the answers are inconsistent, ask follow-up questions until they align. A mature firm will welcome the diligence because it demonstrates that you understand your own risk.

It is also wise to ask for a sample redacted workflow. You do not need privileged detail, but you can request a description of how an AI-assisted memo is prepared, reviewed, and delivered. That can reveal whether the firm is relying on individual habits or a repeatable governance system. Buyers who want operational rigor should think of this process like due diligence on a vendor stack, not a casual intake call. For a comparable mindset on structured operations, see hybrid event design and platform integration.

Red flags that should slow you down

Several answers should cause caution. Be careful if the firm says it has “no policy” but “everyone uses common sense.” Be cautious if it cannot name the model providers it uses. Treat it as a serious issue if the firm refuses to explain human review, will not discuss retention, or says client confidentiality is “not a problem” without describing controls. A firm that cannot explain its own AI workflow is unlikely to manage yours carefully.

Other red flags include overconfidence, vague assurances, and the assumption that all AI use is either harmless or fully covered by malpractice insurance. Neither assumption is safe. Insurance may respond after a loss, but it does not prevent the loss. For a buyer-facing example of why process matters as much as product, review the approach to home office efficiency: constraints are best managed early, not after the fact.

How to compare firms on AI governance

When you compare firms, score them on the same criteria: disclosure clarity, tool transparency, data handling, human review, training, escalation, and contractual protections. You are not just choosing legal talent; you are choosing a control framework. A firm with slightly higher rates but stronger governance may be the cheaper option over time if it reduces the chance of rework, delay, or disclosure risk.

To help visualize that comparison, use the table below as a procurement aid. It is intentionally simple and designed for business buyers who want to compare firms quickly without losing nuance.

Due Diligence Area | Strong Answer | Weak Answer | Why It Matters
Model disclosure | Named tools, use cases, and matter-level rules | “We use AI sometimes” | Lets you assess actual exposure
Client confidentiality | Enterprise controls, no training on client data, clear retention limits | “The vendor is secure” | Confirms how data is handled
Human review | Documented attorney review before delivery | “A lawyer looks at it” | Shows quality control exists
Accuracy checks | Source verification and citation validation required | “The model is usually right” | Reduces hallucination risk
Risk allocation | AI-specific obligations, notice, and remedies in contract | Generic terms only | Clarifies who pays if something goes wrong
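
If your team prefers to formalize the comparison, the scoring can be reduced to simple weighted arithmetic. The sketch below is a minimal illustration in Python, assuming made-up criterion weights and firm ratings that you would replace with your own; it is not a prescribed methodology, just a way to keep the comparison consistent across firms.

# Minimal illustrative sketch: weighted scoring of firms on AI governance criteria.
# Weights and ratings below are assumptions for demonstration; adjust to your priorities.

CRITERIA_WEIGHTS = {
    "disclosure_clarity": 2,
    "tool_transparency": 2,
    "data_handling": 3,
    "human_review": 3,
    "training": 1,
    "escalation": 2,
    "contractual_protections": 2,
}

def score_firm(ratings: dict) -> int:
    # Ratings use 0 = weak, 1 = partial, 2 = strong for each criterion.
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

# Hypothetical ratings drawn from each firm's written answers.
firms = {
    "Firm A": {"disclosure_clarity": 2, "tool_transparency": 2, "data_handling": 1,
               "human_review": 2, "training": 1, "escalation": 1, "contractual_protections": 2},
    "Firm B": {"disclosure_clarity": 1, "tool_transparency": 0, "data_handling": 2,
               "human_review": 1, "training": 0, "escalation": 1, "contractual_protections": 1},
}

for name, ratings in sorted(firms.items(), key=lambda item: score_firm(item[1]), reverse=True):
    print(f"{name}: {score_firm(ratings)} / {2 * sum(CRITERIA_WEIGHTS.values())}")

A higher total is not a verdict on legal quality; it simply makes the governance comparison explicit enough to discuss with the firms themselves.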

How Small Businesses Can Ask These Questions Efficiently

Keep the checklist short but specific

Small businesses often do not have the time or leverage to negotiate pages of AI language. That does not mean they should skip diligence. A concise written questionnaire can cover the essential points: which tools are used, whether client data is excluded from training, whether human review is mandatory, how citations are checked, and what notice the client receives if the AI setup changes. Even a one-page intake questionnaire can reveal a lot about the firm’s maturity.

If you are a smaller buyer, focus first on the highest-risk issues: confidential data, legal accuracy, and who signs off on the work. A clear answer on those three points will tell you more than a polished pitch deck. Think of it as the legal equivalent of choosing a service provider based on verified operational reliability rather than branding. The same principle appears in markets where trust is built by consistent process, such as choosing a pediatrician or choosing a plumber through verified reviews.

Document the answers you receive

Do not rely on memory. Save the firm’s responses, policy excerpts, and any engagement letter language that references AI. If there is ever a dispute later about what was disclosed, what consent was given, or whether a specific tool was approved, your file will matter. Documentation also helps if you change firms or expand the scope of the relationship later. It turns a verbal reassurance into something procurement and counsel can actually review.

For businesses that work across multiple stakeholders, documentation also makes internal approvals easier. Your operations team, finance lead, or founder can review a concise record of what was promised. This is especially useful where AI use intersects with other commercial controls like security, pricing, and vendor management. In that regard, the discipline resembles how teams structure accessory decisions or manage daily-practicality tradeoffs: the details determine the outcome.

Conclusion: Treat AI Use as a Governance Question, Not a Marketing Question

The best firms can explain their AI controls plainly

A law firm’s use of generative AI is not inherently good or bad. The difference between a safe deployment and a risky one usually comes down to governance: what tools are used, what data they can touch, who reviews the output, and how accountability is allocated when something goes wrong. Clients who ask these questions are not being adversarial. They are behaving like prudent buyers who understand that the delivery model matters as much as the deliverable.

The strongest firms will answer directly, show their policy, and be willing to adapt the engagement terms where appropriate. They will not hide behind buzzwords or vague assurances. That is the standard you should expect whether you are hiring a solo practitioner or a large regional firm. And if a firm cannot explain its own AI posture clearly, it is reasonable to keep looking.

Use this guide as part of your vendor selection process

As AI becomes more embedded in legal practice, client diligence will increasingly resemble enterprise procurement. The questions in this article can help you compare firms on the metrics that matter: confidentiality, transparency, review rigor, liability, and ethical use. Use them early, use them consistently, and insist on written answers for anything material. That approach will not eliminate every risk, but it will make those risks visible, which is the first step toward controlling them.

For a broader lens on how organizations adapt to new technology without losing discipline, you may also find value in guides like scaling AI as an operating model, building an AI evaluation stack, and teaching AI ethically. In legal services, as in any high-stakes profession, trust is earned by process.

Pro Tip: Ask for the firm’s AI policy before the first substantive strategy meeting. If the firm hesitates, that hesitation is data.

FAQ: Client Due Diligence on Lawyer Use of Generative AI

1) Is it reasonable to ask a law firm whether it uses generative AI?

Yes. It is a normal procurement and risk question. You are entitled to understand whether AI is used on your matters, how it is supervised, and whether your information is protected. Firms that use AI responsibly should be prepared to answer these questions clearly.

2) What is the most important question to ask first?

Start with: “What client information, if any, is entered into generative AI tools, and how is it protected?” That question goes directly to confidentiality, training use, retention, and vendor access, which are often the highest-risk issues.

3) Should I ask whether a lawyer personally reviews all AI-generated work?

Yes. Human review is essential. Ask whether a licensed attorney verifies legal authorities, factual claims, and legal conclusions before anything is delivered or filed. If the answer is vague, ask for a specific workflow.

4) Can I request that AI not be used on my matter?

Often, yes. Many firms can accommodate client preferences, especially for sensitive matters. You can request an opt-out from certain AI uses or a more limited approval structure. Whether the firm agrees may depend on the type of work and the contract terms.

5) What contract language should I look for?

Look for disclosure of AI use, commitments around confidentiality, human review obligations, notice before introducing new tools, and language addressing liability or remedies if AI governance fails. For higher-risk engagements, ask for AI-specific representations rather than generic boilerplate.

6) What if the firm says its AI use is covered by malpractice insurance?

Insurance is not a substitute for governance. It may help after a loss, but it does not prevent confidentiality breaches, citation errors, or unauthorized disclosures. You should still ask about policy, supervision, and contract protections.

Related Topics

#client-advice #AI-ethics #legal-services

Jordan Ellis

Senior Legal Content Strategist

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
