Policy Guide

How to Create an AI Usage Policy for Your Law Firm

By Fractal Legal · March 2026

A federal judge just ruled that documents created with consumer AI tools aren't protected by attorney-client privilege. Over 300 judges now require AI disclosure in filings. And 53% of law firms still have no AI policy at all.

If your firm is using AI without a written policy — and statistically, you probably are — you're exposed. Not hypothetically. Right now.

Here's how to fix that.

Why You Need a Policy Yesterday

The legal profession's relationship with AI shifted from theoretical to urgent in about 18 months. Three developments made a written policy non-optional:

1. Privilege is at stake. In US v. Heppner (S.D.N.Y., Feb. 2026), Judge Rakoff ruled that documents prepared using Anthropic's Claude were neither privileged nor work product. The reasoning: by agreeing to the vendor's privacy policy, the user consented to third-party data exposure. If your attorneys are using ChatGPT or Claude's free tier for anything touching client matters, privilege may already be waived.

2. Courts are watching. Since Mata v. Avianca (S.D.N.Y., 2023) — where an attorney submitted six fabricated case citations from ChatGPT — over 300 judges have adopted AI disclosure requirements. In Park v. Kim (2d Cir., 2024), a lawyer was referred for disciplinary proceedings after citing a non-existent case generated by AI.

3. The bar requires it. ABA Formal Opinion 512 (July 2024) says managing partners must establish firm-wide AI policies and provide training. NYC Bar Opinion 2024-5 reinforces this under NY Rules 5.1 and 5.3. New York now requires AI competency CLE credits. These aren't suggestions — they're professional obligations.

The 10 Sections Every AI Policy Needs

A good policy is practical, not theoretical. It tells your people exactly what they can do, what they can't, and what happens when something goes wrong.

1. Scope and Definitions

Define what counts as "AI tools" — generative AI (ChatGPT, Claude, Gemini), legal research platforms (CoCounsel, Harvey), document automation, transcription tools, even email assistants with AI features. If it uses a language model, it's in scope. Specify who's covered: partners, associates, paralegals, legal assistants, IT staff, and outside contractors.

2. Approved Tools List

Maintain a whitelist of vetted, approved tools, and be explicit about what each tool is and is not approved for. Anything not on the list is off-limits until vetted.

Review and update the list quarterly.

3. Confidentiality and Data Security

This is where most firms fail. Be specific: at minimum, prohibit entering any client-confidential information into consumer AI tools. As US v. Heppner shows, consenting to a consumer vendor's privacy policy can waive privilege by itself.

4. Mandatory Verification

Every AI-generated output used in client matters or court filings must be independently verified by a licensed attorney. That means checking every citation against the primary source and confirming every factual assertion — the lesson of Mata v. Avianca. This is non-negotiable.

5. Court Disclosure Compliance

Maintain a current list of courts that require AI disclosure. Your policy should require attorneys to check disclosure requirements before filing in any court, provide a standard AI disclosure certification template, and designate someone responsible for tracking new disclosure orders.

6. Client Communication

ABA Opinion 512 requires disclosure to clients when AI use impacts how their matter is handled. Your policy should specify when to disclose AI use, how to obtain informed consent, and standard language for engagement letters addressing AI.

7. Billing Guidelines

NYC Bar Opinion 2024-5 says you cannot charge clients for time you saved by using AI without disclosure. Address whether AI tool costs are billed as overhead or per-use charges, how to handle dramatically reduced time on AI-assisted tasks, and required transparency when AI significantly reduces work time.

8. Supervision Structure

Assign clear responsibility: a designated AI compliance partner, practice group leads responsible for monitoring AI use, specific supervisory obligations for work delegated to junior attorneys or paralegals using AI, and regular audits of AI tool usage patterns.

9. Training Requirements

ABA Formal Opinion 512 makes AI training a firm-wide obligation, and New York now requires AI competency CLE credits. Require training for everyone covered under Section 1, and refresh it as tools, court rules, and bar guidance evolve.

10. Incident Response

When things go wrong — and they will — your team needs to know exactly what to do: immediate steps when AI-generated errors are discovered in filed documents, notification chain, correction procedures for court filings, documentation requirements for malpractice insurance, and post-incident review process.

The Three Mistakes That Get Firms in Trouble

Mistake 1: The blanket ban. Some firms respond to AI risk by banning it entirely. This doesn't work. 69% of individual lawyers report using AI as of early 2026, up from 31% in 2025. A ban just drives usage underground where you can't supervise it.

Mistake 2: The tool-first approach. Firms buy an enterprise AI license and call it a day. Without a policy, training, and supervision structure, the tool is just a more expensive way to create risk.

Mistake 3: The static policy. Over 1,100 state AI bills were introduced in 2025 alone. Courts are issuing new disclosure requirements monthly. A policy you wrote in 2024 is already outdated. Build in quarterly reviews.

Start Here

If your firm doesn't have an AI policy today, start with these three steps:

  1. Audit current usage. Find out what tools your attorneys and staff are already using. You'll probably be surprised.
  2. Implement an immediate interim policy. At minimum: no client data in consumer AI tools, all citations independently verified, and disclosure in courts that require it.
  3. Build the full policy. Use the 10-section framework above. Get buy-in from partners. Train everyone. Review quarterly.

The firms that figure this out now will have a meaningful competitive advantage. The ones that don't will learn about it from their malpractice carrier.


Fractal Legal helps New York law firms build AI training programs and usage policies that actually work. We handle the research, drafting, training, and ongoing updates so you can focus on practicing law.

Need help building your firm's AI policy?

We draft custom AI usage policies aligned with ABA Opinion 512, NYC Bar guidance, and your firm's specific practice areas and risk profile.

Request Free Assessment