Compliance

5 Things Every Lawyer Should Know About AI-Assisted Legal Research

By Fractal Legal · March 2026

AI-powered legal research tools are no longer experimental. They are on associates' desktops, in solo practitioners' browser tabs, and — increasingly — in the crosshairs of judges and disciplinary committees. If you practice law and have not yet developed a clear framework for how your firm uses these tools, you are already behind.

Here are five things every lawyer needs to understand right now.


1. There Is No Single Rule — and That Is the Problem

There is no single, unified rule governing AI use in legal practice. What exists is a patchwork: professional standards reports, ethics guidance, ABA guidance, individual judge standing orders across federal and state courts, and court system interim AI policies.

Each of these sources says something slightly different. Some professional standards do not require client consent before using AI — a more permissive stance than ABA guidance, which demands informed consent that goes beyond boilerplate language in engagement letters. Meanwhile, federal court judges like Vernon Broderick and John Cronan have their own disclosure and certification requirements that vary by courtroom.

What this means for you: Compliance is not a one-time checklist. It requires knowing which authority applies in your specific matter, before your specific judge, in your specific jurisdiction. If your firm handles matters in multiple courts, you need a system for tracking these requirements.


2. Verification Is Not Optional — It Is Your Professional Obligation

Every authoritative source — professional standards, ethics guidance, ABA guidance, and every judge who has issued a standing order — agrees on one point: lawyers must independently verify every citation, quotation, and legal proposition generated by AI.

This is not aspirational guidance. It is an enforceable duty under your ethical obligations, including duties of competence, meritorious claims, and candor toward the tribunal.

The consequences of failing to verify are no longer hypothetical. In Mata v. Avianca (S.D.N.Y. 2023), attorneys were fined $5,000 for submitting six fabricated case citations generated by ChatGPT. In Park v. Kim (2d Cir. 2024), the Second Circuit referred an attorney to its Grievance Panel after a brief cited a nonexistent case. And in Ader v. Ader (State Supreme Court, Commercial Division, October 2025), the court imposed monetary sanctions after counsel used "unvetted AI to defend his use of unvetted AI."

What this means for you: "The AI told me it was right" is not a defense. You are personally responsible for every word in every document you file. Build a verification step into your workflow before any AI-assisted research leaves your desk.


3. Confidentiality Risk Is More Serious Than You Think

Ethics guidance is clear: AI platforms should be treated as third-party vendors receiving client data under your confidentiality obligations. You must use reasonable safeguards and, for anything beyond routine use, obtain advance client consent before inputting confidential information.

But the real wake-up call came in February 2026. In United States v. Heppner (Fed. Ct.), Judge Jed Rakoff ruled that documents generated through a public AI platform like ChatGPT are not protected by attorney-client privilege or work product doctrine. The reasoning was straightforward: AI is not a lawyer, there is no expectation of confidentiality on a consumer platform, and the communication was not made for the purpose of obtaining legal advice from an attorney.

This ruling means that if you or your client input case-specific information into a consumer AI tool, you may have waived privilege entirely — not just for that output, but potentially for the underlying information itself.

What this means for you: Know your firm's AI tools. Understand their data retention and training policies. Enterprise-grade legal AI platforms are a fundamentally different risk profile than consumer chatbots. If your associates are using free-tier AI tools for client work, you have an urgent problem to fix.


4. You Have a Duty to Train and Supervise

The rules of professional conduct impose supervisory obligations on partners and managing lawyers for the work of subordinate lawyers and nonlawyer staff. These duties apply to AI use just as they apply to any other aspect of legal practice.

This is not about banning AI. Professional standards have explicitly recommended prioritizing education over legislation, and the 2025 Access to Justice Report found that while awareness of AI is widespread, "skill development and structural support lag."

Courts are increasingly adopting AI training requirements for judges and court employees. It is only a matter of time before similar expectations are formalized for practicing attorneys. Firms that proactively train their teams will be ahead of the curve; firms that do not will be exposed to malpractice risk every time an associate uses an AI tool without understanding its limitations.

What this means for you: Develop a firm-wide AI use policy. Train every lawyer and staff member who touches AI tools. Document that training. This is not optional professional development — it is a supervisory obligation.


5. The Rules Are Getting Stricter, Not Looser

The trajectory is clear. Senate Bill S2698, introduced in January 2025, would require a separate affidavit disclosing AI use in any civil filing, along with a certification of human review. Governor Hochul signed the RAISE Act in December 2025, establishing the nation's first comprehensive safety governance regime for frontier AI model developers.

Meanwhile, sanctions are escalating. From fines in Mata to the grievance referral in Park v. Kim and monetary sanctions in Ader, courts are sending a consistent message: the grace period for AI-related mistakes is over.

What this means for you: The firms that will thrive are those that treat AI governance as a core operational priority — not an afterthought. Waiting for a uniform rule before taking action is not a strategy. It is a liability.


The Bottom Line

AI-assisted legal research is here to stay. Used well, it can make your practice more efficient and your work product stronger. Used carelessly, it can end your career.

The good news: the standards are knowable, the risks are manageable, and the steps you need to take are practical. The bad news: none of this happens automatically. It requires deliberate investment in training, policy, and oversight.

Start now. Your clients — and your disciplinary committee — will thank you.


This article is for informational purposes only and does not constitute legal advice.

Want help navigating the AI compliance landscape?

We help law firms build the training programs, policies, and oversight systems that today's evolving rules demand.

Request Free Assessment