AI-powered legal research tools are no longer experimental. They are on associates' desktops, in solo practitioners' browser tabs, and — increasingly — in the crosshairs of judges and disciplinary committees. If you practice in New York and you have not yet developed a clear framework for how your firm uses these tools, you are already behind.
Here are five things every NY lawyer needs to understand right now.
1. There Is No Single Rule — and That Is the Problem
New York does not have a unified, statewide rule governing AI use in legal practice. What it has is a patchwork: the NYSBA Task Force Report (April 2024), the NYC Bar's Formal Opinion 2024-5, the ABA's Formal Opinion 512, standing orders from individual judges across the SDNY, EDNY, and state supreme courts, and the NY Unified Court System's interim AI policy that took effect in October 2025.
Each of these sources says something slightly different. The NYSBA guidelines do not require client consent before using AI — a more permissive stance than the ABA's Opinion 512, which demands informed consent that goes beyond boilerplate language in engagement letters. Meanwhile, SDNY judges like Vernon Broderick and John Cronan have their own disclosure and certification requirements that vary by courtroom.
What this means for you: Compliance is not a one-time checklist. It requires knowing which authority applies in your specific matter, before your specific judge, in your specific jurisdiction. If your firm handles matters in multiple courts, you need a system for tracking these requirements.
2. Verification Is Not Optional — It Is Your Professional Obligation
Every authoritative source — the NYSBA Task Force, NYC Bar Opinion 2024-5, ABA Opinion 512, and every judge who has issued a standing order — agrees on one point: lawyers must independently verify every citation, quotation, and legal proposition generated by AI.
This is not aspirational guidance. It is an enforceable duty under NY Rules of Professional Conduct 1.1 (Competence), 3.1 (Meritorious Claims), and 3.3 (Candor Toward the Tribunal).
The consequences of failing to verify are no longer hypothetical. In Mata v. Avianca (SDNY, 2023), attorneys were fined $5,000 for submitting six fabricated case citations generated by ChatGPT. In Park v. Kim (2d Cir., 2024), the Second Circuit referred an attorney to its Grievance Panel after a brief cited a nonexistent case. And in Ader v. Ader (NY Supreme Court, Commercial Division, October 2025), the court imposed monetary sanctions after counsel used "unvetted AI to defend his use of unvetted AI."
What this means for you: "The AI told me it was right" is not a defense. You are personally responsible for every word in every document you file. Build a verification step into your workflow before any AI-assisted research leaves your desk.
3. Confidentiality Risk Is More Serious Than You Think
NYC Bar Opinion 2024-5 is clear: AI platforms should be treated as third-party vendors receiving client data under Rule 1.6. You must use reasonable safeguards and, for anything beyond routine use, obtain advance client consent before inputting confidential information.
But the real wake-up call came in February 2026. In United States v. Heppner (SDNY), Judge Jed Rakoff ruled that documents generated through a public AI platform like ChatGPT are not protected by attorney-client privilege or work product doctrine. The reasoning was straightforward: AI is not a lawyer, there is no expectation of confidentiality on a consumer platform, and the communication was not made for the purpose of obtaining legal advice from an attorney.
This ruling means that if you or your client input case-specific information into a consumer AI tool, you may have waived privilege entirely — not just for that output, but potentially for the underlying information itself.
What this means for you: Know your firm's AI tools. Understand their data retention and training policies. Enterprise-grade legal AI platforms carry a fundamentally different risk profile than consumer chatbots. If your associates are using free-tier AI tools for client work, you have an urgent problem to fix.
4. You Have a Duty to Train and Supervise
Rules 5.1 and 5.3 of the NY Rules of Professional Conduct impose supervisory obligations on partners and supervising lawyers. These rules apply to AI use just as they apply to any other aspect of legal practice.
This is not about banning AI. The NYSBA Task Force explicitly recommended prioritizing education over legislation, and the 2025 Access to Justice Report found that while awareness of AI is widespread, "skill development and structural support lag."
The NY Unified Court System has already mandated AI training for all judges and court employees. It is only a matter of time before similar expectations are formalized for practicing attorneys. Firms that proactively train their teams will be ahead of the curve; firms that do not will be exposed to malpractice risk every time an associate uses an AI tool without understanding its limitations.
What this means for you: Develop a firm-wide AI use policy. Train every lawyer and staff member who touches AI tools. Document that training. This is not optional professional development — it is a supervisory obligation.
5. The Rules Are Getting Stricter, Not Looser
The trajectory is clear. Senate Bill S2698, introduced in January 2025, would require a separate affidavit disclosing AI use in any civil filing, along with a certification of human review. Governor Hochul signed the RAISE Act in December 2025, establishing the nation's first comprehensive safety governance regime for frontier AI model developers.
Meanwhile, sanctions are escalating. From fines in Mata to the grievance referral in Park v. Kim and monetary sanctions in Ader, courts are sending a consistent message: the grace period for AI-related mistakes is over.
What this means for you: The firms that will thrive are those that treat AI governance as a core operational priority — not an afterthought. Waiting for a uniform rule before taking action is not a strategy. It is a liability.
The Bottom Line
AI-assisted legal research is here to stay. Used well, it can make your practice more efficient and your work product stronger. Used carelessly, it can end your career.
The good news: the standards are knowable, the risks are manageable, and the steps you need to take are practical. The bad news: none of this happens automatically. It requires deliberate investment in training, policy, and oversight.
Start now. Your clients — and your disciplinary committee — will thank you.
This article is for informational purposes only and does not constitute legal advice.