Ethics & Compliance

NY Ethics on AI Disclosure: What Every Law Firm Needs to Know

The NYSBA Task Force report, NYC Bar opinions, and court rules have created a web of AI obligations for New York attorneys. Here's what they actually require — and a practical checklist for compliance.

March 2026 · 10 min read

If you manage a New York law firm, you've probably heard that "there are new AI ethics rules." You may have even skimmed a summary or two. But the reality is more complicated — and more urgent — than most summaries let on.

There is no single "AI ethics opinion" governing New York lawyers. Instead, there's a patchwork: a 90-page NYSBA Task Force report, two formal NYC Bar opinions, a new Unified Court System policy, a proposed Commercial Division rule, individual judge standing orders, and pending state legislation. Each addresses different aspects of AI use. Together, they create a set of obligations that most firms are not meeting.

This article cuts through the noise. We'll cover what each piece of guidance actually says and where most firms fall short, then give you a concrete 5-point checklist you can act on this week.

The Three Pillars of NY AI Ethics Guidance

1. NYSBA Task Force on AI (April 2024)

The New York State Bar Association published a nearly 90-page report from its Task Force on Artificial Intelligence, adopted by the House of Delegates. This is not a numbered ethics opinion — it's a set of guidelines and recommendations.

The four key recommendations:

A critical distinction: unlike Florida and California, the NYSBA Task Force did not require attorneys to obtain client consent before using AI. The Task Force suggests disclosure as best practice under Rule 1.4 (communication), but stops short of mandating it. This is important — it means New York's standard is arguably more permissive than other major jurisdictions, but it doesn't mean firms can ignore disclosure entirely.

2. NYC Bar Formal Opinion 2024-5 (August 2024)

This is the most comprehensive ethics opinion on generative AI issued in New York. It addresses 20 Rules of Professional Conduct — far more than any other state's guidance. The key obligations:

3. NYC Bar Formal Opinion 2025-6 (December 2025)

The most recent opinion addresses a specific and increasingly common scenario: AI tools that record, transcribe, and summarize attorney-client conversations — think Otter.ai, Fireflies, Microsoft Copilot, or similar meeting assistants.

If your firm uses any AI transcription tool on client calls, you must:

This opinion matters because AI meeting tools have become ubiquitous. Many attorneys are using them without any of these safeguards in place.

The Court Rules: A Patchwork Problem

Beyond ethics opinions, New York courts have been issuing their own AI guidance — and it's far from uniform.

The NY Unified Court System issued a formal AI policy in October 2025 for all judges and nonjudicial employees. It requires that AI never substitute for human judgment and that only UCS-approved AI products be used in court operations.

The Commercial Division proposed Rule 6(e) in June 2025, which takes a notably lighter approach: anyone filing material remains responsible for accuracy, but the rule deliberately avoids imposing new disclosure requirements. The Advisory Committee reasoned that in a sophisticated business court, unnecessary certification mandates would be counterproductive.

Individual judges, however, have issued their own standing orders — some requiring affirmative disclosure of AI use in drafting, others requiring certification that AI-generated documents were independently reviewed. There is no master list. Your firm has to check judge by judge.

Pending legislation (Senate Bill S2698) would amend the CPLR to require certification of filings produced using generative AI and mandatory disclosure of AI use in drafting briefs. It hasn't passed yet, but the direction is clear.

The practical problem: a brief filed in the Commercial Division may need no AI disclosure, while the same brief filed before a different judge in the same courthouse might require a signed certification. Firms without a system for tracking these requirements are exposing themselves to sanctions.

The Sanctions Are Real

This isn't theoretical. New York courts have already sanctioned attorneys for AI-related misconduct:

The pattern is consistent: courts treat unchecked AI output the same as any other filing deficiency. The attorney is responsible. "AI did it" is not a defense — it's an aggravating factor.

What Most Firms Get Wrong

Based on our work with New York firms, these are the most common compliance gaps:

  1. No written AI policy. Industry data suggests 79% of legal teams report AI adoption, but only about 10% have governance frameworks in place. Rules 5.1 and 5.3 make this a management-level obligation, not a suggestion. If you don't have a written policy, you're already behind.
  2. Treating AI output as reliable. Stanford HAI research found hallucination rates of 17–33% even in premium legal AI products. Every output used in client work or filings must be independently verified. "I checked the citations" needs to be documented, not assumed.
  3. Using consumer-grade AI with client data. Inputting client information into ChatGPT, Claude, or similar tools without enterprise agreements that contractually prohibit training on inputs is a Rule 1.6 problem. The free tier of any AI tool is almost certainly not safe for confidential information.
  4. No training program. Under Rule 5.3, the firm is liable for supervisee misconduct with AI. "We told them to be careful" is not a reasonable supervisory measure. You need documented training with clear policies on what's permitted and what isn't.
  5. Billing practices haven't adapted. Both ABA Opinion 512 and NYC Bar Opinion 2024-5 address this. If AI reduces a task from eight hours to two, billing eight hours is a Rule 1.5 issue. Firms need to rethink how they value and bill AI-assisted work.
  6. Not tracking court-specific requirements. With no uniform statewide rule, the judge-by-judge patchwork of AI disclosure orders creates a trap for firms that don't systematically check standing orders in every case.
  7. Relying on engagement letter boilerplate for consent. ABA Formal Opinion 512 explicitly states that boilerplate consent in engagement letters is not adequate for informed consent regarding AI use. If your only disclosure is buried in paragraph 14 of your engagement letter, it doesn't count.
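To make the billing point concrete, here is back-of-the-envelope arithmetic. The rate and hours are hypothetical, purely to illustrate the Rule 1.5 exposure:

```python
# Hypothetical numbers: an 8-hour task that AI reduces to 2 hours
# of attorney time, billed at an assumed $400/hour rate.
hourly_rate = 400
hours_without_ai = 8
hours_with_ai = 2

honest_fee = hours_with_ai * hourly_rate       # fee for time actually spent
inflated_fee = hours_without_ai * hourly_rate  # fee for time the task used to take

# Billing the pre-AI hours inflates the fee fourfold for the same work.
assert inflated_fee / honest_fee == 4.0
print(honest_fee, inflated_fee)  # 800 3200
```

The point is not that firms must pass every efficiency gain through to the client, but that billing as if the AI-assisted hours were never saved invites a reasonableness challenge.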

The 5-Point AI Compliance Checklist

Here's what your firm should have in place right now:

1. Written AI Usage Policy

A firm-wide policy that specifies which AI tools are approved, what data may be entered into them, how outputs must be verified, and what is prohibited. This is not optional under Rules 5.1 and 5.3. The policy should be signed by every attorney and staff member, reviewed quarterly, and updated as tools and guidance evolve.

2. AI Tool Vetting Process

Before any AI tool touches client data, evaluate: Where is data stored? What are retention periods? Does the vendor train on your inputs? Is there a right to deletion? Do you have an enterprise agreement with appropriate confidentiality protections? Document each evaluation. This applies to every tool — including meeting transcription services (per Opinion 2025-6).
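As a practical aid, the vetting questions above can be captured in a simple internal record so each evaluation is documented consistently. This is an illustrative sketch only; the field names and pass/fail logic are assumptions, not requirements from any opinion:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolVetting:
    """One vendor evaluation record (hypothetical schema)."""
    tool_name: str
    data_storage_location: str   # where client data is stored
    retention_period_days: int   # vendor's stated retention period
    trains_on_inputs: bool       # does the vendor train models on your inputs?
    deletion_right: bool         # contractual right to deletion?
    enterprise_agreement: bool   # confidentiality-protective agreement in place?
    reviewed_on: date = field(default_factory=date.today)

    def approved(self) -> bool:
        # Example gate: never trains on inputs, deletion available,
        # and an enterprise agreement covers confidentiality.
        return (not self.trains_on_inputs
                and self.deletion_right
                and self.enterprise_agreement)

# A consumer-tier tool that trains on inputs fails this gate.
consumer_tool = ToolVetting(
    tool_name="GenericChat (free tier)",
    data_storage_location="unknown",
    retention_period_days=0,
    trains_on_inputs=True,
    deletion_right=False,
    enterprise_agreement=False,
)
assert not consumer_tool.approved()
```

A spreadsheet serves the same purpose; what matters is that every question gets asked and the answers are written down before the tool touches client data.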

3. Mandatory Verification Protocol

Every AI-generated output used in client work or court filings must be independently verified by a licensed attorney. This means checking every citation, every case quote, every statutory reference, and every factual assertion. Build verification into your workflow — make it a required step, not an afterthought. Document who verified and when.
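One lightweight way to document who verified and when is a structured log entry per document. The structure below is a hypothetical sketch, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VerificationEntry:
    """Illustrative verification log entry; field names are invented."""
    document: str
    verified_by: str        # licensed attorney who checked the output
    verified_at: datetime
    citations_checked: int
    citations_total: int

    def complete(self) -> bool:
        # Verification is complete only when every citation was checked.
        return self.citations_checked == self.citations_total

entry = VerificationEntry(
    document="Motion to Dismiss (draft v3)",
    verified_by="A. Associate",
    verified_at=datetime(2026, 3, 2, 14, 30),
    citations_checked=12,
    citations_total=14,
)
assert not entry.complete()  # two citations still unverified
```

However it is implemented, the log converts "I checked the citations" from an assumption into a documented fact.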

4. Training Program for All Personnel

Partners, associates, paralegals, and administrative staff all need training on your AI policy. Cover what tools are approved, how to handle confidential data, verification requirements, and billing practices. Document attendance. Run refresher sessions at least twice a year. Under Rule 5.3, "I didn't know" from a supervisee is your problem, not theirs.

5. Court-Specific Compliance Tracking

Maintain a running log of AI disclosure requirements by judge and court. When a new matter is opened, check for standing orders or local rules that require AI disclosure or certification. Assign someone to monitor for new orders. Until New York adopts a uniform rule, this judge-by-judge tracking is the only way to stay compliant.
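A minimal sketch of such a log, queried at matter intake. The judge names and requirement text are invented purely for illustration; a real log would be populated from each judge's actual standing orders and kept current:

```python
# Hypothetical judge-by-judge log of AI disclosure requirements.
STANDING_ORDERS = {
    "Hon. Example Judge A": "certify independent review of AI-assisted drafts",
    "Hon. Example Judge B": "affirmatively disclose any generative AI use",
}

def intake_check(judge: str) -> str:
    """Run at matter intake: return the AI requirement on file for this
    judge, or flag that chambers rules must be checked manually."""
    return STANDING_ORDERS.get(judge, "no order on file -- check chambers rules")

assert "disclose" in intake_check("Hon. Example Judge B")
assert "check chambers" in intake_check("Hon. Newly Assigned Judge")
```

The unknown-judge fallback is the important design choice: the absence of a logged order should trigger a manual check, never be treated as "no requirement."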

How New York Compares

For context, here's where New York sits relative to other major jurisdictions:

| Jurisdiction | Key Document | Client Disclosure? |
| --- | --- | --- |
| ABA | Formal Opinion 512 (July 2024) | Yes — informed consent recommended; boilerplate not adequate |
| New York (NYSBA) | Task Force Report (April 2024) | Suggested, not required |
| New York (NYC Bar) | Opinions 2024-5 & 2025-6 | Required for AI transcription tools; existing Rule 1.4 duties for other AI |
| Florida | Opinion 24-1 (Jan 2024) | Yes — disclosure and consent required |
| California | Practical Guidance (2024) | Yes — disclosure required |
| Texas | Opinion 705 (Feb 2025) | No automatic requirement |

New York occupies an unusual middle ground: the NYSBA guidance is more permissive than most states on client disclosure, but the NYC Bar opinions and individual court orders create stringent requirements in specific areas. The net effect is a compliance landscape that's harder to navigate than a single clear mandate would be.

What This Means for Your Firm

The window between "AI ethics guidance exists" and "firms are being sanctioned for non-compliance" has already closed. Mata v. Avianca happened in 2023. The Fourte sanctions happened in 2025. The next case could involve your firm or someone at your firm who didn't know the rules.

The good news: compliance is not complicated. It requires a written policy, tool vetting, verification protocols, training, and court tracking. Most firms can implement all five within 30 days.

The risk of inaction is concrete: sanctions, malpractice exposure, client trust erosion, and reputational damage. The cost of compliance is modest by comparison.


Need help building your firm's AI compliance framework?

Fractal Legal helps New York law firms implement AI usage policies, train their teams, and stay ahead of evolving ethics requirements. Our April 10 AI Compliance Workshop covers everything in this article — with hands-on exercises and templates you can use immediately.


This article is for informational purposes only and does not constitute legal advice. For guidance specific to your firm's situation, consult with a qualified attorney.