If you manage a New York law firm, you've probably heard that "there are new AI ethics rules." You may have even skimmed a summary or two. But the reality is more complicated — and more urgent — than most summaries let on.
There is no single "AI ethics opinion" governing New York lawyers. Instead, there's a patchwork: a 90-page NYSBA Task Force report, two formal NYC Bar opinions, a new Unified Court System policy, a proposed Commercial Division rule, individual judge standing orders, and pending state legislation. Each addresses different aspects of AI use. Together, they create a set of obligations that most firms are not meeting.
This article cuts through the noise. We'll cover what each piece of guidance actually says and where most firms fall short, then give you a concrete 5-point checklist you can act on this week.
The Three Pillars of NY AI Ethics Guidance
1. NYSBA Task Force on AI (April 2024)
The New York State Bar Association published a nearly 90-page report from its Task Force on Artificial Intelligence, adopted by the House of Delegates. This is not a numbered ethics opinion — it's a set of guidelines and recommendations.
The four key recommendations:
- Adopt the AI guidelines in the report and create a standing committee for periodic updates
- Prioritize education over legislation — train judges, lawyers, law students, and regulators
- Legislatures should identify gaps in existing law before enacting new AI regulation
- Examine the function of law itself as a governance tool for AI
A critical distinction: unlike guidance in Florida and California, the NYSBA Task Force did not call for attorneys to obtain client consent before using AI. The Task Force suggests disclosure as a best practice under Rule 1.4 (communication) but stops short of recommending a mandate. This is important: it means New York's standard is arguably more permissive than those of other major jurisdictions, but it doesn't mean firms can ignore disclosure entirely.
2. NYC Bar Formal Opinion 2024-5 (August 2024)
This is the most comprehensive ethics opinion on generative AI issued in New York. It addresses 20 Rules of Professional Conduct — far more than any other state's guidance. The key obligations:
- Competence (Rule 1.1): Lawyers must review all AI outputs — especially legal citations and analysis — for accuracy before using them in client work or court filings.
- Confidentiality (Rule 1.6): Without informed client consent, lawyers must not input confidential information into AI systems that share data with third parties or use inputs for model training.
- Supervision (Rules 5.1 & 5.3): Firms must implement policies and training for lawyers and staff on acceptable AI use. Generative AI is treated as the functional equivalent of a nonlawyer assistant — its output must be supervised accordingly.
- Candor to the tribunal (Rule 3.3): Lawyers remain fully responsible for the accuracy of all submissions. AI hallucinations submitted to court constitute a violation.
- Billing (Rule 1.5): Firms must charge reasonable fees. You cannot bill for eight hours of work on a memo that AI helped draft in two.
3. NYC Bar Formal Opinion 2025-6 (December 2025)
The most recent opinion addresses a specific and increasingly common scenario: AI tools that record, transcribe, and summarize attorney-client conversations — think Otter.ai, Fireflies, Microsoft Copilot, or similar meeting assistants.
If your firm uses any AI transcription tool on client calls, you must:
- Obtain client consent before recording
- Evaluate the tool's security — where data is stored, retention periods, whether it trains on your data, deletion rights
- Independently verify any transcript or summary for accuracy
- Consider privilege implications — recordings may be discoverable
This opinion matters because AI meeting tools have become ubiquitous. Many attorneys are using them without any of these safeguards in place.
The Court Rules: A Patchwork Problem
Beyond ethics opinions, New York courts have been issuing their own AI guidance — and it's far from uniform.
The NY Unified Court System issued a formal AI policy in October 2025 for all judges and nonjudicial employees. It requires that AI never substitute for human judgment and that only UCS-approved AI products be used in court operations.
The Commercial Division proposed Rule 6(e) in June 2025, which takes a notably lighter approach: anyone filing material remains responsible for accuracy, but the rule deliberately avoids imposing new disclosure requirements. The Advisory Committee reasoned that in a sophisticated business court, unnecessary certification mandates would be counterproductive.
Individual judges, however, have issued their own standing orders — some requiring affirmative disclosure of AI use in drafting, others requiring certification that AI-generated documents were independently reviewed. There is no master list. Your firm has to check judge by judge.
Pending legislation (Senate Bill S2698) would amend the CPLR to require certification of filings produced using generative AI and mandatory disclosure of AI use in drafting briefs. It hasn't passed yet, but the direction is clear.
The practical problem: a brief filed in the Commercial Division may need no AI disclosure, while the same brief filed before a different judge in the same courthouse might require a signed certification. Firms without a system for tracking these requirements are exposing themselves to sanctions.
The Sanctions Are Real
This isn't theoretical. New York courts have already sanctioned attorneys for AI-related misconduct:
- Mata v. Avianca (S.D.N.Y. 2023): Two attorneys at Levidow, Levidow & Oberman used ChatGPT to draft a brief containing six fabricated case citations with fictitious quotations. They were fined $5,000 jointly and required to mail the sanctions opinion to their client. Judge Castel wrote that the attorneys "abandoned their responsibilities."
- Park v. Kim (2d Cir. 2024): The Second Circuit stated that "citation in a brief to a non-existent case suggests conduct that falls below the basic obligations of counsel."
- Benjamin v. Costco Wholesale: A $1,000 penalty for AI-generated fake citations in a reply brief.
- Fourte (NY Supreme Court 2025): An attorney filed a brief with hallucinated citations, then filed a responsive brief to the sanctions motion that also contained new hallucinated citations. Costs and fees imposed jointly and severally.
The pattern is consistent: courts treat unchecked AI output the same as any other filing deficiency. The attorney is responsible. "AI did it" is not a defense — it's an aggravating factor.
What Most Firms Get Wrong
Based on our work with New York firms, these are the most common compliance gaps:
- No written AI policy. Industry data suggests 79% of legal teams report AI adoption, but only about 10% have governance frameworks in place. Rules 5.1 and 5.3 make this a management-level obligation, not a suggestion. If you don't have a written policy, you're already behind.
- Treating AI output as reliable. Stanford HAI research found hallucination rates of 17–33% even in premium legal AI products. Every output used in client work or filings must be independently verified. "I checked the citations" needs to be documented, not assumed.
- Using consumer-grade AI with client data. Inputting client information into ChatGPT, Claude, or similar tools without enterprise agreements that contractually prohibit training on inputs is a Rule 1.6 problem. The free tier of any AI tool is almost certainly not safe for confidential information.
- No training program. Under Rule 5.3, supervising lawyers can be held responsible for a supervisee's misconduct with AI. "We told them to be careful" is not a reasonable supervisory measure. You need documented training with clear policies on what's permitted and what isn't.
- Billing practices haven't adapted. Both ABA Opinion 512 and NYC Bar Opinion 2024-5 address this. If AI reduces a task from eight hours to two, billing eight hours is a Rule 1.5 issue. Firms need to rethink how they value and bill AI-assisted work.
- Not tracking court-specific requirements. With no uniform statewide rule, the judge-by-judge patchwork of AI disclosure orders creates a trap for firms that don't systematically check standing orders in every case.
- Relying on engagement letter boilerplate for consent. ABA Formal Opinion 512 explicitly states that boilerplate consent in engagement letters is not adequate for informed consent regarding AI use. If your only disclosure is buried in paragraph 14 of your engagement letter, it doesn't count.
The 5-Point AI Compliance Checklist
Here's what your firm should have in place right now:
1. Written AI Usage Policy
A firm-wide policy that specifies which AI tools are approved, what data may be entered into them, how outputs must be verified, and what's prohibited. This is not optional under Rules 5.1 and 5.3. The policy should be signed by every attorney and staff member, reviewed quarterly, and updated as tools and guidance evolve.
2. AI Tool Vetting Process
Before any AI tool touches client data, evaluate: Where is data stored? What are retention periods? Does the vendor train on your inputs? Is there a right to deletion? Do you have an enterprise agreement with appropriate confidentiality protections? Document each evaluation. This applies to every tool — including meeting transcription services (per Opinion 2025-6).
3. Mandatory Verification Protocol
Every AI-generated output used in client work or court filings must be independently verified by a licensed attorney. This means checking every citation, every case quote, every statutory reference, and every factual assertion. Build verification into your workflow — make it a required step, not an afterthought. Document who verified and when.
4. Training Program for All Personnel
Partners, associates, paralegals, and administrative staff all need training on your AI policy. Cover what tools are approved, how to handle confidential data, verification requirements, and billing practices. Document attendance. Run refresher sessions at least twice a year. Under Rule 5.3, "I didn't know" from a supervisee is your problem, not theirs.
5. Court-Specific Compliance Tracking
Maintain a running log of AI disclosure requirements by judge and court. When a new matter is opened, check for standing orders or local rules that require AI disclosure or certification. Assign someone to monitor for new orders. Until New York adopts a uniform rule, this judge-by-judge tracking is the only way to stay compliant.
How New York Compares
For context, here's where New York sits relative to other major jurisdictions:
| Jurisdiction | Key Document | Client Disclosure? |
|---|---|---|
| ABA | Formal Opinion 512 (July 2024) | Yes — informed consent recommended; boilerplate not adequate |
| New York (NYSBA) | Task Force Report (April 2024) | Suggested, not required |
| New York (NYC Bar) | Opinions 2024-5 & 2025-6 | Required for AI transcription tools; existing Rule 1.4 duties for other AI |
| Florida | Opinion 24-1 (Jan 2024) | Yes — disclosure and consent required |
| California | Practical Guidance (2024) | Yes — disclosure required |
| Texas | Opinion 705 (Feb 2025) | No automatic requirement |
New York occupies an unusual middle ground: the NYSBA guidance is more permissive than most states on client disclosure, but the NYC Bar opinions and individual court orders create stringent requirements in specific areas. The net effect is a compliance landscape that's harder to navigate than a single clear mandate would be.
What This Means for Your Firm
The window between "AI ethics guidance exists" and "firms are being sanctioned for non-compliance" has already closed. Mata v. Avianca happened in 2023. The Fourte sanctions happened in 2025. The next case could involve someone at your firm who didn't know the rules.
The good news: compliance is not complicated. It requires a written policy, tool vetting, verification protocols, training, and court tracking. Most firms can implement all five within 30 days.
The risk of inaction is concrete: sanctions, malpractice exposure, client trust erosion, and reputational damage. The cost of compliance is modest by comparison.
Need help building your firm's AI compliance framework?
Fractal Legal helps New York law firms implement AI usage policies, train their teams, and stay ahead of evolving ethics requirements. Our April 10 AI Compliance Workshop covers everything in this article — with hands-on exercises and templates you can use immediately.
This article is for informational purposes only and does not constitute legal advice. For guidance specific to your firm's situation, consult with a qualified attorney.