The Competence Gap
Ninety-five percent of legal leaders are using or implementing AI-enabled software.1 Only ten percent of law firms have formal AI governance policies. That gap is not just a management failure — it is an ethics problem.
New York Rule of Professional Conduct 1.1 requires lawyers to “provide competent representation to a client,” which includes “the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Comment 8 to ABA Model Rule 1.1 — adopted by an increasing number of jurisdictions — makes the technology dimension explicit: a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”
When the technology in question is generative artificial intelligence — capable of drafting briefs, summarizing depositions, reviewing contracts, and conducting legal research in seconds — the competence obligation takes on new urgency. Lawyers who use these tools without understanding them risk violating Rule 1.1. Lawyers who refuse to learn about them may face a different competence question: whether willful ignorance of available technology constitutes a failure to provide the “thoroughness and preparation reasonably necessary” for modern legal practice.
This article provides a practical framework for mid-size New York law firms — those with five to seventy-five attorneys — to build an AI training program that satisfies Rule 1.1 and the growing body of AI-specific ethics guidance.
The Regulatory Landscape in New York
Three overlapping sources of authority now define what competent AI use looks like for New York lawyers.
NYC Bar Formal Opinion 2024-5
Issued in August 2024, this opinion addresses the ethical obligations of lawyers and law firms relating to generative AI.2 It maps the existing Rules of Professional Conduct onto AI use, identifying duties across confidentiality, competence, supervision, candor, client communication, and fees. Its core message: lawyers must understand “to a reasonable degree how the technology works, its limitations, and the applicable Terms of Use and other policies governing the use and exploitation of client data by the product.”
The opinion explicitly requires that lawyers not rely on AI-generated information without independent verification. It further distinguishes between “open” AI systems that may share data with third parties and “closed” enterprise systems — a distinction with direct implications for confidentiality obligations under Rule 1.6.
Notably, the NYC Bar adopted a “guardrails, not hard-and-fast restrictions” approach, recognizing that rigid rules would be outdated before the ink dried. This makes competence-through-training, rather than compliance-through-prohibition, the operative framework.
NYSBA Task Force on Artificial Intelligence
The NYSBA Task Force released its Report and Recommendations in April 2024, representing the most comprehensive guidance from a state bar association at the time.3 Its four principal recommendations prioritize education over legislation:
- Adopt guidelines for attorney use of AI and generative AI
- Prioritize education of judges, lawyers, law students, and regulators
- Identify risks not addressed by existing law for potential new regulation
- Examine the function of law as a governance tool for AI
The Task Force’s emphasis on education is particularly relevant for firm leadership. The report envisions a profession that learns to apply existing ethical frameworks to new technology — not one that waits for regulators to issue specific AI rules. For managing partners, this means the obligation to train attorneys on AI use exists now, under current rules, regardless of whether specific AI regulations follow.
ABA Formal Opinion 512
Released in July 2024, ABA Formal Opinion 512 was the national profession’s first comprehensive ethics guidance on generative AI.4 It addresses six areas: competence (Model Rule 1.1), confidentiality (Model Rule 1.6), communication with clients (Model Rule 1.4), candor toward the tribunal (Model Rules 3.1 and 3.3), supervisory responsibilities (Model Rules 5.1 and 5.3), and reasonable fees.
Two points from Opinion 512 deserve special attention from firm leadership:
Supervision is an affirmative obligation. Under Rules 5.1 and 5.3, partners with managerial authority must establish firm-wide policies on AI use and ensure that both lawyers and non-lawyers are adequately supervised in their use of AI tools. This is not aspirational language — it is an existing ethical requirement that simply had no AI-specific application until now.
Billing must reflect reality. If an AI tool reduces a ten-hour contract review to two hours of attorney time, the firm cannot bill for ten. Opinion 512 also clarifies that lawyers generally may not charge clients for time spent learning how to use AI tools, unless the client specifically requested the use of a particular tool.
Court-Level Requirements
Beyond ethics opinions, individual judges and court systems have created enforceable AI requirements. The NY Unified Court System mandated AI training for all court personnel effective October 2025. While this directive applies to court staff rather than attorneys, it signals the judiciary’s expectation that all participants in the legal system — including attorneys appearing before the court — understand AI.
Multiple judges in the Southern and Eastern Districts of New York have issued standing orders or individual rules requiring:
- disclosure when AI tools are used in preparing court filings
- certification that AI-generated content has been independently verified
- confirmation that all filings comply with the requirements of Federal Rule of Civil Procedure 11
A proposed New York State Senate bill would formalize these requirements, mandating that any attorney who used generative AI to draft legal documents disclose that use upon filing and certify that a human reviewed the submission.5
For firms litigating in federal courts, the practical takeaway is clear: attorneys need to understand their AI disclosure obligations on a judge-by-judge basis, and firm policy must account for this patchwork of requirements.
A Three-Phase Training Framework
The following framework is designed for firms that need to move from ad hoc AI use to a defensible, documented training program. It is calibrated for firms with five to seventy-five attorneys and limited internal technology resources.
Phase 1: Assessment and Policy (Weeks 1–4)
Audit current use. Before training anyone, understand what is already happening. Survey every attorney and staff member: Which AI tools are you using? How often? For what tasks? With what client data? In most firms, the results will be surprising. Attorneys are already using ChatGPT, Copilot, and similar tools — often without firm knowledge or approval.
Draft the AI use policy. A firm AI policy is not optional under Rules 5.1 and 5.3; it is a supervisory obligation. At minimum, the policy should address:
- Approved tools — which AI tools may be used, under what conditions, and which are prohibited
- Confidentiality protocols — rules for when and how client data may be entered into AI systems, distinguishing between open and closed platforms per NYC Bar Opinion 2024-5
- Verification requirements — mandatory independent verification of all AI-generated legal research, citations, and factual claims
- Client disclosure — when and how to inform clients about AI use in their matters
- Supervision chain — who reviews AI-assisted work product and at what stage
- Billing guidelines — how to record and bill time for AI-assisted work
- Court compliance — procedures for checking and complying with judge-specific AI disclosure orders
- Incident response — what to do when an AI tool produces incorrect output that reaches a client or court
Communicate the policy. Distribute it. Require attestation. Make it accessible. A policy that sits in a drawer protects no one.
Phase 2: Training (Weeks 4–8)
Training should not be a single CLE lecture. It should be role-specific, hands-on, and connected to actual workflows.
For partners and supervisory attorneys: Focus on supervisory obligations under Rules 5.1 and 5.3. Partners need to understand what competent AI oversight looks like — not because they will use the tools most, but because they are ethically responsible for those who do. Cover the firm’s AI policy, red flags to watch for in AI-assisted work product, and the specific disclosure requirements of courts where the firm regularly appears.
For associates: Focus on practical tool use within ethical guardrails. This means hands-on workshops using the firm’s approved tools against real (anonymized) legal workflows: drafting with AI assistance, conducting AI-augmented research, reviewing AI-generated output for hallucinations and errors, and properly documenting AI use for billing and court compliance.
For paralegals and staff: Focus on the confidentiality and data-handling aspects. Non-lawyers using AI tools are subject to attorney supervision under Rule 5.3. Training should cover which tools are approved, what information may and may not be entered, and how to escalate questions about AI use.
For all personnel: Cover the fundamentals that everyone in the firm must understand:
- How generative AI works at a conceptual level (probabilistic text generation, not reasoning)
- Why AI “hallucinations” occur and how to detect them
- The confidentiality risks of entering data into cloud-based AI tools
- The firm’s AI use policy and individual responsibilities under it
Phase 3: Maintenance (Ongoing)
The AI regulatory landscape is moving fast. Three new court orders or ethics opinions on AI can arrive in a single quarter. A training program that is not maintained is a training program that creates false confidence.
Quarterly policy reviews. Assign a responsible partner or committee to review the AI use policy quarterly against new ethics opinions, court orders, and legislation. Update and redistribute as needed.
Continuing education. Schedule at least two AI-focused training sessions per year beyond the initial rollout. These should address new tools, new rules, and lessons learned from the firm’s own AI use.
Incident tracking. Maintain a log of AI-related issues — hallucinated citations caught in review, confidentiality questions, billing disputes, court compliance near-misses. These incidents are training material. They are also evidence of a firm that takes its supervisory obligations seriously.
Onboarding integration. Every new hire — attorney or staff — should receive AI training as part of onboarding. The policy attestation should be part of the standard intake process.
Common Failure Modes
Firms that have attempted AI adoption without a structured training program tend to fail in predictable ways:
The “unofficial tool” problem. Associates use ChatGPT or similar tools on their personal devices, outside the firm’s security environment, without supervision or policy guidance. The firm has no visibility, no control, and no defensible position if something goes wrong.
The “one-and-done” training. A single CLE session on “AI and the Law” does not satisfy the ongoing supervisory obligations of Rule 5.1. Competence is not a box to check — it is a continuous duty that requires continuous training.
The “policy without training” mistake. Some firms have drafted AI policies without providing the training necessary to comply with them. A policy that prohibits “entering confidential client information into unapproved AI tools” is useless if attorneys do not understand which tools are approved, which are not, and why.
The “ban everything” overcorrection. A blanket prohibition on AI use does not satisfy Rule 1.1’s competence requirement. If AI tools offer material benefits to clients — faster research, lower costs, more thorough review — a firm that prohibits their use may face questions about whether it is providing the “thoroughness and preparation reasonably necessary for the representation.”
Conclusion
The duty of competence has always required lawyers to keep pace with the tools of the profession. When typewriters gave way to word processors, when Westlaw replaced the library stacks, when email displaced the postal service, the profession adapted — not because regulators mandated it, but because competent practice demanded it.
Generative AI is the same kind of inflection point, but faster. The ethics opinions are clear. The court requirements are multiplying. The enforcement actions — from Mata v. Avianca to the growing number of sanctions for unverified AI-generated citations — provide cautionary examples.
For managing partners and firm leadership, the path forward is straightforward: assess your firm’s current AI use, draft a policy that meets your ethical obligations, train every attorney and staff member against that policy, and maintain the program as the rules evolve. This is not an innovation initiative. It is a compliance requirement under rules that already exist.
The firms that build this infrastructure now — while the regulatory landscape is still forming — will be better positioned than those that wait for a crisis to force the issue. The NYSBA Task Force had it right: education first, legislation later. The time for that education is now.
Need help building your firm’s AI training program?
Start with a free AI Readiness Assessment — we’ll map your current exposure and recommend a concrete action plan.
Endnotes
- Thomson Reuters, 2025 Generative AI in Professional Services Report.
- New York City Bar Association, Committee on Professional Ethics, Formal Opinion 2024-5: Ethical Obligations of Lawyers and Law Firms Relating to the Use of Generative Artificial Intelligence in the Practice of Law (August 2024).
- New York State Bar Association, Report and Recommendations of the Task Force on Artificial Intelligence (April 2024).
- American Bar Association, Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512: Generative Artificial Intelligence Tools (July 2024).
- New York State Senate Bill S2698 (2025), proposing mandatory AI disclosure requirements for attorneys filing documents with New York courts.