Tyler is a practicing New York attorney. Ansgar leads AI strategy at Fractal Legal.
If you manage or own a small law firm in New York, you have probably had the same conversation a dozen times this year: "Should we be using AI?" The answer is almost certainly yes. But the more useful question — and the one this guide is designed to answer — is how.
We wrote this piece because most of what gets published about AI in law is aimed at BigLaw. The reality for a 5-to-30-attorney firm is different. Your budget is different. Your staffing is different. Your risk tolerance is different. This guide is for you.
What AI Tools Are Law Firms Actually Using in 2026?
Most law firms are using general-purpose AI assistants — primarily ChatGPT and Claude — alongside a growing number of legal-specific platforms. The tools break into two categories, and the right choice depends on your firm's size, budget, and practice areas.
General-purpose AI tools. ChatGPT (OpenAI) and Claude (Anthropic) remain the most widely adopted AI tools across the legal profession. A 2025 ABA survey found that 65% of lawyers who use AI rely on general-purpose tools rather than legal-specific platforms. These tools handle drafting, research summaries, brainstorming arguments, translating legalese into plain English, and dozens of other tasks. For small firms, they are the practical entry point because they cost $20–$25 per month per user and require no IT infrastructure.
Legal-specific AI platforms. At the enterprise level, tools like Harvey AI (backed by $100M+ in funding and used by firms including Allen & Overy) offer deep integration with legal workflows. Mid-market options include Clio AI (built into the practice management system many small firms already use), Lexis+ AI from LexisNexis, and CoCounsel from Thomson Reuters. These platforms offer better guardrails for legal work — citation checking, jurisdiction-specific research, and data isolation — but they come at a higher price point, often $100–$300 per user per month.
For most small firms, the practical starting point is a combination of ChatGPT or Claude for general drafting and research, plus your existing legal research platform's AI features (Lexis+ AI or Westlaw's CoCounsel). You do not need to buy Harvey. You need to learn how to use what is already available to you. If you want a structured introduction, our guide on AI training that actually works covers the fundamentals.
How Are Small Law Firms Using AI Differently Than Big Firms?
The biggest difference is not the tools — it is training, policy, and institutional support. Big firms have dedicated legal operations teams, innovation departments, and six-figure budgets for AI implementation. Small firms are figuring it out on the fly, often with one tech-curious partner leading the charge.
At AmLaw 100 firms, AI adoption looks like this: enterprise-wide licenses for legal-specific tools, mandatory training programs, detailed AI use policies reviewed by ethics counsel, and dedicated staff to manage vendor relationships and data security. A 2025 Thomson Reuters report found that 85% of large firms had formal AI governance policies in place.
At small firms, the picture is different. According to the same report, only 23% of firms with fewer than 50 attorneys had written AI use policies. Attorneys are experimenting with free or consumer-tier AI tools, often without firm-wide coordination. Some partners are using ChatGPT daily; others have never tried it. There is no training program. There is no policy.
This is the gap that creates real risk. The tools themselves are not the problem. A solo practitioner using Claude responsibly with a clear understanding of its limitations is in a better position than a mid-size firm where associates are pasting client data into free AI tools without anyone knowing. The gap is training and policy, not technology.
If your firm is in this position — using AI informally without a clear framework — the single most valuable step you can take is a structured AI readiness assessment to understand where you stand and what you need.
What Are the Most Common AI Use Cases for Lawyers?
Lawyers are using AI across four main categories of work, roughly ordered from most to least adopted. Each represents a real opportunity for small firms to save time and serve clients better.
Legal research and case analysis. This is the most natural fit for AI in legal practice. Tools like Lexis+ AI and CoCounsel can analyze a legal question, identify relevant authorities, and summarize holdings in minutes rather than hours. Claude and ChatGPT can help attorneys explore arguments, identify counterarguments, and draft research memos — though their outputs must be verified against primary sources. A 2025 LexisNexis study reported that AI-assisted research reduced average research time by 40–60% for routine legal questions.
Document drafting and review. AI excels at producing first drafts of contracts, motions, demand letters, and client correspondence. For contract review specifically, AI tools can identify non-standard clauses, flag missing provisions, and benchmark terms against market standards. The key principle: AI produces the first draft, the attorney produces the final product. Every AI-generated document must be reviewed, edited, and validated by a licensed attorney.
Client communication and intake. Small firms are using AI to draft client-facing communications, summarize case updates, and streamline intake processes. Some firms use AI chatbots on their websites to handle initial client inquiries and schedule consultations. Others use AI to translate complex legal concepts into plain-language explanations for clients. This saves time and improves client satisfaction — clients consistently cite clear communication as one of the most important factors in their experience with a law firm.
Internal operations. Billing narrative cleanup, calendar management, email triage, and administrative task automation represent the "quiet" use cases that save firms hours per week without touching any client-facing work. These are also the lowest-risk applications, making them ideal starting points for firms that are cautious about AI adoption.
What Risks Should Small Firms Watch Out For?
AI introduces real risks that small firms must manage proactively. Ignoring these risks does not make them go away — it makes them worse. Here are the four you need to address immediately.
Confidentiality and data security. When you type client information into ChatGPT's free tier, that data may be used to train future models. This is a clear confidentiality concern under Rule 1.6. The fix: use business or enterprise tiers that offer data isolation (ChatGPT Team/Enterprise, Claude Team/Enterprise), or use tools that process data locally. NYC Bar Formal Opinion 2024-5 draws a clear line between "open" AI systems that share data and "closed" systems that do not. Know which one you are using.
AI hallucinations in legal work. Large language models generate plausible-sounding text that can include fabricated case citations, invented statutes, and inaccurate legal standards. The cautionary tales are well documented: in 2023, two New York attorneys were sanctioned after submitting a ChatGPT-generated brief containing six fictitious case citations (Mata v. Avianca, S.D.N.Y. 2023). In 2024 and 2025, additional sanctions followed in cases across multiple jurisdictions. Every AI-generated legal assertion must be independently verified against primary sources. Period.
Ethics compliance. New York attorneys must comply with ABA Formal Opinion 512 (2024), which maps duties of competence, confidentiality, supervision, communication, and billing onto AI use. NYC Bar Formal Opinion 2024-5 adds specific guidance on data handling and client consent. If your firm has no written AI use policy, you are likely out of compliance with supervisory obligations under Rules 5.1 and 5.3. This is not optional.
Court disclosure requirements. In the Southern and Eastern Districts of New York, a number of judges have issued standing orders or individual rules requiring disclosure of AI use in court filings. Judge Castel's standing order in the SDNY (effective 2023, updated 2024) requires attorneys to disclose AI-assisted research and drafting, and other judges have adopted similar requirements. Failure to disclose can result in sanctions. Before filing any AI-assisted document, check the specific judge's individual rules and standing orders.
How Do You Start Implementing AI at a Small Law Firm?
The firms that succeed with AI follow a structured approach rather than letting adoption happen randomly. Here is a five-step framework that works for firms of any size.
Step 1: Conduct an AI readiness assessment. Before you buy any tool or sign any license, understand where your firm stands. What are your highest-volume, most repetitive tasks? Where are your biggest time sinks? What does your current technology stack look like? Which attorneys are already using AI (they are — trust us)? A proper assessment identifies the use cases where AI will deliver the most value for your specific practice. We offer a free AI readiness assessment designed specifically for small and mid-size New York firms.
Step 2: Create an AI use policy. This is non-negotiable. Your policy should cover: which tools are approved, what data can and cannot be entered into AI systems, how AI-generated work must be reviewed and verified, disclosure requirements for court filings, and billing guidelines for AI-assisted work. It does not need to be 50 pages. Two to three pages of clear, practical rules will satisfy your ethics obligations and protect your firm.
Step 3: Train your entire team — not just one partner. The most common failure mode we see is the "AI champion" model, where one tech-savvy partner learns the tools and everyone else ignores them. Effective AI adoption requires firm-wide training that covers both the technology and the ethics obligations. Every attorney, paralegal, and legal assistant who might interact with AI needs to understand the rules. Our research on what actually works in AI training for law firms found that hands-on, practice-area-specific training produces 3x better adoption rates than generic webinars.
Step 4: Start with low-risk use cases. Do not begin by having AI draft a summary judgment motion in a high-stakes case. Start with internal work: cleaning up billing narratives, drafting internal memos, summarizing depositions for your own review, or generating first drafts of routine correspondence. Build confidence and competence before moving to higher-stakes applications.
Step 5: Monitor, measure, and iterate. Track what is working. How much time are attorneys saving? What error rates are they seeing? Which use cases are delivering real value and which are not worth the effort? Revisit your AI policy quarterly. The tools are evolving fast — what was a limitation six months ago may be solved today, and new risks emerge regularly.
Frequently Asked Questions
Is it ethical for New York lawyers to use AI tools like ChatGPT?
Yes, provided you follow the guardrails established by ABA Formal Opinion 512 and NYC Bar Formal Opinion 2024-5. You must maintain competence with the tools you use, protect client confidentiality by choosing appropriate data-handling tiers, supervise all AI-assisted work, and disclose AI use when required by court rules. Using AI is ethical. Using it carelessly is not.
How much does it cost to implement AI at a small law firm?
The technology itself is surprisingly affordable. ChatGPT Plus or Claude Pro costs $20–$25 per user per month. Legal-specific tools like Clio AI are often bundled with existing practice management subscriptions. Enterprise-grade platforms like Lexis+ AI or CoCounsel run $100–$300 per user per month. The real investment is in training and policy development. Budget $2,000–$10,000 for a proper training program and policy framework depending on firm size, with ongoing costs for refresher training as tools evolve.
Can AI replace junior associates at a small firm?
No. AI handles specific tasks, not roles. It can draft a first version of a contract or summarize a deposition transcript, but it cannot exercise legal judgment, manage client relationships, appear in court, or take ethical responsibility for legal work. What AI does is change the mix of work that junior associates do — less mechanical review and more analysis, strategy, and client interaction. Firms that use AI well find that their associates become more productive and deliver higher-value work, not that they become unnecessary.
Do I need to tell clients I am using AI?
It depends on the context. NYC Bar Opinion 2024-5 advises that if you are using an "open" AI system where client data may be processed by third parties, you should obtain informed client consent. For "closed" enterprise systems with strong data isolation, the disclosure obligation is less clear, but transparency is always the safer approach. Several courts in the Southern and Eastern Districts of New York also require disclosure of AI use in filings. As a best practice, consider adding a brief AI disclosure provision to your engagement letters.
What is the single best first step for a firm that has not started with AI?
Get a clear picture of where you stand. Take our free AI readiness assessment to identify your firm's specific opportunities and risks. From there, you can make informed decisions about tools, training, and policy rather than guessing. If you want hands-on guidance, join our April 10 AI workshop for New York law firms where we walk through implementation step by step with real examples from firms like yours.
This article was prepared by Tyler Johnson, Esq. and Ansgar Lange for the Fractal Legal team. For a confidential discussion about AI training and policy implementation for your firm, contact us.