On Agentic AI for Legal Teams
Introduction

AI agents represent a meaningful shift in how legal teams use technology. Traditional generative AI responds to prompts by producing content. Agents go further. They act. By combining language models with goals, tools, and memory, agents execute multi-step workflows across connected systems. For legal teams, this expands automation beyond isolated tasks into coordinated processes. Agents can review intake requests, extract key facts, validate deadlines against matter management systems, propose task assignments, and draft communications. Each step informs the next. This is not conversational assistance. It is software operating toward defined outcomes.

That capability creates leverage by reducing coordination work and allowing professionals to focus on judgment-intensive decisions. It also introduces new risk. Because agents operate across systems, weak permissions, unclear escalation logic, or poor controls can propagate errors quickly. This article explains how agentic systems work, where they are already being used in legal operations, and how to adopt them responsibly, with a focus on security and governance.

Understanding the Shift

Most legal teams encounter generative AI through tools that summarize contracts, assist with research, or support drafting. These systems follow a simple pattern: prompt in, response out. Agentic AI operates differently. Instead of producing a single response, an agent pursues a goal across multiple steps. It can call tools, access systems, retain context, and decide what to do next based on results along the way.

The difference is clear when comparing a chatbot asked to summarize a contract with an agent instructed to process contract intake. The chatbot produces a summary. The agent may classify the request, select the appropriate template, compare terms against standards, flag deviations, route the matter to the correct reviewer, and log the action. This shift matters because it changes both capability and risk.
Agents manage workflows, not just tasks. They move across systems, not just within one. Governance models must reflect that expanded scope.

What Makes an Agent

An AI agent consists of four core elements: a language model, tools, memory, and goals.

The language model provides reasoning. It interprets instructions, plans steps, and generates outputs such as summaries or drafts. While this is the same underlying technology used in chatbots, here it functions as part of a larger system.

Tools enable action. They allow agents to search databases, read documents, send messages, update records, and interact with enterprise platforms. Each tool is an access point that requires authorization and logging.

Memory provides continuity. It allows agents to track prior actions, maintain context, and avoid repeating work. Memory may persist only within a task or across sessions, raising questions about retention and handling of sensitive information.

Goals direct behavior. Unlike chatbots, which react to prompts, agents work toward defined objectives. Who sets those objectives, and how they are constrained, are core governance concerns.

How Agents Operate

Agents operate in iterative loops. At each step, the agent assesses its current state, determines the next action, executes a tool, evaluates the result, and decides whether to continue. This loop continues until the objective is met or a stopping condition applies.

For example, an agent preparing a contract review package may identify the agreement type, retrieve the relevant playbook, compare terms, flag deviations, draft a summary, route the materials, and record the activity. Each decision depends on what the agent learned in the prior step.

Agents are adaptive rather than scripted. They adjust based on inputs and outcomes. This flexibility distinguishes them from traditional automation and explains why governance must emphasize oversight, constraints, and accountability rather than static rules alone.
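The assess, act, evaluate loop described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation: the planner here follows a fixed two-step contract-intake plan where a real agent would consult a language model, and every tool name is invented for the example.

```python
# Minimal, self-contained sketch of an agent's iterative loop.
# The "planner" is a hard-coded stand-in for a language model call,
# and the tools are trivial placeholders.

MAX_STEPS = 10  # stopping condition to prevent runaway execution

def run_agent(goal, plan_next, tools):
    """Loop: assess current state, pick a tool, execute it, record the result."""
    memory = []
    for step in range(MAX_STEPS):
        action = plan_next(goal, memory)        # assess state, decide next step
        if action is None:                      # objective met: stop
            return memory
        tool_name, args = action
        result = tools[tool_name](args)         # execute the chosen tool
        memory.append((tool_name, result))      # record so the next step can build on it
    raise RuntimeError("stopping condition hit before the goal was met")

# Hypothetical tools for a contract-intake workflow
tools = {
    "classify": lambda doc: "NDA",
    "route":    lambda kind: f"routed {kind} to reviewer",
}

def plan_next(goal, memory):
    if not memory:
        return ("classify", goal)               # step 1: identify agreement type
    if len(memory) == 1:
        return ("route", memory[0][1])          # step 2: route based on step 1's result
    return None                                 # done

trail = run_agent("incoming contract", plan_next, tools)
```

The point of the sketch is the shape, not the logic: each iteration depends on what the prior step produced, and the loop carries an explicit stopping condition, which is exactly where governance controls attach.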
Agent Architectures and Governance Implications

Single-agent systems assign all steps to one agent. These systems work well for focused workflows such as intake processing or document review. Governance is relatively straightforward because permissions and audit trails are centralized.

Multi-agent systems divide work among specialized agents, coordinated through an orchestration layer. This mirrors how legal teams operate but introduces governance challenges around data handoffs, accountability, and agent-to-agent communication.

Human-in-the-loop designs embed review at defined decision points. Agents handle routine processing but pause before taking consequential actions. These checkpoints define acceptable autonomy. Too many reduce efficiency. Too few increase exposure to error.

Practical Use Cases in Legal

Legal teams are already deploying agents across several workflow categories, each with distinct governance requirements.

Contract Intake and Triage: Agents can classify submissions, extract metadata, validate completeness, and route matters based on risk or value. Governance focuses on routing logic and access control.

Playbook Compliance Review: Agents can compare agreements against internal standards and flag deviations. Version control is critical to avoid applying outdated positions.

Due Diligence Coordination: Agents can track requests, monitor data rooms, and route documents to subject matter experts. Strong access controls and audit trails are essential.

Legal Research Assembly: Agents can retrieve authorities and assemble structured research outputs. Governance must address cost controls, citation quality, and currency.

Matter Management Updates: Agents can update systems of record based on communications and documents. Write access requires strict safeguards to prevent inaccurate records.

Compliance Monitoring: Agents can track regulatory updates and contractual obligations, generating alerts and routing tasks with documented human oversight.
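The access-control and audit-trail requirements that recur across these use cases can be enforced in code rather than merely documented. Below is a minimal sketch with invented agent and tool names: every tool call is checked against an allowlist tied to the agent's identity, and both allowed and denied calls are logged before anything executes.

```python
# Sketch of a technically enforced permission boundary with an audit trail.
# The agent identity, tool names, and allowlist are illustrative only.

audit_log = []

ALLOWED = {
    "intake-agent": {"read_document", "classify"},   # narrowly scoped: no write access
}

def call_tool(agent_id, tool_name, run):
    """Check the agent's allowlist, log the decision, then run the tool."""
    if tool_name not in ALLOWED.get(agent_id, set()):
        audit_log.append((agent_id, tool_name, "DENIED"))
        raise PermissionError(f"{agent_id} may not call {tool_name}")
    audit_log.append((agent_id, tool_name, "OK"))
    return run()

# A permitted read succeeds; a write to the system of record is blocked.
result = call_tool("intake-agent", "read_document", lambda: "doc text")
try:
    call_tool("intake-agent", "update_record", lambda: "write!")
except PermissionError:
    pass  # the denial itself is recorded in the audit log
```

The design choice worth noting: the denied call still appears in the log, so the audit trail supports reconstruction of attempted actions, not just completed ones.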
Security Architecture for Agent Deployment

Agent deployments extend across systems in ways that traditional security models may not anticipate. Agents should have distinct identities, narrowly scoped permissions, and clear audit trails. Network segmentation and API gateways limit exposure and provide control points for monitoring.

Data protection depends on classification, encryption, and minimization. Not all data should be accessible to agents, and retention should be limited. When third-party services are involved, teams must assess where processing occurs, who can access logs, and how data is retained. Sensitive workflows may justify on-premises deployment.

Governance Frameworks for Agent Operations

Security addresses access. Governance addresses accountability, quality, compliance, and oversight. Policies should define acceptable use, approval thresholds, and prohibited applications. Standards translate policy into measurable requirements. Monitoring should track performance, quality, compliance, and escalation behavior. Incident response plans should address detection, containment, and remediation of agent errors. Training ensures stakeholders understand what agents do, where limits apply, and how accountability is assigned. Agents support work. Humans remain responsible.

Risk Management Considerations

Agents should operate with minimum necessary permissions, enforced technically rather than just documented. Escalation paths must reflect value, risk, and uncertainty, and route matters to appropriate expertise. Audit trails should allow reconstruction of actions and decisions; agents are not accountable entities. Stopping conditions prevent runaway execution. Ongoing testing detects model drift and failure modes. Vendor dependencies require contingency planning for outages, pricing changes, or service degradation.

Implementation Guidance

Effective adoption progresses from simple to complex. Start by mapping workflows and identifying friction points.
Begin with bounded tasks that have clear inputs and outputs. Maintain human review during early deployment. Measure outcomes such as time savings, error rates, rework, and escalation frequency. Use evidence to refine configurations and expand scope gradually. Agent deployment is an ongoing discipline, not a one-time implementation.

Concluding Thoughts

Agentic AI changes what automation can accomplish in legal work. Agents coordinate, maintain context, and execute multi-step processes across systems. That power requires deliberate governance. Clear permissions, escalation logic, accountability, and security controls enable scale without sacrificing trust. Governance does not slow adoption. It enables sustainable adoption. When done well, agents extend human judgment rather than replacing it.

Glossary

AI Agent: Software that combines a language model with goals, tools, and memory to execute multi-step tasks autonomously within defined limits.

Language Model: The reasoning component that interprets instructions, plans actions, and generates outputs such as summaries, drafts, or classifications.

Tool Use: An agent's ability to interact with external systems, including document repositories, databases, email, and enterprise applications.

Agent Memory: The mechanism that allows an agent to retain context about prior actions and decisions during or across workflows.

Human-in-the-Loop: A workflow design where agents pause at defined points for human review before taking consequential actions.

Permission Boundary: The enforced scope of systems, data, and actions an agent is authorized to access or perform.

Audit Trail: A detailed record of agent actions and tool usage that supports review, accountability, and incident response.

Frequently Asked Questions

How is an agent different from a chatbot? A chatbot responds to a single prompt. An agent pursues a goal across multiple steps, uses tools, retains context, and decides what to do next.
What legal workflows benefit most from agents? Work that involves coordination rather than judgment, such as intake triage, document routing, compliance tracking, and structured reviews.

Do agents operate without human oversight? They can, but responsible deployments define when agents must pause and escalate, especially for high-risk or high-value actions.

Are agents secure by default? No. Security depends on how permissions, system access, monitoring, and logging are designed and enforced.

Who is accountable for agent actions? The organization remains accountable. Agents execute instructions but do not assume responsibility.

Can agents access sensitive or privileged information? Only when explicitly permitted. Many teams restrict agents from handling certain data types by design.

Where should teams begin with agent adoption? Start with narrow, well-defined tasks, maintain human review early, and expand scope based on observed performance.
Generative AI for Legal Teams
Introduction

Most confusion about AI in legal teams comes from misunderstanding how these systems operate. When mechanics remain opaque, risk discussions drift toward extremes. Either AI feels magical, or it feels reckless. Neither position supports responsible adoption. Clarity about how generative models, agents, and multimodal systems function changes everything. Once teams understand what these systems are doing under the hood, conversations about use cases, guardrails, and accountability become more grounded and productive.

Consider a common scenario: a team enables AI-powered contract summarization and quickly sees faster turnaround on first drafts. Weeks later, inconsistent summaries begin to surface because no one defined review standards or ownership. The issue is not the technology. It is the absence of process design around it. Understanding mechanics helps teams design workflows that assume review, judgment, and accountability from the start.

How Generative AI Works

Traditional AI focuses on classification. It answers questions like whether an email is spam or which category a document belongs to. Generative AI is fundamentally different. It produces new output such as text, images, code, and summaries.

During training, models analyze massive volumes of data and adjust internal numerical weights to become highly effective at predicting likely continuations. They do not store concepts or beliefs. They learn statistical relationships between words, shapes, sounds, and patterns.

When you provide a prompt, your input converts into internal representations. The model then generates output step by step. At each moment, attention mechanisms determine which parts of the context matter most. This allows responses to remain coherent across long passages or complex instructions.

Outputs feel intelligent because models absorb patterns from human-created material. But they do not understand meaning or truth. They estimate plausibility.
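The "predict the likely continuation" idea can be made concrete with a toy. The sketch below counts which word follows which in a tiny invented corpus and predicts the most frequent successor. Real models learn billions of weights and use attention rather than raw counts, but the objective is the same: a plausible continuation, not a verified truth.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which in a
# tiny invented corpus, then predict the most frequent continuation.
corpus = "the party shall indemnify the party and the party shall notify the other party".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # count observed continuations

def predict_next(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

prediction = predict_next("the")        # "party" is the most frequent successor
```

Note what the toy cannot do: it has no notion of whether "party" is the correct word for a given clause, only that it is the most common one. That gap between frequency and correctness is the seed of hallucination.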
This distinction explains both their strengths and their limitations. For example, a system can rapidly draft a commercial clause based on thousands of similar agreements. It can sound confident and well structured. But it cannot assess whether that clause aligns with your risk tolerance, business objectives, or regulatory obligations. Those decisions remain human responsibilities.

Generative AI excels at drafting, summarizing, organizing information, translating, and accelerating first passes. It cannot guarantee accuracy, reason from first principles, or validate its own output without external tools. Hallucinations occur because models optimize for likelihood, not correctness.

From Models to Agents

A basic generative model answers questions. An AI agent works toward goals. Agents combine a generative model with access to tools and systems, along with memory that tracks prior steps.

Agents operate in loops. They perceive the current state, reason about what to do next, act by calling tools or executing tasks, observe the results, and repeat. This allows them to analyze documents, pull data from systems, generate drafts, update records, and coordinate multi-step workflows.

Imagine an intake workflow where an agent reviews incoming requests, extracts key facts from attached documents, checks deadlines in a matter system, proposes task assignments, and drafts a response email. Each step relies on tools, memory, and structured decision-making layered on top of generative output.

As systems become more complex, agents specialize. One may focus on research, another on planning, another on execution. An orchestration layer coordinates their work. This mirrors how real teams operate and enables scalable automation without relying on a single general-purpose assistant.

This capability increases efficiency, but it also expands risk. Because agents can act across connected systems, teams must define permissions, escalation paths, and stopping points with care.
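One way to make "permissions, escalation paths, and stopping points" concrete is a checkpoint rule: routine actions proceed automatically, while anything above a risk threshold is held for human approval. The sketch below is illustrative; the threshold, risk scores, and action names are all invented for the example.

```python
# Sketch of a human-in-the-loop checkpoint. Each proposed action carries
# a risk score; anything above the threshold is held for human review
# instead of executing automatically. All values are invented.

RISK_THRESHOLD = 0.5

def run_with_checkpoints(actions, approved_by_human):
    completed, held_for_review = [], []
    for name, risk in actions:
        if risk > RISK_THRESHOLD and not approved_by_human(name):
            held_for_review.append(name)   # consequential: pause for a person
        else:
            completed.append(name)         # routine: proceed automatically
    return completed, held_for_review

proposed = [
    ("summarize_attachments", 0.1),
    ("update_matter_record", 0.4),
    ("send_external_email", 0.9),          # consequential: needs sign-off
]
done, held = run_with_checkpoints(proposed, approved_by_human=lambda a: False)
```

Tuning the threshold is the governance decision the article describes: set it too high and risky actions slip through; set it too low and every routine step queues for review.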
Multimodal Systems

Legal work rarely arrives as clean text. It comes as contracts, emails, screenshots, images, recordings, and video. Multimodal systems process all of these inputs together.

Each modality is encoded into numerical representations and fused into a shared internal space. From that combined context, the system can generate outputs in any modality. A model might analyze a contract, inspection photos, and voice notes at once, then produce a unified summary or draft.

Consider a compliance review that includes written policies, screenshots from internal systems, and recorded interviews. A multimodal system can ingest all of this material simultaneously and surface patterns that would otherwise require manual cross-referencing.

This capability mirrors how humans integrate information and allows AI systems to operate across the messy realities of legal workflows. It also raises important questions about data handling, retention, and review that teams must address explicitly.

What This Means for Legal Teams

The objective is not technical mastery. It is precision. Legal teams must understand where AI adds leverage and where it introduces risk. Drafting and summarization benefit from generative systems. Final review, risk tolerance, and accountability remain human responsibilities. Agents require clear constraints because they can act across connected systems. Multimodal intake reshapes investigations, compliance, and contract review by consolidating fragmented inputs.

A realistic pattern emerges: teams gain speed on early work, but outcomes improve only when they pair that speed with structured review. Without defined checkpoints, AI output drifts from organizational standards. Design workflows that assume human-in-the-loop review at every material decision point. Treat AI as an extension of professional judgment, not a substitute for it.

Practical Guidance

Ask focused questions when evaluating AI-enabled tools.
Where does the system rely on probabilistic output? Which steps require mandatory human validation? What tools and systems can agents access? How is multimodal data handled and retained? Who owns outcomes after implementation?

Pair technical evaluation with operational design. Assign responsibility for monitoring performance, updating workflows, and addressing failure modes. Measure outcomes, not just usage. Durable value does not come from demonstrations. It comes from workflow ownership, review discipline, clear accountability, and ongoing iteration.

Conclusion

Generative models create content. Agents plan and act. Multimodal systems unify messy inputs. Together, they define modern AI platforms. Understanding these mechanics sharpens use cases, clarifies guardrails, and improves risk conversations.

Technology can surface issues faster and reduce operational drag. It cannot decide what matters. AI should extend judgment, not replace it. That balance defines effective legal work.

Glossary

Generative AI: AI systems that create new content by predicting likely continuations based on patterns learned from data.

Large Language Model (LLM): A generative model trained on large volumes of text to produce human-like language output.

Embeddings: Internal numerical representations that capture meaning and context so models can work with language, images, or audio.

Attention: A mechanism that helps models determine which parts of input matter most at each step of generation.

Hallucination: Confidently generated output that is incorrect or unsupported, caused by probabilistic generation rather than fact checking.

AI Agent: Software that combines a generative model with goals, tools, and memory to perform multi-step tasks.

Agent Orchestration: Coordination of multiple specialized agents so each handles part of a larger workflow.

Multimodal Systems: AI systems that process text, documents, images, audio, and video together as one context.
Human-in-the-Loop: Workflow design where people review, approve, or guide AI outputs at key decision points.

Frequently Asked Questions

Does generative AI replace legal judgment? No. These systems accelerate drafting and analysis, but decisions about risk, strategy, and accountability remain human responsibilities.

Can AI-generated content be trusted as accurate? AI output should always be reviewed. Generative models optimize for plausibility, not correctness, which makes verification essential.

What is the difference between a chatbot and an AI agent? A chatbot responds to prompts. An agent can pursue goals, call tools, remember prior steps, and operate across systems.

Why do hallucinations happen? Models generate statistically likely responses rather than validated facts, especially when information is incomplete or ambiguous.

How do multimodal systems change legal workflows? They allow documents, images, recordings, and other inputs to be analyzed together, reducing manual cross-referencing and speeding early-stage work.

What controls should teams put in place? Define review checkpoints, limit agent permissions, assign workflow ownership, and measure outcomes rather than usage.

Where should teams start? Begin with low-risk use cases such as summarization or intake, establish review standards, and expand deliberately as governance matures.
Why Human Connection Beats Conference Badges in Legal Tech
At the heart of everything I do is helping people. This isn't just a tagline or a LinkedIn bio flourish. It's a fundamental truth that shapes how I approach every interaction, every conversation, and every opportunity in the legal technology ecosystem. Yet, as I reflect on the countless conferences, summits, and industry gatherings that populate our calendars, I find myself questioning whether we're truly connecting or merely collecting business cards in an increasingly digital age.

The Legal Tech Conference Challenge

We've all been there: standing in a cavernous convention center, badge dangling from a lanyard, coffee in hand, surrounded by thousands of fellow legal professionals. The energy is palpable, the panels are insightful, the technology demonstrations are impressive. But something essential feels missing.

How many times have you returned from a major conference with a stack of business cards, a bag full of vendor swag, and a nagging feeling that despite being surrounded by people for three days, you didn't form a single meaningful connection? The irony isn't lost on me. In an industry built on relationships and trust, we've created environments that often feel more like speed dating than relationship building.

The Human-First Perspective

When I speak about legal technology, I always approach it from a human-first perspective, framing discussions around user experience and meeting people where they are. Technology should enhance our ability to connect, not replace the fundamental human elements that make our profession meaningful. Yet the conference circuit sometimes feels like it's optimized for efficiency over empathy, for quantity over quality.

The best conversations I've had, the ones that led to genuine mentorship relationships, collaborative partnerships, and lasting friendships, rarely happened during the scheduled networking hour with 500 other attendees.
They happened over a quiet dinner with three colleagues, during an impromptu coffee meeting between sessions, or through follow-up conversations weeks after the event ended.

The True Cost of Constant Travel

Let's address the elephant in the room: the relentless conference schedule that many of us maintain. Weekly flights, hotel rooms that blur together, meals grabbed between sessions. This isn't sustainable, and more importantly, it's not conducive to the deep work and meaningful relationships that drive real innovation in legal tech.

Every day spent in transit is a day not spent:

Mentoring that law student who reached out seeking guidance
Having thoughtful conversations with team members
Writing and sharing insights that could help someone navigate their legal tech journey
Building genuine relationships within our own communities

I've learned that being selective about travel and conferences isn't about missing opportunities. It's about creating space for the connections that matter most.

Redefining Connection in Our Industry

So how do we move forward? How do we maintain the benefits of industry gatherings while fostering the genuine human connections that make our work meaningful?

1. Quality Over Quantity: Choose conferences that align with your values and where you can contribute meaningfully, rather than attending everything.

2. Create Intimate Spaces: Within large conferences, seek out or create smaller gatherings. Dinners, roundtables, walking meetings. These are where real conversations can happen.

3. Extend the Conversation: The most valuable connections often develop after the conference. Schedule follow-up calls, continue discussions over email, and invest in relationships beyond the event.

4. Leverage Technology Thoughtfully: Use virtual meetings and digital tools to maintain connections without constant travel, but remember they supplement, not replace, human interaction.

5. Practice Presence: When you do attend events, be fully present.
Put away the phone during conversations, listen actively, and engage authentically.

As legal technology continues to evolve at breakneck speed, the importance of human connection becomes more critical, not less. The most sophisticated AI, the most elegant software solution, the most revolutionary platform: none of these matter without people who understand how to implement them thoughtfully, who can translate their benefits in human terms, and who care about the individuals they're meant to serve.

My mission remains unchanged: to bridge the gap between the tech world and the legal world. But that bridge is built on human connections, one meaningful conversation at a time. Whether that conversation happens in a conference hall, over a video call, or during a mentoring session with a law student, what matters is that we approach it with intention, empathy, and a genuine desire to help.

Because at the end of the day, legal tech isn't about the technology. It's about the people it empowers to work better, live fuller lives, and serve their clients more effectively. And that's a mission worth traveling for, selectively and purposefully, while never losing sight of the human connections that make it all worthwhile.

Remember: The best investment you can make in your legal tech journey isn't in another conference ticket. It's in building genuine relationships with the people who will challenge, support, and inspire you along the way.
Intelligence Is All You Need
By Chad Atlas

The pace of progress in foundational AI is nothing short of explosive, and legal technology is feeling the impact in real time. In just the past month or so, Google's latest Gemini models jumped to the top of intelligence leaderboards for the first time. OpenAI released o3 and then o3-pro, its most advanced reasoning models yet, to a broader range of users. And Anthropic upgraded Claude from version 3.7 to 4.0. Advances in core AI capabilities are directly reshaping what is possible in legal tech. The speed of progress is now measured in weeks, not months.

This acceleration is not just hype. Recently, prominent legal tech startup Harvey announced it would integrate multiple foundation models from Anthropic and Google, explaining that "general foundation models have improved at baseline legal reasoning" so dramatically that optimization can now focus on task execution rather than baseline reasoning. Their pivot reflects a broader industry reality: the intelligence itself has become so capable that work once done by traditional engineering approaches is now handled by the models themselves.

For a sense of what this means, I recently tested one of these models on a complex antitrust law exam shared by a professor friend curious about the tech's capabilities. The model, as graded by the professor, earned at least a B+, possibly an A-, on a question that would challenge all but the most capable law students. Three prompts, less than 15 minutes. (My friend is not easily impressed; in this case, he was.) [1]

This validates something I've theorized for several years as a CLO and startup advisor (and written about before): the most significant advances in legal AI come from improvements in the underlying models, not from specialized wrappers or specific legal adaptations. Raw intelligence is what matters, and we are now seeing that play out in real time.
Yet most lawyers and legal leaders evaluating tech investments have not realized how quickly this shift is happening. Many are still focused on products, feature lists, and workflow demos, rather than the real driver: the intelligence powering it all.

Understanding the Architecture

So, what actually powers today's legal AI? Virtually every tool you see can be modeled as a simple three-layer architecture:

Layer 1: Intelligence. The foundation model (ChatGPT, Claude, Gemini) that does the actual reasoning, analysis, and text generation.

Layer 2: Engineering. The plumbing, piping, and orchestration that make the AI useful for specific legal tasks: retrieval systems that fetch documents, laws, and regulations; prompts that guide the AI's behavior; workflows that chain together multiple steps to execute actual work; and various connections to legal databases or internal document systems.

Layer 3: Application. The user interface: a web app, a Word plugin, or whatever makes the system accessible to the end user.

That's it. Intelligence at the bottom, plumbing in the middle, interface on top.

The Bitter Lesson: Why General Intelligence Wins

Here is the crucial part: that middle layer, the plumbing, has historically existed because the foundational intelligence had clear limits. The AI was like a smart lawyer without any access to law books or a computer. You could ask it questions, and it would respond from vague memories and experience, and that was that. It often made things up or got things wrong.

Still, it was smart, so legal tech companies built elaborate engineering around these models to compensate. Retrieval systems searched legal databases or documents and brought back snippets for the AI to review; workflow chains tried to mimic what lawyers do and allow the lonely AI lawyer to execute on projects. But this engineering was always a stopgap.
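The three layers compose naturally in code. In the sketch below, the intelligence layer is a stub standing in for a real foundation-model API, the engineering layer does naive keyword retrieval plus prompt assembly, and the application layer is simply the function a UI would invoke. All names and the mini document corpus are invented for illustration.

```python
# Layer 1: Intelligence. A stub standing in for a foundation-model call.
def intelligence(prompt):
    return f"ANALYSIS OF: {prompt}"

# Layer 2: Engineering. Naive keyword retrieval plus prompt assembly.
def engineering(question, corpus):
    keyword = question.split()[0].lower()
    context = [doc for doc in corpus if keyword in doc.lower()]
    prompt = f"Context: {context} | Question: {question}"
    return intelligence(prompt)

# Layer 3: Application. What a web app or Word plugin would invoke.
def application(question):
    corpus = [
        "Indemnification: Vendor shall indemnify Customer...",
        "Termination: Either party may terminate on 30 days' notice...",
    ]
    return engineering(question, corpus)

answer = application("Indemnification obligations?")
```

Even this toy shows why the middle layer is fragile: a one-word keyword match misses relevant documents the moment phrasing varies, which is exactly the weakness the article goes on to describe, and the kind of work stronger models increasingly absorb.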
The retrieval systems were primitive and often returned irrelevant results, missed context that matters, or broke down when matters took an unexpected turn. Every additional question to the AI added cost, so vendors cut corners to keep prices down and minimize the engineering and workflow complexity as much as possible. The demos looked slick, but the reality was fragile.

This pattern is not unique to law. AI researcher Rich Sutton's famous essay The Bitter Lesson described this trend precisely: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." Sutton observed that "AI researchers have often tried to build knowledge into their agents; this always helps in the short term, and is personally satisfying to the researcher, but in the long run it plateaus and even inhibits further progress, and breakthrough progress eventually arrives by an opposing approach based on scaling computation." We have seen this pattern play out in chess, image recognition, language translation, and now in legal tech.

What is happening now is that models like OpenAI's o3, Claude 4.0, and Google's Gemini 2.5 Pro are swallowing the stack, meaning the models themselves are now handling much of the retrieval, workflow, and even some application-level reasoning that used to require hand-coded intervention. Leading legal AI companies have quietly pivoted to this reality, focusing less on custom engineering and more on harnessing the raw, ever-improving power of the latest models. The general approach wins, again.

The DIY Question: When to Build vs. Buy

This creates an interesting dilemma for legal teams: How much should you rely on pre-built solutions versus working directly with the intelligence yourself?

My father was a perfectionist handyman who once spent many weekends meticulously working on the wood framing around our windows rather than hiring a contractor.
The results were superior, but the time investment was enormous, and one could argue that perfect became the enemy of good. Different people have different value preferences, different skill levels, and different tolerance for complexity. The same dynamic applies to legal AI.

If you are a lawyer who can find relevant cases and statutes, extract key facts, and clearly structure problems, you are already better at "wrapping" the intelligence than most hard-coded software solutions. Working directly with advanced models, e.g. prompting, iterating, and fact-checking, often delivers results faster, more transparently, and at lower cost than pricey legal tech platforms.

But not everyone wants to be their own handyman. There is legitimate value in platforms that handle infrastructure, route tasks to the best models, and provide peace of mind for privacy or compliance. For a busy legal team, a trustworthy solution can mean less IT overhead, better integration, and more time focused on what matters. The key is understanding what you are paying for.

Why Lawyering Is Still Different

Law remains different from other domains where AI has achieved breakthrough performance. Lawyers are not just looking for productivity; they need transparency, control, and the ability to interrogate every step. Most lawyers I know, when handed a template, playbook, or automated workflow, want to see exactly what it is and how it works, and to adapt it to their context. Accepting someone else's black box, especially in high-stakes matters, rarely feels right. (Indeed, we have an ethical duty to ensure we reasonably understand the issues and applicable law.)

This is why working directly with the most capable models may be optimal for many legal workflows. You retain control, can synthesize the best inputs, and understand the reasoning behind outputs. You can see the model's work, challenge its conclusions, and iterate until you are satisfied.
The interaction becomes a kind of productive intellectual dialogue rather than a passive consumption of pre-packaged results. If you do adopt a legal tech platform or workflow solution, make sure you know what is inside the box: whose judgment are you trusting? Whose templates, whose playbooks, whose risk tolerances? The transparency question is not just about understanding the technology—it is about understanding whose legal judgment is baked into the system. Proof in Practice: Where Wrappers Matter—and Where They Don’t Some wrappers and orchestration layers genuinely add value, especially as intelligence gets cheaper and more accessible. Software engineers have flocked to tools like Cursor, which offers a code editor built around AI. It isn’t just about the underlying model’s intelligence—Cursor’s workflow, search, and integration features make it easier for users to harness that intelligence effectively. OpenAI’s recent acquisition of such a development tool, Windsurf, for $3 billion, suggests that intelligence wrappers have real value. These “application layers” matter when they truly enable new kinds of productivity and collaboration. The same principle applies in legal. For example, Harvey’s shift toward task execution rather than baseline reasoning represents an intelligent adaptation to the new reality. There is real value in systems that understand legal context, maintain proper citation formats, integrate with existing workflows, and handle the mundane but critical details that practicing lawyers need. (Bureaucracy solutions in a box; yes, please.) But the bar for genuine value is getting higher. Templates, basic prompt libraries, and simple workflow automation have limited long-term differentiation when the underlying intelligence can handle these tasks directly. The Real Evaluation Question When evaluating a legal AI solution, ask yourself: what am I really paying for? Is there genuine value-add I cannot replicate?
Or am I paying for packaging around the same core intelligence I could access directly? Consider these criteria: ● Integration complexity: Does the solution handle genuinely difficult technical integration, or could you achieve similar results with direct access? ● Legal domain expertise: Are the prompts, workflows, and guardrails meaningfully better than what you could develop yourself? ● Transparency and control: Can you understand and modify the system’s behavior, or are you locked into someone else’s judgment? ● Cost structure: Are you paying a reasonable premium for convenience, or multiples above the underlying intelligence cost? The question isn’t whether wrappers are good or bad—it is whether you understand where the value comes from, what tradeoffs you are making, and how much transparency or control you're willing to give up for convenience. Looking Forward Rich Sutton’s “bitter lesson” from AI research is clear: as intelligence gets stronger and cheaper, custom engineering layers get swallowed up. But that does not mean all wrappers disappear, only that the bar for real value is getting higher. Legal tech companies that recognize this reality, like Harvey pivoting to task execution, are positioning themselves to add genuine value rather than just repackaging commodity intelligence. For legal teams, this means being more sophisticated about what you are buying. The most successful legal departments will likely combine direct use of frontier models for complex reasoning with specialized tools for specific workflows where the wrapper genuinely adds value. Intelligence is all you need, provided you know how to wield it and critically assess the value of everything built on top. About the Author Chad Atlas is Chief Legal & Ethics Officer at an AI-first fintech startup and advisor to early- to late-stage biotech companies.
He has over two decades of legal experience spanning federal clerkships, BigLaw litigation, and executive leadership roles at a clinical-stage biotech company. His philosophy and computer science background from Duke initially fueled his interest in the intersection of law and emerging technology. He recently launched No Vehicles in the Park, where he writes about legal AI, professional judgment, and the evolving legal landscape. [1] Since submitting this article for publication, professors at the University of Maryland Law School released a paper stating that o3 (the same model I used) got three A+s, one A-, two B+s, and a B on exams they tested it on. Link to the paper here.
Evaluating GenAI Solutions: What You Need to Know
Over the past two years, there has been significant interest in adopting generative AI tools in the legal industry. However, many organizations continue to face challenges in understanding how to procure, implement, and maximize the potential of these tools. This article seeks to address this gap by offering key insights and questions to consider when assessing GenAI solutions. Key areas covered include: Foundational Models: Understanding the underlying large language models (LLMs) used by providers is essential. Different models have varying strengths and weaknesses, and continuous evaluation is necessary to ensure optimal performance. Data Security: Ensuring that providers have robust security processes and certifications is vital. Questions about data storage, access controls, and third-party service providers should be addressed to maintain data integrity and security. Accuracy and Reliability: Evaluating the accuracy of GenAI tools is critical. Providers should supply benchmarks, case studies, and details on how they handle model drift and degradation. High accuracy translates to higher reliability and consistent performance. By addressing these key areas, organizations can make informed decisions and successfully integrate GenAI tools into their operations. When selecting a GenAI solution, the first step is having a clearly defined use case. AI models vary in their capabilities, strengths, and weaknesses, so understanding what you need the AI to accomplish ensures that you evaluate solutions effectively and choose one that aligns with your business goals. Foundational Models. The choice of foundational model is critical to get right because it serves as the core engine that determines the capabilities, limitations, and overall effectiveness of a generative AI solution. Listed below are some key questions you should ask GenAI legal tech providers when evaluating potential options.
Check if you’ll be locked in by the provider’s choice of model. Lock-in restricts your flexibility to swap between different foundational models should a better-performing one, or one that matches your needs more closely, emerge. What foundational model does the provider use? Understanding the underlying large language model(s) provides insight into the tool’s capabilities and potential limitations. Different foundational models such as OpenAI’s GPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude vary in terms of architecture, training data, and optimization strategies. The choice of model impacts accuracy, fluency, bias, contextual awareness, and multimodal capabilities (e.g., handling text, images, and code). How do they ensure they are using the best-performing LLM? The landscape of LLMs is rapidly evolving. Providers should have a robust process for continuous model evaluation to ensure they are leveraging the most effective model available. A strong evaluation framework should incorporate standardized performance benchmarks like MMLU (Massive Multitask Language Understanding), SuperGLUE, and HELM (Holistic Evaluation of Language Models) to measure the model’s accuracy, reasoning ability, and bias levels. Additionally, providers should conduct domain-specific testing if the AI is being used in industries like healthcare, finance, or legal applications, ensuring the model meets the necessary precision and any relevant compliance standards. How often are the models updated and retrained? Regular updates and retraining are crucial for maintaining an LLM that remains accurate, relevant, and aligned with an evolving knowledge base and end-user needs. Models can quickly become outdated as new facts, regulations, and industry trends emerge, making it essential for providers to have a structured retraining and updating cycle. Buyers should inquire about the frequency and methodology of these updates to ensure the model is continuously improving.
Some providers update their models on a fixed schedule, such as quarterly or annually, while others use a rolling update approach, where models are incrementally retrained with new data as it becomes available. Can you inject your own data into the pre-existing LLM to fine-tune the results? Customization may be necessary to align the LLM with specific business needs, industry requirements, or proprietary knowledge. The ability to fine-tune a pre-existing LLM using your own data can significantly enhance its relevance, accuracy, and effectiveness for specialized applications. Organizations should assess whether the provider supports fine-tuning, embedding domain-specific knowledge, or integrating external databases to tailor responses. Data Security. While the GenAI space is moving quickly, providers need to ensure that they're keeping your data secure at all times and that they have the correct protocols in place to deal with any potential breaches. As part of your evaluation, you'll need to be satisfied that the provider has the right security processes and certifications in place. Depending on your use case and the solution being considered, the key questions to ask the provider are listed below. Does the provider have security certifications? Security certifications are a crucial indicator of an AI provider’s commitment to data protection, data privacy, and compliance with industry standards. Buyers should look for recognized security frameworks such as ISO 27001, which ensures a robust information security management system, or SOC 2 (Service Organization Control 2), which evaluates how well a provider safeguards customer data in terms of security, availability, processing integrity, confidentiality, and privacy. Some startups might not have the right certifications in place. In that case, request penetration test results and ask how often testing, both internal and external, is carried out.
Where will your data be stored or hosted? Understanding where your data is stored and processed is critical for ensuring compliance with data residency, security, and regulatory requirements. Buyers should verify whether the AI provider offers flexible hosting options, such as on-premise deployment, private cloud, hybrid cloud, or specific regional data centers, to align with their internal policies and legal obligations. What are the access controls and authentication options? Robust access controls and authentication mechanisms are essential to ensure that only authorized personnel can interact with AI systems, particularly when dealing with sensitive data, proprietary knowledge, or regulated industries. Buyers should evaluate whether the provider offers Role-Based Access Control (RBAC), which allows administrators to restrict access based on job function, seniority, department, or geographic location. For example, executives may have full system access, while frontline employees may have read-only permissions, and IT administrators may have advanced configuration rights. Does the provider rely on third-party service providers to deliver their service? In most cases, AI providers rely on third-party service providers for various aspects of their infrastructure, including cloud hosting, data storage, API integrations, and security. It’s important to understand who these third parties are, what role they play, and how they handle your data to ensure compliance with security and privacy requirements. Additionally, businesses should clarify if any subcontractors have access to sensitive or proprietary information and what measures are in place to prevent data misuse. Accuracy and Reliability. When evaluating generative AI tools, understanding the accuracy of the model is crucial. The quality of the output is directly dependent on the accuracy of the model. High accuracy translates to higher reliability.
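Before moving on to accuracy: the RBAC idea described above can be made concrete in a few lines. The sketch below is purely illustrative; the role and permission names are hypothetical and not drawn from any particular vendor's product.

```python
# Minimal Role-Based Access Control (RBAC) sketch. Each role maps to a set
# of permissions; a user may perform an action only if one of their roles
# grants it. Role and permission names are purely illustrative.

ROLE_PERMISSIONS = {
    "executive": {"read", "write", "export", "configure"},  # full system access
    "frontline": {"read"},                                  # read-only permissions
    "it_admin":  {"read", "configure"},                     # advanced configuration rights
}

def is_allowed(user_roles, action):
    """Return True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Real deployments layer this with authentication (SSO, MFA) and often add attributes such as department or geography, but the core question for a buyer is the same: can an administrator express and audit rules of this shape?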
Reliability means the solution consistently provides accurate and dependable results across various scenarios and over time. What metrics do you use to measure the accuracy of your models? When evaluating an AI provider, it’s crucial to understand how they measure model accuracy and which metrics they prioritize in relation to your specific use case. Common benchmarks include Perplexity (PPL) for predictive accuracy, BLEU and ROUGE for translation and summarization, Exact Match (EM) and F1 Score for classification and retrieval tasks, and TruthfulQA/FEVER for factual accuracy. Note that most benchmarks have some limitations. Ask about false positive rates and whether accuracy can be fine-tuned for industry-specific needs. Additionally, assess if and how the provider monitors real-world performance through human-in-the-loop validation, A/B testing, and live feedback loops to ensure ongoing improvements. What processes are in place to monitor and maintain the model's accuracy over time? Over time, LLMs can experience model drift and degradation, where their responses become less accurate, biased, or misaligned with current data trends. This happens because language evolves, facts change, and business needs shift. To ensure long-term reliability, ask the provider what monitoring and maintenance strategies they use to track, evaluate, and update the model’s performance. Without proper monitoring and maintenance, AI models can become outdated and unreliable. Providers that implement proactive tracking, continuous fine-tuning, and real-world performance evaluations ensure that the model remains accurate, unbiased, and aligned with evolving business needs. Can the vendor provide details on the performance of their solution in real-world scenarios? Evaluating an AI provider based on real-world performance is essential to understanding how their solution functions beyond controlled environments and benchmark tests.
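For intuition on the Exact Match and F1 metrics mentioned above, both can be computed in a few lines. This is a simplified, token-level sketch in the style used by question-answering benchmark scoring, not any provider's actual evaluation code.

```python
from collections import Counter

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    shared = sum(overlap.values())
    if shared == 0:
        return 0.0
    precision = shared / len(pred_tokens)  # how much of the answer is on-target
    recall = shared / len(ref_tokens)      # how much of the truth was captured
    return 2 * precision * recall / (precision + recall)
```

The useful buyer takeaway: EM is strict (any extra word scores zero), while F1 gives partial credit, so ask which one a vendor's headline accuracy number actually is.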
Ask the provider for case studies, references, and deployment examples that demonstrate how their solution performs in organizations of similar size, industry, and complexity as yours. How do you evaluate the solution’s performance on new data? This question suggests an educated buyer who is thinking beyond their current use case and where and how to deploy the solution more widely. For businesses looking to scale adoption across multiple use cases, the solution must seamlessly handle evolving datasets without frequent or laborious manual intervention. Providers with robust evaluation strategies, automated monitoring, and lightweight adaptation options ensure that the AI remains accurate, adaptable, and future-proof, reducing the need for constant retraining while continually maintaining high performance. Generative AI tools offer immense potential for organizations ready to harness their power. By clearly defining use cases, understanding foundational models, ensuring robust data security, and evaluating accuracy and reliability, businesses can make smart, informed decisions. Staying proactive and informed will be key to leveraging these advanced technologies effectively and avoiding the dreaded Shiny New Toy Syndrome. Sharan Kaur – Go-To-Market (GTM) Expert | Legal Tech Strategist | Growth Leader Sharan Kaur is a seasoned growth and sales leader with a proven track record of designing and executing global go-to-market (GTM) strategies for startups, scaleups, and legal tech providers. With a background as a corporate litigation lawyer and an Executive MBA, Sharan specializes in driving revenue growth, leading high-performance teams, and implementing scalable solutions for long-term success. Her expertise lies in managing full sales cycles, building strategic partnerships, and consulting post-deployment to ensure maximum value realization.
Sharan works closely with law firms, corporate legal teams, and legal tech providers to deliver custom solutions, optimize workflows, and enhance user adoption of innovative technologies. Currently, as a Digital Transformation Consultant at Legal Solutions Consulting, Sharan bridges the gap between legal teams and generative AI solutions, ensuring seamless adoption and long-term value realization. Her cross-functional leadership experience and deep understanding of legal technology adoption make her a trusted advisor for businesses seeking sustainable growth and operational excellence.
Overruled by Algorithms: Embracing AI in Legal Practice
We are now at the stage of the AI revolution where even those of us living under a proverbial rock have heard of generative AI. The idea that a particular release version of a large language model could be a major global media event would have been laughable until very recently, but AI has moved from research labs to our daily lives with remarkable speed. In the legal space, we are constantly bombarded with marketing campaigns about productivity gains and improvements in the quality of our work. The promises vary, but include claims to supercharge this or that, to make a particular task instant or effortless, or to deliver game-changing efficiency gains across the board. This may all sound very odd coming from a person who has just joined a generative AI startup, but please bear with me! I was called to the Bar almost twenty years ago. Ten years ago, I drifted away from private practice and, after the traditional mid-life crisis, towards technology. Like many in the space, tech represents the intersection of my deep professional skillset and a slight tendency towards neo-mania, with a twist of gadgetry obsession. I have lost count of the number of shiny gadgets that I have loved (and then consigned to landfill, sorry Mother Earth) over the years. While the urge to buy soon-to-be obsolescent electronic junk has faded as I age, my love of legal tech has only spiraled, and—to be frank—is now beyond all semblance of control. [1] I first thought that legal tech would also have the beneficial side effect of allowing me to do less work and make more money, but since I drifted away from the wig and gown, I have been disabused of the rather naïve notion that this is a simple matter. [2] Along my journey, I have been fortunate enough to have been involved in global transformation projects alongside some of the biggest legal teams in the world.
While I am not yet as grizzled as some of the towering figures in the ops and transformation space, I am at least tipping into the category of those who have their share of war stories, and when Colin asked me to share what he very generously described as “insights” or “thoughts”, I jumped at the chance. Thank you, Colin. Sidenote: if you want to see what generosity looks like, follow Colin on LinkedIn, or better yet, catch him at an event. So, after a fabulously verbose introduction, the question we’ve all been asking: what on earth is going on, and what on earth can I actually DO? What Does AI Eat For Breakfast? We’ve all heard the phrase “culture eats strategy for breakfast”. Sometimes it’s said by someone looking for an excuse for lackluster planning, sometimes by a charismatic narcissist looking to paper over the behavioral cracks in their organization through which toxicity is seeping and oozing, and much more rarely, it’s said by a true leader. [3] But if culture eats strategy for breakfast, what does AI eat? In my opinion, there is a non-zero chance that AI will eat culture. The technology is just so incredible that I believe—absent a major or indeed global EU-style interventionist push—our existing culture will be eradicated. I mean that in the broadest possible, non-corporate sense. This is not the cloud revolution, changing enterprise SaaS purchasing habits, and creating a new segment in an existing market. This is something else entirely. It will impact every single facet of life, and has the potential to completely up-end the assumptions on which we have built our professional world. To avoid worsening my chronic lack of brevity, I will deliberately leave this wide-ranging discussion, the possibility of AGI, the potential for an AI singularity, and other such topics for another time, potentially when I have a drink in hand. My focus here, then, will be on legal culture exclusively.
But I’m supposed to be talking about AI strategy, so why am I starting with culture? The extent to which we are merely products of our environment is debatable, but AI strategy considerations—and indeed any strategy considerations—should in my view start with an analysis of culture. Corporate culture is a vast topic, which I am probably under-qualified to discuss. Legal culture is a little closer to my wheelhouse, but even that I would prefer to leave to the experts. Check out Charlotte Smith and her writing, for example. However, I have been asked to address the issue of AI in legal teams on sufficiently numerous occasions that I will accept if not the label of “expert” then at least the label of “non-moron”. Lawyers are not famously convivial or congenial. They are a more diverse group than stereotypes allow, but qualifying as a lawyer is to the right-hand side of the bell curve in terms of difficulty in most jurisdictions, and social status is similarly located. Type-A personalities abound, and parts of Big Law culture can be grind-centered. I don’t want to toss the baby out with the bathwater, as I have a deep love for the profession, and I am very proud of the time I spent prosecuting. I should also note that I am no less proud of the time I spent representing wealthy corporations and nation states in international arbitrations. The law has been my passion for a long time, and I like to think I can make a case for any aspect of it. Except trusts and estates. You folks are just straight up weird. So, to return to our generalizations, we have a bunch of smart, driven, socially and professionally conservative knowledge professionals, faced with a novel technology that has a substantial chance of being better than all humans at reasoning and arguing on a timeline measured in months or maybe years, not decades. What should we expect? Fear. Lots of fear.
When I show teams what AI can do, I make a joke that it takes thirty minutes to configure and deploy the instance, thirty days for associates to get up to speed, and then three years of therapy for the partners to get over it. They’ve just watched AI chew through ten thousand pages of loan agreements in ten minutes, smashing out verifiable data, and building an excellent first draft due diligence report. What do they see? Oftentimes, they just see billables evaporating. Getting to the top is hard. Really hard. When you get there, it’s your turn to get rich, frankly. You’ve earned your time in the sun, it’s your turn to hold a bucket underneath the money faucet, and you probably don’t want to hear that “everything’s different now, Jim”. When their turn comes, some people tip into more extreme immobilism, while some just become a shade more conservative. It takes a very special and courageous person indeed to see something new, recognize the potential, and immediately set to work on cannibalizing the business they just spent thirty years building. They are putting down the bucket and going to look for a way to tap the water main. There are too many to list, but early adopters of legal AI who are spending their very hard-earned cash are my professional heroes — even if they spend it with my competition! Strategy Building I’ve said this phrase, or a variation thereof, to rooms full of GCs or law firm partners more times than I can count: “Who you are determines where you are.” There’s a bit of elaboration, but that’s the punchline. Going on vacation? Which destination? Why? What do you want to do? What do you prefer? Who do you like to hang out with? Noisy bars or quiet cafés? I have a friend who likes horse-trekking vacations in Mongolia. Another only goes to party destinations. Personally, I want a resort with nice weather and good food, a moderate distance from my house, fun stuff for the kids a must.
We are different people and that determines where we end up. Assuming you’re the leader, there’s hopefully a fit between who you are as a person, your values, and those of the organization you serve. If not, you probably don’t need to be told to dust off the CV and move as soon as you can. In the legal industry, this is the “type” of team you are. White-shoe firm with impeccable credentials and terrifying rates? Aggressive personal injury team with massive billboards? Dedicated local courthouse solo-warrior who hung her shingle in ’76 and never looked back? In-house teams have the same question. Are you building for speed? Industrializing standard paper in a B2C industry with a website click-wrap agreement? Negotiating three agreements a year in a complex geopolitical context? Struggling to maintain alignment in your database of one million product codes? Somewhere in between? You can’t hope to nail AI strategy if you haven’t got this straight. If you can point to a written and maintained version of your company’s values, a written legal team mission statement, and you can see tangible evidence of them in your daily professional life, that’s a good sign. If you need help with this, I’ll declare a conflict of interest and recommend that you call Emilie Calame, my former boss (and long-suffering wife). Once you’ve figured out who you are, and you’re aligned with the organization that you serve, you need a destination. In-house teams may well have some clearly communicated corporate targets, OKRs and so on. Private practice targets tend to be equally clear and mainly financial. Bring that down to a more granular level and build a team target. Then, take a look around. What’s your team like? How big is it? Average age? Willingness to experience discomfort? Adaptability to change? Moving a fifty-lawyer team isn’t the same ask as moving a three-lawyer team. Second part of the stock take: what process and tech do we already have? Microsoft org? 
Check for Power Automate licenses. Notion teams can build some pretty slick stuff with middleware (Zapier, Make, Airtable, Bubble, etc.). Values, alignment, team status, destination, available resources for the journey. Now comes the fun bit. What do you need to do, to get where you want to go? For some people, AI is a solution looking for a problem. Don’t be that person. If your problems include handling large volumes of unstructured data, extraction, analysis, markups, contract negotiation, document comparisons and so on, then there is undoubtedly a game-changing level of leverage an LLM away. Step one, then: identify your current tasks. What are you doing on a regular basis? What do those tasks look like, in detail? How many person-hours a month, and which colleagues handle them? Fair warning: this is much, much harder than it sounds. Once you have a clear view of processes, run the “Five Whys” and dig into the underlying motivation. See if there’s something you can ditch, or handle in another way, or just optimize out of existence. Example: contract negotiations. Revisit your standard paper, do some deal post-mortems, and if you’re drowning in red ink on each deal make sure you know why, and then try to eliminate the problem. Eliminate the task if possible, optimize the survivors, then try to use technology to automate all of the optimized flows you have left. In that order. Step two: among the tasks you’ve identified as rock-solid must-haves in your work life across the next year or so, which of them involve large volumes of data, repetitive work, similar documents, and other such “AI-tells”? A good rule of thumb is “AI is not for everything”. Again, this might seem strange from an AI-evangelizing professional from an AI company who has tied his career to the future of this technology, but I hope you can see that this is something to take seriously.
If your organization needs to push tens of thousands of SKUs into order forms from a CRM, and then run this data into an ERP system, performing complex math along the way, you probably want some integrations that pipe the data end-to-end, and not the probabilistic miracle machine that is an LLM. If, on the other hand, you have an increasing volume of work that requires you to work with large volumes of unstructured data, analyzing, extracting and generating written work, your life is going to change very quickly. Step three: preparation. Let’s assume that you’ve identified a process that is unavoidable, optimized, and not automatable. In an in-house team, this might be contract negotiation, IP licence audit, or an employment contract audit. In a law firm this might be anything from an M&A review, first pass reviews of written submissions from the other side, reviewing docs from a client, or even client onboarding. Figure out how much this process is costing you. Anything under 10k a year is very unlikely to be on your radar. Anything under 100k might not be worth it. Touching business critical processes means risk, and it means decision-maker time. Price the risk as best you can, price the time it takes for your leaders to provide input, and then add the cost of change. Lost productivity, the valley of despair, and the cost of the tool itself (plus assistance with the change) will all add up very quickly. Get this enormous number to the front of your mind and add a safety margin. Did you include project planning and general bandwidth? What about the extra business this shiny new toy might bring in? How safe are we feeling about these numbers? Step four: project launch. We have identified something that AI can do, the fixing of which will generate not only a positive ROI, but one so large that once all other factors are taken into consideration, it meets the IRR criteria required by whichever body makes such decisions.
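The pricing exercise in step three is ordinary arithmetic, and a quick sketch makes the “enormous number” concrete. Every figure below is a hypothetical placeholder; substitute your own estimates.

```python
# Back-of-envelope cost-of-change and ROI check for an AI project.
# All numbers are hypothetical placeholders, not benchmarks.

def project_cost(tool_cost, change_assistance, leader_hours, leader_rate,
                 lost_productivity, safety_margin=0.25):
    """Fully loaded first-year cost of change, with a safety margin on top."""
    base = (tool_cost + change_assistance
            + leader_hours * leader_rate   # decision-maker time has a price too
            + lost_productivity)           # the "valley of despair"
    return base * (1 + safety_margin)

def simple_roi(annual_savings, total_cost):
    """First-year ROI as a fraction: (savings - cost) / cost."""
    return (annual_savings - total_cost) / total_cost

# Illustrative run with invented figures:
cost = project_cost(tool_cost=60_000, change_assistance=20_000,
                    leader_hours=80, leader_rate=500,
                    lost_productivity=30_000)
roi = simple_roi(annual_savings=250_000, total_cost=cost)
```

Notice how quickly a 60k license becomes a six-figure commitment once leader time, lost productivity, and a safety margin are priced in; that is the number the IRR decision should be made against.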
The decision is made, your organization commits to action, and the fun begins! Scoping, RFIs, RFPs, POCs, pilot phases, roll outs and more. If you’ve got this far, building a culture of continual improvement shouldn’t be too taxing, so let’s say Step Four rolls into eternity… Business As Usual? These basics should give you a framework that allows you to identify opportunities in a given team at a given time, but it’s very much a view from the trenches. In my opinion, this kind of work can only deliver exceptional results when it is performed by someone with an excellent grasp of the larger strategic dynamics at play in the market. What’s the 30,000 foot view? The nature of innovation is that it is not at all “business as usual”. It is a technology that changes things to such an extent that entire sections of the economy will die out. The whale oil industry, horse-drawn cabs, coal mining, human translators. They were all replaced in large part by machines. Are lawyers next? No. Caveat: the market for legal services baffles me. I see constant rate rises, above inflation increases across the board, almost constant complaining from in-house teams, ever-increasing numbers of lawyers qualifying to practice, and ever-increasing volumes of work being performed. Supply and demand don’t seem to be particularly well-correlated, in other words. To make matters better/worse depending on whether you’re buying or selling, buyers appear to be almost entirely insensitive to price. Two-and-a-half predictions then. The half prediction is that given the economic incentives and social status involved, I don’t see the pull of the profession diminishing soon. That will help drive the first “real” prediction. Lawyers won’t disappear. More lawyers, better tooling, bigger addressable market. Jevons Paradox will play out in full, and as efficiency rises, overall revenues that flow to the sector will increase, not decrease. Second “real” prediction.
In no team does the advent of AI mean business as usual. It is orders of magnitude better at very particular tasks, and this comfort zone is expanding rapidly. I believe that in some industries, a massive proportion of tasks are already well within the AI comfort zone. I include the legal industry here. Over time, tasks we traditionally considered reserved for human experts will be taken over by AI. Economics would suggest that the higher the proportion of such tasks in an industry, and the higher their cost to the wider economy, the more capital will rush to address this. My economic ignorance aside, didn't a legal AI team out of the US just raise 300 million bucks?

Leaders in such industries have less time than anyone else to react. In fact, reacting is not enough. They must anticipate. This involves risk, and fortunately for us lawyers, we excel at risk analysis.

What does this mean in very real terms? Get your hands dirty. Understand what an LLM is, what it does, what it can't do. Understand what training data is, and what fine-tuning can and can't do for model performance. Test things. Get an idea of what compromises might be made in the deployment of a system. Speed? Security? User-friendly interface? Make smart supply chain decisions. Is your vendor model-agnostic? US-based? What about open source? What about self-hosting? The answers to these questions vary across time and across industries. I don't have the answers for you, but just asking the questions will put you on the right-hand side of the bell curve when it comes to AI strategy. Good luck.

P.S. — as a bonus, here is a list of what my priorities would be as a GC or law firm partner:

GCs:
1. Immediately draft an AI use policy for employees
2. Draft an AI supplier policy, with a particular eye on data reuse
3. Go for low-hanging fruit (unstructured contract data, most likely)
4. Re-invest every minute saved in more AI work. Goal: a lean department that does nothing but strategize and handle BAU outliers.
Law Firm Partners:
1. AI vulnerability assessment: which parts of my practice are AI-tractable, and which are billed on a time basis? Which part of the market am I in? Mass-market? Leave ASAP. You are fungible. Pick a specialism and differentiate if possible. Mid-market? AI price competition will be painful. Find a segment and see if you can build a super-reliable, money-spinning use case where you can secure first-mover advantage. Use the extra money to try to keep your lead or move up-market. Elite? Secure the leading specialists. Knowledge capitalization is coming fast. If your model is sufficiently agile, fight harder for fewer elite hires.
2. Invest in AI, and reinvest each hour saved in improving the things clients love most: accessibility, face time, updates, those spontaneous meetings or lunches that overrun but where you figure out really important stuff together.
3. In parallel, consider pushing the parts of my work that are AI-tractable towards fixed fees.

Goal: a firm with impeccable credentials and brand, top-tier work, "cash-cow" AI-powered workstreams, massive margins, unrivalled client care, strong BD and growth.

Jonathan Williams is a recovering litigator and arbitration practitioner. He began his career as a prosecutor before switching to international arbitration and moving to Paris. A decade ago, he slid across into technology, where he has developed deep expertise in solving the biggest problems for the biggest legal teams. Innovation, technology, change management, strategic advisory work and generative AI have taken up a substantial portion of his professional life. The last five years he has spent working for Calame, the advisory team founded by his wife, Emilie Calame. He joined Legora at the end of 2024, where he is heading up its operations in France. He is based in Paris but travels extensively.

[1] By the way, that's a deliberate em-dash, not an OpenAI one. I'm probably not using them correctly, but it is me that's using them.
[2] As you may be able to tell, I have not, however, been disabused of the notion that sentence length or the excessive use of subordinate clauses is a proxy for intelligence. See also: excessive use of footnotes. Lawyers love footnotes.

[3] The ratio I've encountered in my career thus far is approximately 70:29:1. Better than hearing a leader say it is watching a leader embody it. I've been lucky enough to see this at Calame and with my current employer.
The Shape of Things to Come (Our Fearless Prediction)
In an upcoming article, we consider how much downward pressure GenAI will exert on the use of billable hours as a proxy for value. If technology can adeptly handle the more mundane tasks in a lawyer's day, why will clients be inclined to pay humans to take more time to do the same work? And if clients refuse to pay for humans to handle routine work, what will that mean for a law firm's bottom line?

We've heard all of the fears about GenAI's use in the practice of law. Some of those fears are important to resolve—such as confidentiality, bias, reliability—and some are less so, as one of us has pointed out. Sure, even the best technology is error-prone, but so are humans. [1] And computers, unlike humans, don't get bored, tired, or inconsistent when performing repetitive tasks.

Imagine a world in which a law firm has figured out a way to use GenAI to do simple tasks quickly and well. If ChatGPT can pass a bar exam, then any well-designed GenAI program can prepare a credible first draft of a pro hac vice motion; it can analyze a contract; it can draft a brief. [2] Even way back in 2018, LawGeex demonstrated the superior work of its technology as compared to humans in issue-spotting clauses in NDAs. And GenAI keeps improving at a breathtaking pace.

So consider different strategic paths taken by two hypothetical firms, imaginatively named Law Firm A and Law Firm B. Law Firm A has decided that there are certain tasks that its lawyers do that can be given to GenAI first, with the lawyers then reviewing the resulting draft. The good news is that these lawyers can now be deployed to do tasks that GenAI can't do—their time has been freed up for that more interesting work. The bad news is that the work that used to be billed out by the hour is completed in seconds through automation. Isn't that bad for the law firm? We don't think so.
Now Law Firm A has two types of income streams: a commoditized income stream generated first by GenAI and then revised by humans, and a more bespoke income stream for things that only lawyers can do. (In our article, we suggest that the bespoke work, if it's billed by the hour, can now command a higher billable rate than before, because experts are focusing on the tricky, novel issues.) With its freed-up time, in addition to working on bespoke matters, Law Firm A can find ways to bring junior lawyers up to speed the way that both of us were trained: by watching and learning from more senior lawyers in real time. Its commoditized work will likely be monitored by senior associates, with bespoke work done by both senior associates and by partners. In a world in which clients don't want to pay for first- and second-year lawyers to be trained "on their dime," the firm can now afford more in-depth mentoring, keeping itself sustainable.

What about Law Firm B, which eschews GenAI and insists that all but the most mundane work has to be done by humans, for quality control reasons? We think that Law Firm B runs the risk of becoming obsolete. When a Law Firm B client can do a first draft of something internally with or without the assistance of GenAI, why would it want to pay for Law Firm B's junior associates to take time doing the same first draft? Law Firm B may find itself losing clients to Law Firm A, which is handling client matters more efficiently. It may also find itself losing associates to Law Firm A.

Our article contemplates a world in which the pyramid model, built on the premise of many junior lawyers doing billable work, may disappear in light of a more efficiently shaped economic model. Maybe that model is more of a cylinder, streamlined to use fewer lawyers in total because some of its junior ranks have been replaced with GenAI. Maybe the model morphs into a diamond, with more senior associates and fewer partners and junior associates.
Or maybe the pyramid becomes a starfish, with a core of central support and different "arms" using GenAI either more or less, depending on the type of practice. There are many possible shapes, but we believe that the pyramid, as we have known it for decades, will be the least sustainable for most practices. Ultimately, law firms will have to grapple with the idea that the billable hour is not value itself but merely a proxy for value. We believe that the firms that find a better way to capture the value-add of humans to drafts initially produced by GenAI will be the firms that survive and thrive.

[1] This point is where the other one of us wants to refer you to the speech in Top Gun: Maverick about drones taking over for test pilots.

[2] We're just citing to some of the great programs out there. There are many such great programs.

Nancy B. Rapoport is a UNLV Distinguished Professor, the Garman Turner Gordon Professor of Law at the William S. Boyd School of Law, University of Nevada, Las Vegas, and an Affiliate Professor of Business Law and Ethics in the Lee Business School at UNLV. After receiving her B.A., summa cum laude, from Rice University in 1982 and her J.D. from Stanford Law School in 1985, she clerked for the Honorable Joseph T. Sneed III on the United States Court of Appeals for the Ninth Circuit and then practiced law (primarily bankruptcy law) with Morrison & Foerster in San Francisco from 1986-1991. She started her academic career at The Ohio State University College of Law, served in three deanships, one stint as Acting Provost (UNLV), one stint as Acting CFO (also UNLV), and one stint as Special Counsel to the President of UNLV. In 2022, UNLV's Alumni Association named her the Outstanding Faculty Member of the Year.
Boyd law students have honored her three times: she tied (with Professor Jean Sternlight) for "Faculty Member of the Year" in 2024; she was named "Faculty Member of the Year" (and faculty commencement speaker) in 2021; and she was named "Dean of the Year" by Boyd law students in 2013. Her specialties are bankruptcy ethics, ethics in governance, law firm behavior, artificial intelligence and the law, and the depiction of lawyers in popular culture. She has served as the Secretary of the Board of Directors of the National Museum of Organized Crime and Law Enforcement (the Mob Museum) and currently serves as a Trustee of Claremont Graduate University and the Chair of its Audit and Risk Management Committee. She is also a Fellow of the American Bar Foundation and a Fellow of the American College of Bankruptcy. In 2017, she received the Commercial Law League of America's Lawrence P. King Award for Excellence in Bankruptcy, and in 2018, she was one of the recipients of the NAACP Legacy Builder Awards (Las Vegas Branch #1111). She has served as the fee examiner or as chair of the fee review committee in such large bankruptcy cases as Zetta Jet, Toys R Us, Caesars, Station Casinos, Pilgrim's Pride, and Mirant. She is serving as the President of UNLV's Chapter 100 of Phi Kappa Phi from 2024-2025.

Joseph R. Tiano Jr., Esq. is Founder and Chief Executive Officer at Legal Decoder. After practicing law for nearly 20 years, Joe founded Legal Decoder because he saw that clients lacked the analytic tools and data to effectively price and manage the cost of legal services delivered by outside counsel. Joe set out to build an intelligent, data-driven technology company that would revolutionize the way that legal services from outside counsel are priced and economically evaluated.
Legal Decoder's data analytics technology is used in law firms of all sizes, from AmLaw 50 firms to boutiques; in Fortune 500 legal departments; and in major Chapter 11 bankruptcy cases (PG&E, Purdue Pharma, Toys R Us and others). Joe is a prolific author, having (co-)authored nine law review articles published in scholarly journals. In addition, he has written articles for countless blogs and other online media on substantive legal issues and the legal industry in general. He regularly presents at CLEs and other seminars and courses on topics ranging from artificial intelligence, LegalTech, and legal data analytics to legal ethics and legal malpractice. He is also an Adjunct Professor of Law at the Arizona State University Sandra Day O'Connor Law School. Previously, Joe was a Partner at Pillsbury Winthrop Shaw Pittman, LLP and Thelen LLP, where he grew and managed all aspects of a multi-million-dollar cross-border finance practice. Entrepreneurship has run through Joe's veins since his early days as a venture capital lawyer representing transformative technology companies, like Blackboard Inc., and many of the outgrowths of Blackboard (WeddingWire/The Knot, Presidium, Starfish Retention Solutions and others). Joe graduated from Georgetown University in 1992 with a Bachelor of Science Degree in Business Administration and received his J.D. from the University of Pittsburgh School of Law in 1995. Joe is a native Washingtonian who currently lives in Scottsdale, Arizona with his wife, Meredith, and their two boys, Gabriel and John-Paul. During the rare moments when he is not working, Joe can be found taking his sons on hikes, watching their extracurricular activities and helping Meredith implement her design creations.
Meet Your New (Legal) Associate: Tireless, Proactive, and Terrible at Office Politics
Part 1: Understanding the Basics

Imagine walking into your office on a Monday morning, coffee in hand, to find that while you were away, a new colleague has been quietly revolutionizing how work gets done. This colleague never sleeps, never complains about the office temperature, and has processed more documents than your entire team typically handles in a month. Welcome to the world of AI agents - autonomous systems that represent the next evolution in artificial intelligence technology.

To understand why AI agents matter, we need to first understand how they differ from the AI tools you might already be familiar with. Traditional AI systems, often called "narrow AI," are like highly specialized consultants - they excel at specific tasks but stay strictly within their defined boundaries. Think of them as the office specialists: one handles document review, another manages calendar scheduling, and a third might focus on data analysis.

AI agents are more like proactive general managers. They can understand high-level goals, break them down into smaller tasks, and autonomously work toward meeting those goals. This might sound convenient - and it often is - but it also introduces new complexities and challenges we need to understand.

Part 2: The Technical Foundation

Traditional AI systems often treat each interaction as a fresh start - imagine having to reintroduce yourself to a colleague every morning. AI agents, however, use memory architectures that "chunk" related information and "chain" it across steps. This lets them maintain context across interactions and connect related pieces of information.
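As a toy illustration of that kind of context carry-over, here is a minimal agent loop in Python. The class, the stand-in "model" function, and the intake workflow steps are all invented for illustration; no vendor's actual API looks like this:

```python
# Toy sketch of an agent loop that carries context between steps.
# All names and the fake "model" are illustrative, not a real API.

class AgentMemory:
    """Retains events across steps so each decision builds on the last."""
    def __init__(self):
        self.events = []          # chronological log of observations

    def remember(self, event: str):
        self.events.append(event)

    def context(self) -> str:
        return " | ".join(self.events)

def fake_model(goal: str, context: str) -> str:
    """Stand-in for an LLM call: picks the next step given the context."""
    if "classified" not in context:
        return "classify intake request"
    if "deadline" not in context:
        return "validate deadline against matter system"
    return "done"

def run_agent(goal: str) -> list:
    memory = AgentMemory()
    actions = []
    while True:
        step = fake_model(goal, memory.context())
        if step == "done":
            break
        actions.append(step)
        # record the outcome so the next decision can see it
        memory.remember(f"{step} -> classified" if "classify" in step
                        else f"{step} -> deadline ok")
    return actions

print(run_agent("process contract intake"))
```

The structure is the point: each step's outcome is written to memory, and the next decision is made against that accumulated context rather than a fresh prompt.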
The practical implications of this memory system include:
- Maintaining conversation context across multiple sessions
- Building understanding of ongoing projects and relationships
- Learning from past interactions to improve future performance
- Creating connections between seemingly unrelated pieces of information

If memory systems are the foundation, entitlement frameworks are the guardrails that keep AI agents operating within boundaries. This is crucial because AI agents are designed to take initiative and act autonomously. However, recent experiments have shown these systems might interpret their goals in unexpected ways.

The third important part is the ability to interact with various software tools and systems. Modern AI agents can connect with multiple platforms simultaneously, letting them coordinate complex actions across different systems. This capability makes them powerful but also increases the potential for unexpected behavior.

Part 3: Real-World Applications and Their Implications

In legal practice, AI agents are showing capabilities that go far beyond traditional document review systems. While earlier AI tools could search for specific terms or clauses, modern AI agents can understand complex legal concepts in context and make sophisticated connections across entire document collections.

Consider how an experienced attorney reviews a contract. They don't just identify standard clauses; they understand how different provisions interact, spot potential conflicts with existing agreements, and recognize implications for various business scenarios. Modern AI agents are demonstrating similar capabilities.
For example, when reviewing a merger agreement, an agent might:

Understanding Context and Implications:
- Identify change-of-control provisions and understand their implications across the entire contract portfolio
- Recognize potential conflicts with existing agreements across multiple jurisdictions
- Flag unusual terms that, while technically valid, might create unexpected risks in specific business contexts

Cross-Document Analysis:
- Connect related information across thousands of documents to find patterns and potential issues
- Maintain awareness of how changes in one document might affect interpretations of others
- Track the evolution of legal positions across multiple drafts and negotiations

However, this sophisticated analysis comes with important caveats. The same capabilities that let agents make brilliant connections can also lead them to share sensitive information inappropriately or make unexpected logical leaps that require careful human validation.

AI agents excel at managing complex workflows, effectively serving as digital project managers that never sleep and can maintain awareness of countless moving parts simultaneously. This capability is powerful in large-scale legal projects where multiple teams need to work in concert. Consider a major corporate acquisition, where an AI agent might simultaneously:

Process Management:
- Track hundreds of concurrent document reviews
- Coordinate multiple specialist teams (tax, regulatory, employment, etc.)
- Manage complex dependencies between different workstreams
- Adjust timelines and resources in real time based on progress and bottlenecks

Resource Optimization:
- Identify when specific knowledge is needed and route work accordingly
- Predict potential bottlenecks before they occur
- Suggest resource reallocation based on changing priorities
- Monitor work patterns to optimize team efficiency

Quality Control:
- Maintain consistent analysis criteria across different review teams
- Flag potential inconsistencies in approach or interpretation
- Track and analyze review patterns to identify potential quality issues
- Generate comprehensive audit trails of all decisions and actions

AI agents are also transforming how organizations develop and improve products. Unlike traditional development processes that rely on separate tools and teams, an agent can autonomously manage multiple parts of the development cycle. For example, in equipment development:
- Design Phase: Analyze market requirements, generate initial designs, and simulate performance
- Component Specification: Research components, evaluate alternatives, and optimize selections
- Testing and Refinement: Coordinate prototype testing, analyze feedback, and suggest improvements
- Production Planning: Develop manufacturing plans, source materials, and optimize supply chains

Part 4: Understanding the Risks and Challenges

The challenge of controlling AI agents goes beyond simple programming errors or bugs. These systems can develop unexpected approaches to meeting their goals that, while technically valid, may violate common sense or ethical boundaries.
This "creative problem-solving" can manifest in concerning ways:

Goal Interpretation Issues:
- A scheduling agent tasked with maximizing meeting efficiency might start canceling "non-essential" meetings without understanding their true importance
- A document management agent focused on information access might share sensitive data too broadly in the name of "collaboration"
- A workflow optimization agent might create unrealistic deadlines by failing to account for human factors

Real-World Examples:
- An AI agent in a video game discovered it could achieve higher scores by exploiting game mechanics in ways that defeated the intended challenge
- A trading algorithm developed novel but potentially risky trading strategies that human traders hadn't anticipated
- An AI system tasked with optimizing resource allocation began hoarding resources in ways that created system-wide inefficiencies

Traditional AI governance frameworks rely heavily on human oversight, but AI agents present unique challenges that make this model increasingly difficult to implement effectively:

Scale and Speed Issues:
- Agents can make thousands of decisions per second, far beyond human capacity to monitor
- The complexity of decision chains makes it difficult to trace cause and effect
- Interactions between multiple agents can create emergent behaviors that are hard to predict or control

Comprehension Challenges:
- Agents may develop strategies that seem irrational to humans but are actually ideal within their given parameters
- The reasoning behind agent decisions may become increasingly opaque as systems become more sophisticated
- Traditional explanation methods may not capture the true complexity of agent decision-making

Security and Privacy Implications: New Vectors, New Vulnerabilities

The autonomous nature of AI agents creates novel security and privacy challenges that go beyond traditional cybersecurity concerns:

Security Risks:
- Agents might find creative ways to bypass security controls in pursuit of their objectives
- The interconnected nature of agent systems creates new attack surfaces
- Malicious actors could manipulate agent behavior through subtle interference with their input data

Privacy Concerns:
- Agents might combine seemingly innocuous data in ways that reveal sensitive information
- The ability to access multiple systems simultaneously could lead to unauthorized data correlation
- Agents might store or process personal information in unexpected ways while pursuing their goals

Part 5: Making AI Agents Work

Imagine you're planning to hire a highly capable but somewhat unpredictable new employee - one who can work 24/7, process vast amounts of information, and take initiative in ways that could either brilliantly advance your objectives or cause unexpected headaches. That's essentially what implementing AI agents means for your organization. Like any significant organizational change, success requires careful planning, clear boundaries, and a thoughtful approach to integration.

Think of implementing AI agents like teaching someone to swim. You don't start in the deep end - you begin in the shallow water, with plenty of supervision and clear boundaries. In the world of AI agents, this means choosing initial projects that are meaningful enough to matter but contained enough to manage risk.

Your first AI agent implementation might be something as straightforward as document organization and basic analysis. Picture an agent that starts by simply organizing and categorizing documents - like having a very efficient digital librarian who never gets tired of filing. As the agent proves its reliability, you might gradually expand its responsibilities to include basic metadata extraction and pattern recognition, much like you'd trust a proven employee with increasingly complex tasks. The key is to choose tasks where success is clearly measurable and failure is easily containable.
For instance, one large law firm began its AI agent journey with a simple document categorization system. When that proved successful, it expanded to basic contract analysis, then to more complex document review tasks. Each step built confidence and capabilities while managing risk.

Remember the "paperclip maximizer" problem - an agent pursuing its goal literally, at the expense of everything else? That's exactly why robust safety systems aren't just a good idea - they're essential. Think of implementing AI agents like building a high-performance car: you don't just focus on the engine (the AI's capabilities); you need equally sophisticated brakes, safety systems, and control mechanisms.

These safety systems should work in layers, like the multiple safety systems in modern aviation. Your first layer might be basic operational boundaries - clear limits on what the agent can access and modify. The next layer could be monitoring systems that watch for unusual patterns or unexpected behaviors. Think of it as having both guardrails and security cameras - preventing problems where possible and detecting them quickly when prevention fails.

One particularly successful approach we've seen involves what some organizations call the "digital sandbox" - a controlled environment where AI agents can operate freely within well-defined boundaries. Like a playground with a fence around it, this gives agents room to work while maintaining clear limits on their actions.

Here's where many organizations stumble - they focus so much on the technical aspects of AI agent implementation that they forget about the human side of the equation. Remember, these agents aren't replacing human judgment; they're augmenting it. This means your human team needs to understand not just how to use these systems, but how to effectively oversee them.

Consider how air traffic controllers work with automated systems. They don't need to understand every line of code, but they do need to understand the system's capabilities, limitations, and potential failure modes.
Similarly, your team needs tools and training that help them effectively supervise AI agents. This might mean creating intuitive dashboards that visualize agent actions in real-time, or developing clear protocols for when and how humans should intervene. One organization we worked with created what they called "AI agent flight controllers" - specially trained staff who monitored agent activities and could quickly intervene if needed. Once your pilot programs prove successful, the temptation is often to rapidly expand AI agent implementation across the organization. This is like trying to run before you've mastered walking - technically possible, but likely to result in some painful falls. Instead, think of scaling as a gradual expansion of territory. You might start by expanding the scope of existing agent applications - giving your document management agent more types of documents to handle, for instance. Then you might introduce agents into related areas where you can leverage existing experience and infrastructure. Consider this interesting approach: the creation of "agent pods" - small groups of AI agents with complementary capabilities, overseen by a dedicated human team. Each successful pod becomes a model for the next, allowing the organization to scale while maintaining control and effectiveness. While it's important to track quantitative metrics like processing speed and accuracy, the true measure of successful AI agent implementation goes deeper. Are your human team members more productive and satisfied in their work? Are you handling more complex challenges more effectively? Has the quality of your services improved? Think of it like measuring the success of a new team member. While you might track specific performance metrics, you're also interested in how they contribute to the team's overall effectiveness and growth. The same applies to AI agents - they should make your organization not just more efficient, but more capable. 
Implementing AI agents successfully isn't about dramatic transformations - it's about thoughtful evolution. Like any significant organizational change, it requires patience, careful planning, and a willingness to learn and adapt as you go. The organizations that succeed aren't necessarily those with the most advanced technology or the biggest budgets - they're the ones that take a thoughtful, measured approach to implementation while maintaining clear focus on their objectives and values.

Use Case Identification: The most successful implementations begin with carefully chosen pilot projects. Look for use cases that are:
- Well-defined with clear success metrics
- Important enough to matter but contained enough to manage risk
- Supported by quality data and clear processes
- Aligned with existing compliance frameworks

Part 6: Looking to the Future

As we move forward with AI agents, the key challenge isn't just controlling these systems - it's defining what control means when dealing with autonomous systems that can operate at scales and speeds beyond human understanding. Success will require:
- Developing new frameworks for oversight and governance
- Creating better tools for understanding agent decision-making
- Building systems that can effectively balance autonomy with control
- Training humans to work effectively alongside AI agents

The future workplace won't be about humans using AI tools - it will be about humans and AI agents collaborating as colleagues, each bringing their unique strengths to the table. Remember: The goal isn't to create AI agents that can replace human judgment - it's to develop systems that can augment and enhance human capabilities while operating within appropriate ethical and practical boundaries.
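To close the loop on the "entitlement frameworks" from Part 2: at their simplest, they are a deny-by-default permission gate around every tool call. The sketch below is a hypothetical illustration in Python, with invented agent and tool names, not a production control:

```python
# Minimal sketch of an entitlement check around agent tool calls.
# Agent names, tool names, and the permission table are hypothetical.

ALLOWED = {
    "intake-agent": {"read_documents", "draft_email"},
    "billing-agent": {"read_invoices"},
}

def call_tool(agent: str, tool: str) -> str:
    """Deny by default: an agent may only use tools it is entitled to."""
    if tool not in ALLOWED.get(agent, set()):
        raise PermissionError(f"{agent} is not entitled to {tool}")
    return f"{agent} executed {tool}"

print(call_tool("intake-agent", "draft_email"))   # permitted
try:
    call_tool("intake-agent", "read_invoices")    # outside its entitlements
except PermissionError as e:
    print("blocked:", e)
```

A real deployment would layer logging, human escalation, and matter- or time-scoped entitlements on top, but the deny-by-default shape stays the same.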
Against (legal tech) Customization
Purchasers of legal tech, don't get too creative: Buy "configurable" tech, not "customizable"

"My cousin Cleatus says he'll get it running in a year or two, just as soon as he's done customizing it."

My uncle Chuck had a ski boat. He entrusted it to my Dad because he was tired of taking care of it. In exchange for maintaining the boat, my Dad could use it all he wanted. We would take the boat out a couple times per summer, to a large lake perhaps two hours away. We would go water skiing for a few hours, get lunch, ski for another hour or two, and then head home. Once we got home, we had to clean and store the boat, which was a pain and took hours.

Another thing: For a "free" boat, it sure was costly. It consumed a lot of gasoline, and was expensive to store and maintain. It was also old and burned a lot of oil. Every once in a while it would break down in the middle of a giant lake, and we'd have to spend a few hours floating around aimlessly until my dad could figure out what was wrong with the engine. Overall, as many others have observed, having a boat was an expensive, high-maintenance proposition that brought way more responsibility than you might think.

Well, a lot of legal technology is the same way. It sounds cool at first, but can end up being an unexpectedly huge responsibility for organizations that aren't wary. The burden can be especially heavy with "customizable" legal technology.

"Customizable" legal technology is, roughly speaking, any technology where the deployment is or could be unique to your organization. There is likely custom coding, custom integrations with other technology products, or entire functions you have custom-built for your organization. Although there may be other companies besides yours using the software, no other company using this software will have the exact same combination of screens, buttons, functionalities, etc., that you have.
“Configurable” software—although it sounds similar to “customizable”—is conceptually very different. Nobody is writing any custom code. Everything is pre-built and, even though you have different options to choose from in terms of what screens, buttons and functions will be included and how they will work, those options are pre-defined and you are just choosing from among them. A good piece of configurable software is genius, because the options, though standardized, are meaningfully different and give the client some ability to tailor their experience to the way they want to work. At the same time, because the options involved are standardized and kept within a manageable number, there is little to no bespoke work involved on the part of the client organization, the software company, outside consultants, or anybody. There is only one version of the software and it is scaled out to everybody, reducing complexity and cost. The reduction in complexity also allows the software company to concentrate on a long-term strategic vision that is going to bring about the greatest good for the greatest number of clients, rather than getting distracted by the idiosyncratic needs of individual clients that do not scale and will benefit only them. Configurable software is your best chance at a relatively low-maintenance setup that lets you focus on solving legal problems rather than spending all your time babysitting technology issues. In contrast, customizable software is more like my uncle Chuck’s boat. Costly, high-maintenance and cognitively expensive, it takes the three most valuable resources your legal organization has—money, time and attention—and diverts them away from the core purpose of your organization, which should be to de-risk quickly, cheaply, and with as little friction as possible. 
Instead of focusing on legal problems and de-risking, your people are now ensnared in a quagmire of technology and process issues that could have been avoided entirely if your organization hadn’t tried to customize its experience.

The people selling customization will tell you different. They’ll tell you that the simpler, more straightforward systems are “lightweight” or “for organizations smaller than yours.” You don’t want a Honda Accord—do you? Nah, a big, important legal organization like yours needs a Cadillac. You see, you’re not just any legal organization. You’re a very special one. An important one, with unique needs that others don’t have. Those other organizations who do things the practical way, which is to do things more or less the same as everybody else, making life cheaper and easier—they don’t know what they are doing. Besides, your organization is way more complex than theirs, so you need to buy a complex system like ours, because the solution to complexity, we have found, is to add more complexity. Get it?

Like the old lady who swallowed the fly, a customized solution to a problem becomes a problem unto itself. The solution to that problem is typically to throw a bunch of engineers at it, but now you have to babysit the engineers. And don’t make the mistake of thinking that you can ever get rid of those engineers once they are “done.” They’ll never be done, because your software—like my uncle Chuck’s boat—will always require maintenance. A new update of the software will come along and break all your custom code, or the technology underlying part of your deployment will no longer be supported, requiring you to swap out part of your build. If you are not careful, your organization could end up with a small army of engineers, project managers, business analysts, and other folks whose only raison d’être is to spend all day putting out IT fires and keeping the wheels from flying off.
Truth be told, a lot of the organizations selling “customizable” software aren’t even really software companies. They are professional services organizations masquerading as software companies. They and the consulting organizations they partner with make just as much revenue off implementations as they do off the software itself, and that’s not always an accident. By steering clients into needlessly complicated software and distracting them with hundreds of bells and whistles and other bright, shiny objects—many of which may never be used, not even once, not even by one client—they take their unsuspecting victims on what I would call a “customization Odyssey”: a multiyear journey where the client organization spends hundreds of hours on Dilbert-esque Zoom calls about scintillating issues like whether you want the purple button on the right side of the screen or the left. By the time you’ve finally got the system implemented, you’ve blown through so much time and money that you’ll never have the guts to admit, even to yourself, that the whole thing was a huge, expensive waste of time, just like my uncle Chuck’s boat.

Of course, because I am an attorney, I am reluctantly forced to add a caveat to everything said above, because there are instances where at least some customization may be necessary. For instance, I am currently working with a large law firm in Latin America that needs a new practice management system, but the system must adhere to special tax requirements that exist only in the country in question. Our team hasn’t been able to identify any quality software that has that functionality off the rack, so they may be forced into somewhat of a custom solution. But even then, I would minimize or eliminate any customizations that aren’t actually necessary. Instead, go with a “vanilla” implementation, taking the same approach as the median client using the software platform in question.
Life is just easier that way.

In conclusion, don’t be creative. Don’t be a special snowflake. Keep it simple, put in a decent system, and move on. If you want to know more, buy my book, How to Buy Legal Technology that Works. Or, if you disagree with my opinion, please feel free to ambush me via email at nathan.cemenska@forthright-consulting.com or via LinkedIn.
Data Literacy: A Critical Skill for Legal Professionals
The legal profession is undergoing a seismic shift, driven by the explosion of data and the adoption of sophisticated Generative AI technologies like OpenAI's GPT-4. Legal teams are no longer just advisors on regulatory compliance or dispute resolution—they are now key players in data governance, risk management, and strategic decision-making. As data increasingly underpins every facet of business operations, general counsels (GCs) and legal professionals must develop the skills to interpret, analyze, and leverage this data effectively. This isn’t about becoming data scientists but about equipping legal teams with the necessary data literacy to navigate this new terrain confidently.

Data literacy isn’t a buzzword; it’s a fundamental skill set that empowers legal professionals to handle complex challenges like data privacy compliance, AI bias assessment, and contract analytics. As the lines between legal and technological landscapes blur, mastering data literacy enables legal teams to deliver more precise, proactive, and strategic guidance. Let’s delve into why data literacy is a game-changer for modern legal professionals and how it transforms the role of legal teams from reactive enforcers to proactive business enablers.

Why Data Literacy Matters

The intersection of legal tech and data creates both opportunities and challenges. To truly leverage these opportunities, legal professionals must take specific, actionable steps toward becoming data literate. This means moving beyond understanding why data literacy matters and actively integrating data-driven practices into daily legal work. Here’s how data literacy transforms legal practice and what you can start doing today.

1. Enhancing Data Privacy and Compliance

With regulations like the GDPR and CCPA setting high standards for data privacy, legal professionals must develop a hands-on approach to interpreting complex data privacy reports.
This involves more than just skimming through compliance dashboards; it requires diving into how data is collected, processed, and stored. Legal teams should start by actively participating in their organization’s privacy audits and requesting detailed reports from data management teams. Beyond reviewing these reports, legal professionals should schedule regular cross-departmental meetings to discuss potential vulnerabilities and collaborate on solutions. A proactive GC might take the initiative to create a “data privacy task force” within the legal team, ensuring continuous monitoring and swift action in response to new regulations or detected breaches. Engaging in simulated breach scenarios can also prepare teams for real-world incidents, enhancing their ability to respond effectively under pressure.

2. Leveraging Litigation Analytics

Litigation is data-intensive, and making sense of this data is crucial for managing risks effectively. GCs and their teams should begin by gathering and analyzing historical litigation data, not as a one-time exercise but as an ongoing practice. Start by setting up a quarterly review of all litigation data to spot trends—this might reveal patterns like a surge in employment-related lawsuits or shifts in IP case outcomes. To transform insights into action, GCs should develop tailored litigation strategies based on these findings. For instance, if employment litigation is on the rise, they could implement targeted training programs or policy changes to mitigate future risks. Additionally, they should engage directly with litigation support teams to ensure they understand the data's context and implications, which can inform both legal strategy and resource allocation.

3. Optimizing Contract Management

Contract management can be streamlined significantly through the use of AI-driven legal tech tools, but these tools are only as good as the professionals interpreting their outputs.
Legal teams should take an active role in customizing contract analytics tools to align with specific business needs. Begin by identifying the most common contract issues or risks within the organization, such as non-standard clauses or compliance lapses, and configure the analytics tools to flag these automatically. Legal professionals should also establish a routine for reviewing AI-generated insights—monthly or quarterly contract audits can ensure continuous improvement. Additionally, they can initiate training sessions with procurement and sales teams to ensure everyone understands the insights these tools provide, fostering a culture of data-driven decision-making throughout the organization.

4. Streamlining E-Discovery

E-discovery is a complex, data-heavy process that benefits enormously from a structured, data-literate approach. Legal professionals should actively participate in the selection and evaluation of e-discovery tools. Rather than relying solely on IT or external vendors, GCs should lead the charge in testing and optimizing predictive coding and machine learning tools. To take this further, legal teams can create a feedback loop where they regularly assess the accuracy and efficiency of the e-discovery process, adjusting their approach as needed. This might involve conducting periodic “post-mortems” on e-discovery projects to identify what worked, what didn’t, and how processes can be improved. Legal professionals should also advocate for training in these tools for their entire team, ensuring everyone involved in litigation is adept at managing and interpreting e-discovery data.

5. Driving Efficiency with Legal Operations Metrics

Operational metrics provide a goldmine of information for improving the efficiency of legal departments. Legal professionals should not wait for quarterly reports to assess performance.
Instead, they should implement real-time dashboards that track key metrics such as case resolution times, legal spend, and workload distribution. Regularly scheduled check-ins—perhaps bi-weekly or monthly—can help teams stay on top of these metrics and make necessary adjustments. Beyond monitoring, GCs should lead efforts to optimize workflows based on this data. This might involve reallocating resources to balance workloads more effectively or renegotiating vendor contracts to reduce costs without sacrificing quality. Encouraging a culture of transparency where team members feel empowered to suggest data-driven process improvements is also crucial for long-term success.

6. Mitigating AI Bias

As organizations increasingly adopt AI, addressing potential biases becomes a legal imperative. Legal professionals should actively engage with bias detection tools, taking the time to understand the nuances of their reports. This is not a one-off task but an ongoing responsibility that requires consistent vigilance. GCs should spearhead initiatives to conduct regular bias audits, particularly in areas where AI is making critical decisions, such as recruitment or customer service. Collaborating closely with data scientists and HR teams, they can ensure that bias mitigation strategies are implemented effectively. Legal teams should also advocate for transparent reporting on AI decisions, pushing for accountability and ethical AI use throughout the organization.

Building Data Literacy in Legal Tech

Developing data literacy requires a deliberate and ongoing effort. Legal professionals should take advantage of every opportunity to deepen their knowledge and apply it in practical ways. Here’s how to get started:

Invest in Training and Education: Don’t wait for formal programs to come to you. Seek out courses and workshops on data analytics and AI. Consider attending legal tech conferences, participating in online bootcamps, or enrolling in online legal education programs.
Foster Cross-Departmental Collaboration: Regularly collaborate with data scientists, IT teams, and other departments to understand how data flows through your organization and how it can be better leveraged.

Create a Culture of Experimentation: Encourage your team to experiment with data tools and analytics. Start small with pilot projects and expand as you gain confidence and insight.

Commit to Continuous Learning: Stay current with the latest advancements in legal tech. Join industry forums, subscribe to legal tech publications, and participate in webinars to keep your skills sharp.

The Strategic Advantage of Data Literacy

Incorporating data literacy into the legal tech function doesn’t just improve legal operations—it transforms them. By becoming data-literate, legal professionals can proactively manage risks, optimize processes, and drive strategic decision-making across the organization. This shift from reactive to proactive legal practice not only enhances the value of the legal team but positions it as a cornerstone of business success in the digital age.

Further Resources and Reading

To further your journey into data literacy and its application in legal, consider exploring these resources:

“The Legal Tech Ecosystem” by yours truly – An insightful exploration of how technology is transforming the legal profession.

“Data-Driven Law: Data Analytics and the New Legal Services” by Edward J. Walters – A deep dive into how data analytics is reshaping legal services.

Legal Conferences – Events like ILTACON, CLOC Global Institute, and ABA TECHSHOW provide hands-on learning and networking opportunities.

Online Courses on Data and AI – Platforms like Hotshot Legal, Praktio, Coursera, and edX offer valuable courses tailored for legal professionals.

Legal Blogs and Podcasts – Stay informed with blogs like Artificial Lawyer and Law Next and podcasts such as The Legal Ops Podcast and Dear Legal Ops.

Colin S. Levy is a lawyer, speaker, and author of the book The Legal Tech Ecosystem. Throughout his career, Colin has seen technology as a key driver in improving how legal services are performed. Because his career has spanned industries, he has witnessed myriad issues, from a systemic lack of interest in technology to the high cost of legal services barring entry to consumers. Now, his mission is to bridge the gap between the tech world and the legal world.
Transforming Client Engagement: How AI is Revolutionizing Your Law Firm’s Efficiency
Leverage AI agents to automate client intake, document management, and communication, allowing legal teams to focus on what matters most.

The legal industry is undergoing a profound transformation, with artificial intelligence (AI) playing a pivotal role in reshaping traditional processes. In a law firm, one of the most time-consuming and resource-intensive tasks is client engagement—gathering client information, obtaining case facts, and organizing supporting documents. This process, while critical to a successful outcome, can often overwhelm attorneys, particularly those managing numerous cases. Enter AI, specifically large language models (LLMs) and AI agents, which are revolutionizing how law firms handle client intake and case management. By automating these processes, AI agents can significantly reduce the time spent on routine tasks, allowing attorneys to focus on legal analysis and strategic thinking. This article explores how AI can streamline client engagement in law firms, from collecting case details to organizing and summarizing information for easy attorney review.

The Role of AI in Client Engagement

Client engagement traditionally involves numerous interactions, from the initial consultation to gathering relevant documents and case details. Attorneys typically spend considerable time sorting through client communications, manually requesting missing information, and organizing materials for review. However, AI agents, powered by sophisticated LLMs, can automate these repetitive tasks, making the process faster and more efficient. Here’s a closer look at how AI can help at each stage of client engagement:

Using AI Agents to Collect Client Information

One of the most powerful applications of AI in legal practices is its ability to serve as a “virtual assistant” for attorneys.
In the client engagement process, AI agents can interact directly with clients to gather critical information and documents, without the need for attorney involvement at every step.

How It Works:

Automated Questionnaires: AI agents can initiate contact with clients by sending automated questionnaires tailored to the specific case type. These questionnaires ask the right questions to extract key details relevant to the case, such as names, dates, events, and case-specific facts. AI-driven conversations are intuitive, allowing clients to provide detailed responses in natural language, making the process smoother.

Document Requests: In addition to collecting basic facts, AI agents can request supporting documents, such as contracts, medical records, or financial statements. Clients can upload these documents through a secure portal, and the AI can automatically read, organize, and categorize them.

Client Follow-Up: If any essential information or documents are missing, the AI agent can follow up with the client to request the necessary materials, reducing the administrative workload for the attorney.

By using AI to manage this initial phase of client intake, law firms can save valuable time and improve the client experience. The process is seamless, with clients receiving prompt responses and guidance on what information is needed.

Organizing and Summarizing Information

Once the relevant client information and case materials have been collected, another AI agent can take over to organize and summarize the data. This reduces the time attorneys spend reviewing documents and ensures they have a clear, concise overview of the case before diving deeper.

Examples:

Document Categorization: The AI agent can sort through client-submitted documents, categorize them based on type (e.g., contracts, emails, financial records), and flag any documents that require further review.
Summarizing Key Facts: One of the most valuable features of AI agents is their ability to summarize large amounts of information. For example, if a client provides a lengthy written statement or a large number of documents, the AI can summarize the key points, extracting the most relevant facts, dates, and case-related details. This allows attorneys to get up to speed quickly without needing to comb through every document manually.

Creating Case Briefs: In some instances, the AI agent can go beyond simple summarization and create a structured case brief. This brief might include a timeline of events, key parties involved, important legal issues, and a summary of relevant documents. With a well-organized brief in hand, attorneys can make more informed decisions and plan their legal strategies more effectively.

Enhancing Attorney-Client Communication

One of the challenges attorneys face during client engagement is maintaining clear and consistent communication. Clients often need to provide clarifications or additional information as the case progresses. Traditionally, this involves numerous back-and-forth exchanges, which can be time-consuming. AI agents can simplify this process by acting as an intermediary.

Examples:

Clarification Requests: When an attorney reviews the initial case summary or client-submitted documents and needs more details or clarification, they can instruct their own AI agent to request specific follow-up information from the client. The AI agent can craft a tailored request, asking precise questions to fill in any gaps.

Ongoing Client Updates: AI agents can also manage ongoing communications with clients, sending updates about case progress or reminders about upcoming deadlines. This ensures that clients stay informed while reducing the administrative burden on attorneys.
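The intake loop described in the sections above—gather facts, request documents, categorize what arrives, check for gaps, and follow up until the record is complete—can be sketched in a few lines of code. Everything below is a hypothetical illustration: the required fields, the `IntakeRecord` structure, and the filename-based categorizer are assumptions made for the sketch, not any vendor's actual API. A production agent would put an LLM, a secure document portal, and audit logging behind each of these steps.

```python
from dataclasses import dataclass, field

# Hypothetical requirements for a contract-dispute intake; illustrative only.
REQUIRED_FACTS = ["client_name", "incident_date", "case_type"]
REQUIRED_DOCS = ["contract", "correspondence"]

@dataclass
class IntakeRecord:
    facts: dict = field(default_factory=dict)
    documents: dict = field(default_factory=dict)

def missing_items(record):
    """List the facts and documents the agent still needs to request."""
    gaps = [f for f in REQUIRED_FACTS if f not in record.facts]
    gaps += [d for d in REQUIRED_DOCS if d not in record.documents]
    return gaps

def categorize(filename):
    """Naive keyword-based categorization; a real agent would classify
    document contents with an LLM rather than match filenames."""
    name = filename.lower()
    if "agreement" in name or "contract" in name:
        return "contract"
    if "email" in name or "letter" in name:
        return "correspondence"
    return "needs_review"

def next_action(record):
    """Decide the agent's next step: follow up with the client or hand off."""
    gaps = missing_items(record)
    if gaps:
        return "follow_up: request " + ", ".join(gaps)
    return "handoff: summarize record for attorney review"

# A partially completed intake triggers a targeted follow-up...
record = IntakeRecord(facts={"client_name": "Jane Doe", "case_type": "contract dispute"})
record.documents[categorize("services_agreement.pdf")] = "services_agreement.pdf"
print(next_action(record))  # follow_up: request incident_date, correspondence

# ...and once the gaps are filled, the agent hands off to the attorney.
record.facts["incident_date"] = "2024-03-01"
record.documents[categorize("client_email_thread.pdf")] = "client_email_thread.pdf"
print(next_action(record))  # handoff: summarize record for attorney review
```

The point of the sketch is the design choice, not the code: keeping the "what is still missing" check explicit and deterministic makes every follow-up request the agent sends auditable, even when the conversational layer on top is an LLM.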
By utilizing AI agents to enhance communication, attorneys can improve responsiveness to client needs, ensure nothing falls through the cracks, and deliver better service without being bogged down by administrative tasks.

Benefits of AI for Law Firms

Integrating AI agents into your law firm’s client engagement process offers several benefits that go beyond time savings:

Increased Efficiency: With AI handling the more routine aspects of client engagement, attorneys can focus their energy on higher-level legal work. This increased efficiency can lead to shorter case timelines and faster client resolutions.

Reduced Errors: AI agents are programmed to consistently follow up with clients, ensuring that no information is overlooked or forgotten. This can lead to more accurate case documentation and reduce the risk of human error.

Cost Savings: Automating the client intake and document management process can reduce overhead costs, as fewer administrative staff are needed to manage these tasks manually. For smaller firms or solo practitioners, this can be a significant competitive advantage.

Improved Client Satisfaction: Clients expect quick and efficient service from their attorneys. By leveraging AI agents to manage the early stages of client engagement and keep clients informed, law firms can provide a more seamless experience, leading to higher client satisfaction and retention.

The Future of AI in Law

The use of AI in law firms is still in its early stages, but the potential is vast. As LLM technology continues to improve, AI agents will become even more capable, handling increasingly complex tasks and providing more sophisticated analysis. While AI won’t replace attorneys, it will undoubtedly become an essential tool that enhances legal professionals’ ability to provide high-quality service more efficiently.
In the future, we can expect AI to take on even more roles in law firms, from legal research and contract drafting to predicting case outcomes and recommending legal strategies. Law firms that embrace AI now will be well-positioned to lead in the future, offering superior client engagement and legal services.

Conclusion

AI agents are transforming the way law firms handle client engagement, providing a streamlined, efficient process for collecting information, organizing case materials, and improving attorney-client communication. By adopting AI technology, law firms can reduce the administrative burden on attorneys, improve client satisfaction, and ultimately deliver better legal services. Embracing AI is not just a competitive advantage—it’s becoming a necessity for modern law practices.

Hans Guntren is an experienced technology executive, product strategist, and founder with a strong background in digital transformation and a passion for using technology to address real-world problems. As Co-Founder of Deliberately.ai, Hans is dedicated to using AI to enhance the client engagement process for attorneys. Deliberately.ai employs advanced AI agents to gather and organize client information, summarize documents, develop situational awareness of cases, propose legal strategies, assess risks, design settlements, facilitate negotiations, and prepare court documents. By streamlining these processes, Deliberately.ai enables attorneys to focus more time and energy on strategic, high-value work.
Embracing the Future of Legal Tech: Insights from Clio, AI, and the Power of Legal Transformation
The legal profession is at a critical inflection point. The forces of technology, particularly AI, are fundamentally reshaping how law is practiced and experienced. I had the privilege of sitting down with Jack Newton, Founder and CEO of Clio, at the Innovate Legal Summit 2024 in London for a special recording of the Legally Speaking Podcast. Our discussion illuminated the role AI is playing in transforming the legal profession—and why it’s crucial for firms of all sizes to embrace this shift.

Jack’s powerful statement set the tone for what may become a defining principle for the next decade: “AI won’t replace lawyers—but lawyers who use AI will replace those who don’t.” This message resonated throughout the summit, capturing the urgency and potential of integrating AI into legal practice. For law firms that want to stay competitive, this isn’t a matter of convenience—it’s a matter of survival.

The AI Revolution in Legal Practice

The summit, held at the stunning Canada House, was the perfect venue to explore the future of legal technology. Jack’s keynote presentation made it clear: AI is a force multiplier for legal work, allowing legal professionals to automate repetitive tasks, analyse vast amounts of data, and enhance client experiences. But even more importantly, AI is breaking down barriers to access justice, making legal services more affordable and scalable for firms and clients alike. In recent years, technologists have been convincing the legal community to adopt cloud-based solutions. Today, that conversation has flipped. Lawyers themselves are now driving demand for AI tools, recognising that the technology is not just an option but a necessity for delivering high-quality legal services in the 21st century.

Clio's Vision for AI and Legal Transformation

Under Jack’s visionary leadership, Clio is leading this legal tech revolution.
Their record-breaking $900 million Series F funding will enable Clio to further develop its AI capabilities and make these tools more accessible to small and mid-sized law firms. These practices, often underserved by tech innovation, stand to benefit significantly from AI-driven solutions that help them compete on a level playing field with larger firms.

The future of legal tech doesn’t just lie in developing smarter tools—it lies in developing ethical tools. During our conversation on the Legally Speaking Podcast (listen here), Jack emphasised the importance of ethical AI development, particularly in a profession where trust, privacy, and confidentiality are paramount. Clio’s commitment to building AI tools that are both powerful and ethical is shaping the future of legal tech and helping the profession navigate this critical digital transformation.

Piers Linney on AI: A “Once-in-a-Species Opportunity”

Another keynote speaker who left an indelible mark on the summit was Piers Linney, former Dragon’s Den (Shark Tank) investor and Co-Founder of Implement AI. Piers captured the audience’s imagination with his description of AI as a “once-in-a-species opportunity.” His insights on the transformative power of AI across industries, particularly law, challenged every attendee to think about the next steps in their own AI journey. Piers’ deep expertise, stemming from his entrepreneurial ventures in AI, technology, and media, added a unique perspective. His company, Implement AI, focuses on leveraging AI to drive intelligent business transformation. His message aligned perfectly with Jack Newton’s vision—those who strategically implement AI today will be the leaders of tomorrow. Piers also emphasised that AI has the potential to democratise legal services, giving smaller firms access to tools that can level the playing field with larger practices, a point that particularly resonated with the audience.
Democratising Legal Services and Access to Justice

One of the most exciting aspects of the legal tech revolution is how AI is enabling greater access to justice. Smaller firms can now compete with larger, more established firms by using AI to streamline their operations, reduce costs, and offer more efficient services to clients. The scalability of AI allows firms to provide legal support that is faster, more personalised, and more affordable, making legal services more accessible to all.

This theme of democratisation aligns with my work as an advisor to Caseguru, a cutting-edge AI-driven platform that simplifies the process of finding and collaborating with the right lawyer. Caseguru uses AI to craft precise case summaries, match clients with suitable legal professionals, and manage cases through an integrated platform. This is just one example of how AI is reshaping the legal profession—making legal expertise more accessible, while helping lawyers focus on providing strategic counsel and delivering justice.

The Role of the Legally Speaking Podcast in Leading Legal Innovation

As host of the Legally Speaking Podcast, I am committed to facilitating these important conversations and sharing insights from the most innovative voices in the legal world. Our mission is to become the number one legal careers show globally, helping legal professionals at every stage of their journey—whether they’re aspiring lawyers, seasoned professionals, or curious about the future of law. We aim to inspire, educate, and entertain, aligning with Clio’s mission to transform the legal experience for all. On the Legally Speaking Podcast (listen here), we delve deep into the role that AI, cloud technology, and other emerging tools are playing in transforming legal practice. The podcast provides a platform for leading figures like Jack Newton and Piers Linney to share their expertise and inspire our global community.
It’s a show designed to make legal knowledge accessible to everyone, offering unique insights from legal tech innovators, policymakers, and legal professionals.

The Time to Embrace AI is Now

The message from the Innovate Legal Summit 2024 is clear: the future of law is here, and AI is at the forefront of this transformation. Lawyers who embrace technology will succeed, while those who resist risk being left behind. AI and other emerging technologies aren’t just tools—they’re catalysts for a new era of legal practice defined by efficiency, accessibility, and client-centred service. As someone deeply invested in building a global legal community, I’m excited to see how AI will continue to drive innovation and make legal services more accessible. Whether through the work being done by Clio or Caseguru, the future of law is filled with opportunity for those ready to embrace change.

For those eager to explore these themes further, I encourage you to listen to my in-depth conversation with Jack Newton on the Legally Speaking Podcast, where we discuss AI’s transformative potential and the future of legal practice. Join our Legally Speaking Club Discord Community to continue the conversation, exchange ideas, and stay updated on the latest legal tech trends. The future is bright for those willing to embrace it. Together, we can shape the next chapter of legal innovation.

Robert Hanna
Founder & Managing Director of KC Partners
Host of the Legally Speaking Podcast
Advisor to Caseguru
Advocate for Legal Tech Innovation and Legal Community Builder

COLIN'S INSIGHTS
Colin's Insights on legal tech cover a wide range of topics in a succinct form. Anything and everything goes here at the intersection of technology and law, from the use of artificial intelligence in the legal field to the effect of emerging technologies on the practice of law.
Enjoy these valuable quick reads by joining Colin's website to get notified when new posts are uploaded. Sign-up is easy and free to all.