When Everyone Has Information, Judgment Becomes Everything
- Colin Levy
We're drowning in information. Every legal tech vendor promises transformative results. Every AI tool claims to revolutionize practice. Meanwhile, clients are overwhelmed by options, partners are paralyzed by choices, and associates are buried in data they don't know how to interpret.

Judgment is becoming one of the scarcest resources in the legal profession. Not the only thing that matters. Not a replacement for knowledge or experience. But increasingly, the differentiator between lawyers who add value and those who don't.
The problem: we don't really know what we mean when we talk about judgment.
We use it as a catchall for "good lawyering" without examining what it actually entails. In a world where algorithms can analyze cases faster than any human, where AI can spot patterns we'd miss, we need to get specific.
If we can't define judgment precisely, we can't develop it systematically. And if we can't develop it, we'll struggle to explain why our judgment is worth paying for.
Judgment Isn't What We Think It Is
Ask most lawyers what judgment means and you'll get vague answers about "knowing the right thing to do" or "understanding the law deeply." These aren't wrong. They're incomplete.
Judgment isn't primarily about legal knowledge, though it requires legal knowledge as a foundation. You can't exercise good judgment on securities issues without understanding securities law. But knowing securities law doesn't automatically give you good judgment about when to push a disclosure issue versus when to let it go.
Judgment isn't about experience either, at least not directly. Experience informs judgment. But I've seen senior lawyers repeat the same mistakes for thirty years, and I've seen young lawyers grasp strategic nuance that escapes their more seasoned colleagues. Experience creates opportunities to develop judgment. It doesn't guarantee it.
So what is judgment?
Judgment is the capacity to discern what matters from what doesn't in a specific context. To separate signal from noise. To know which facts are relevant and which are distractions. To understand when the law is the answer and when it's beside the point.
Notice the qualifier: in a specific context. Good judgment isn't abstract wisdom. It's situation-specific discernment informed by knowledge, experience, and understanding of the particular client, industry, and circumstance.
The Filtering Function
In a world saturated with information, judgment functions primarily as a filter.
When a client comes to you with a problem, they give you facts. Some relevant, some not. Some they think matter that don't. Some they don't mention that do. Your first job is figuring out which is which.
AI can process all those facts. It can identify legal issues, flag risks, suggest precedents. What it can't reliably do is know which facts actually matter for this client, in this situation, with these specific goals and constraints.
Last month I reviewed a distribution agreement where AI flagged seventeen potential issues. All seventeen were technically legitimate concerns, yet only three actually mattered.
Knowing which three to focus on required judgment. But that judgment was only possible because I understood the business context, had seen similar deals, and knew what battles were worth fighting for companies in this position.
Judgment Requires Foundation, Doesn't Replace It
Judgment doesn't replace knowledge. It builds on it.
A first-year associate with excellent judgment but no employment law knowledge still can't advise on employment matters. The foundation has to be there first. AI hasn't changed this. What's changed is that foundational knowledge alone isn't enough anymore.
When AI can access and process legal information faster than any human, being the lawyer who knows the most case law isn't the competitive advantage it once was.
The competitive advantage is knowing what to do with that information. When to apply it, when to question it, when to override it based on factors the algorithm can't see.
Knowing What Questions to Ask
Junior lawyers think judgment is about having answers. Senior lawyers know it's about asking the right questions.
But this isn't mystical. Sometimes clients actually do know exactly what they need, and asking them to reconsider the question is overthinking. The skill isn't always finding "the question behind the question." It's knowing when to look for it.
AI is phenomenally good at answering questions. Ask it to find relevant case law, it will. Ask it to identify risks in a contract, it will. Ask it to draft language addressing a specific issue, it can.
What AI can't reliably do yet is recognize when the question you're asking isn't the question you should be asking. When a startup founder asks about IP assignment provisions, sometimes they're asking about IP assignment provisions. Sometimes they're actually trying to navigate a deteriorating relationship with a co-founder. Knowing which situation you're in requires context AI doesn't have.
This is judgment informed by specific knowledge about startup dynamics, founder relationships, and how IP disputes typically arise. Not pure intuition.
The Confidence to Override the Algorithm
One of the most important forms of judgment right now is knowing when to trust AI and when to ignore it.
AI outputs come with an aura of objectivity. The algorithm analyzed thousands of contracts and says X is market. The model reviewed hundreds of cases and predicts Y outcome.
Sometimes that's exactly what you need. Sometimes it's dangerously misleading.
I've seen AI analyze employment contracts and flag certain non-compete provisions as "below market" based on data from public company agreements. Technically accurate. Completely irrelevant for a small business in a different industry with different competitive dynamics.
Judgment is knowing the difference. But sometimes the algorithm is right and your intuition is wrong. Sometimes what you think is unique about your client's situation actually isn't, and the data-driven answer is better than your experience-based hunch.
Good judgment requires both the confidence to override AI when you have good reason and the humility to recognize when you don't.
That requires deep expertise. Not just in law, but in the domains where law applies. You can't effectively evaluate AI's contract analysis if you don't understand the business the contract serves.
When Judgment Means Saying No
Sometimes judgment means telling clients, colleagues, or partners that what they want to do is unwise.
Not illegal. Not impossible. Just unwise.
This is genuinely hard. We're trained to find ways to accomplish what clients want. We're incentivized economically to say yes. The lawyer who consistently says "you shouldn't do this" doesn't tend to get repeat business.
Last year I advised a client against pursuing a technically viable breach of contract claim. The contract terms were clear. We'd likely win. But winning would destroy a relationship with their largest supplier during a supply chain crisis, and the damages we'd recover wouldn't offset the business damage.
They didn't like the advice. They found another lawyer who would file the claim. Six months later, they settled for less than my initial recommendation and lost the supplier relationship anyway.
Was that good judgment on my part? I think so. But I can't prove it. They might have won, recovered more, and maintained the relationship. Judgment calls don't come with certainty.
We tend to talk about judgment as if experienced lawyers reliably make better calls than algorithms. Sometimes we do. Sometimes we're just confident in our mistakes.
Cutting Through the Hype
Judgment in legal tech right now means asking hard questions before adopting tools.
Not "is this innovative?" but "does this solve a real problem we have?" Not "is everyone else buying this?" but "will this actually work for how we work?"
I've seen firms implement contract analytics platforms that never get used because nobody defined what questions they needed answered. I've seen document automation systems fail because the templates they automated weren't the ones people actually used. I've seen AI tools get adopted because partners were afraid of missing out, not because anyone identified a specific problem to solve.
Good judgment means being willing to say "I don't care if every other firm has this, it doesn't make sense for us." Or conversely, "I know this seems risky, but I think it addresses a real problem."
Sometimes you can't know if something will work until you try it. Sometimes what looks like good judgment to pass on a tool is actually missing an opportunity. The firms that waited to see if "this internet thing" would matter learned that lesson.
The Nuance That Algorithms Miss (For Now)
Legal issues are rarely as clean as algorithms need them to be. The facts are messy. The law is ambiguous. The client's goals are complicated by competing interests and factors that never appear in the case file.
For now, judgment is the capacity to navigate this complexity. To hold multiple competing considerations in mind simultaneously. To recognize that the right answer depends on factors you can't quantify.
An algorithm can tell you that based on thousands of cases, you have a 73% chance of winning this motion. Judgment tells you whether winning this motion is worth the relationship cost with the judge you'll appear before ten more times this year.
AI is getting better at this kind of analysis. The gap between algorithmic and human judgment on pattern recognition tasks is narrowing. The things we claim AI "can't" do today, it might do tomorrow.
So the question isn't whether humans will always be better at nuanced judgment. It's whether we can develop and demonstrate our judgment capabilities fast enough to stay ahead of improving algorithms.
Teaching Judgment in a Tech-Driven World
If judgment is this important, we need to figure out how to develop it systematically.
Associates used to develop pattern recognition through document review. They'd review thousands of contracts and gradually learn what mattered, what didn't, what partners focused on, what they ignored.
If AI does that work now, how do junior lawyers develop that pattern recognition?
I don't have proven answers. Nobody does yet. Some hypotheses worth testing:
Give associates real responsibility earlier on matters where stakes are manageable. Let them make judgment calls, defend their reasoning, be wrong, and learn from it. This is risky. Clients aren't paying for junior lawyers to learn through trial and error. But if we don't create opportunities to develop judgment, we won't develop lawyers who have it.
Push senior lawyers to be explicit about their judgment process. Don't just say "this provision doesn't matter." Explain why it doesn't matter, what factors you're weighing, and how you reached that conclusion. Most partners share conclusions but not reasoning. That doesn't teach judgment.
Create structured decision points where associates must evaluate what matters. What are the three most important issues in this contract? Why? What's the biggest risk the client faces? What are you not worried about and why?
Will this work? I don't know. We're in uncharted territory. Assuming judgment develops naturally without deliberate training is probably wrong.
The Speed Problem (And It's Complicated)
Good judgment requires time to think. Whether that's a problem depends entirely on where you work.
In-house, speed matters. When the business team needs an answer to close a deal, "let me think about this for three days" isn't always viable. In-house lawyers operate in the business's timeline, not the lawyer's preferred pace. Technology that handles routine work faster creates genuine value because it frees time for the judgment calls that can't be rushed.
Law firms operate under different economics. The billable hour rewards thoroughness, not speed. A partner who takes three days to think through a strategy bills more than one who reaches the same conclusion in three hours. Whether that serves the client is debatable. Whether it serves the firm's revenue is not.
This creates a real tension. Judgment often benefits from reflection time. But law firm economics don't always reward taking that time, while in-house pressures don't always allow it.
When AI handles routine work, what happens to that time? In-house counsel might genuinely use it for strategic thinking. Law firms might just fill it with more billable work.
I'm not saying law firms are deliberately inefficient. But let's be honest about the incentives. A firm that becomes dramatically more efficient through technology faces a revenue problem unless it can raise rates proportionally or attract more clients. Many firms solve this by maintaining roughly the same level of labor intensity even with better tools.
In-house doesn't face this problem. If technology makes in-house counsel more efficient, the company benefits directly through reduced outside counsel spend or better risk management. The incentives align with actual efficiency.
This matters for judgment because it affects how much time lawyers actually have for deliberative thinking. Technology creates the possibility of more time for judgment. Whether that possibility becomes reality depends on economic structure.
What Judgment Means for the Profession
As AI handles more of what lawyers traditionally did, judgment becomes increasingly important. But it's not the only thing clients pay for.
Clients hire lawyers for many reasons: regulatory requirements, risk management, insurance mandates, court rules, credentialing, fiduciary duties, and yes, judgment. Even if AI develops better judgment than humans on some dimensions, lawyers won't become obsolete. But our role will change.
The question is whether we can adapt. Whether we can:
1) Develop judgment systematically rather than hoping it emerges from experience.
2) Demonstrate value in ways clients recognize and will pay for.
3) Build economic models that reward judgment appropriately.
4) Train lawyers whose judgment capabilities exceed what algorithms can provide.
5) Remain honest about our limitations while confident about our capabilities.
This means being clear about what judgment actually is: not vague wisdom, but specific capabilities like filtering signal from noise, asking the right questions, knowing when to override algorithms, navigating nuance, and integrating contextual information.
It means acknowledging that judgment requires deep foundational knowledge.
And it means accepting that some of what we've called judgment might actually have been pattern recognition that AI will eventually do better. The judgment that matters is the kind that requires contextual understanding, relationship awareness, and integration of information that doesn't fit into data fields.
The Reality Check
In a world where everyone has access to information, where AI can analyze data faster than any human, where legal knowledge is increasingly accessible, judgment becomes more important.
But claiming judgment justifies our fees isn't enough. We have to deliver it. Develop it. Demonstrate it.
That requires being honest about what judgment is and isn't. About what we know and don't know. About when we're exercising genuine expertise-based judgment and when we're guessing confidently.
It means creating ways to develop judgment in lawyers learning the profession alongside AI. It means building practice models that value judgment enough to give it time and space. It means being humble enough to recognize when algorithms know better and confident enough to override them when they don't.
Technology isn't replacing judgment. It's making judgment more important and more visible. The question isn't whether judgment matters. It's whether we can develop, demonstrate, and deliver it at the level clients need and will pay for.
I think we can. But only if we're honest about the challenge we're facing.

