The Deployment Win That Wasn't

Congratulations. Your firm ran the pilot, secured executive sponsorship, navigated procurement, and rolled out an enterprise legal AI platform. Usage numbers looked strong in the first quarter. The innovation committee declared success. And then, somewhere between month nine and month fourteen, something quiet happened: the dashboard still showed healthy query volume, but the partners stopped asking substantive questions, the associates kept using the tool to produce work that got redone anyway, and IT started fielding renewal calls that felt more like exit interviews.

This is the Second Adoption Curve problem. It is now the defining challenge in legal AI, and almost no one in the vendor community is talking about it honestly, because most vendors are still selling the first deployment.

The firms experiencing this stall are not technology laggards. Many of them ran rigorous pilots. Several have dedicated innovation directors who fought hard for the budget. The problem isn't that they bought the wrong tool. The problem is that they solved for adoption and never solved for architecture: the knowledge management architecture, the incentive architecture, and the workflow integration architecture that determine whether AI actually makes a firm smarter over time, or just makes individual tasks marginally faster.

These are different problems. And conflating them is costing firms real money and real competitive ground right now.

What the Dashboard Isn't Showing You

Here is the pattern that shows up consistently at firms hitting the 12-to-18-month wall: total query volume holds steady or grows modestly, so the headline metrics look acceptable. What the standard dashboard doesn't surface is who is querying and what decisions those queries are actually influencing.

At most plateaued firms, AI tool usage has concentrated heavily in roughly 20 percent of users, almost universally junior associates and knowledge management staff. These are not the users whose time savings move the ROI needle. A third-year associate drafting a contract summary faster is a real efficiency gain, but it is not a strategic return. The billing partners whose institutional knowledge, client relationships, and strategic judgment represent the firm's actual value creation? They've largely reverted to email, prior work product, and each other. This pattern echoes findings from McKinsey's 2023 research on generative AI adoption across professional services, which noted that initial productivity gains tend to concentrate among lower-tenure employees performing well-defined tasks, while senior professionals, whose judgment drives the most consequential decisions, remain largely outside the loop.

This concentration problem has a second dimension that is even more costly: what we might call AI shadow work. Associates are using AI tools to generate analysis and first drafts. Partners are receiving that work product, finding it insufficiently calibrated to client context or practice-specific nuance, and redoing substantial portions from scratch, often without flagging it, because flagging it would invite a conversation about the tool's value that nobody wants to have. The result is an organizational dynamic where AI is generating cost (associate time on AI-assisted drafts, plus partner time on revision) without generating the compounding return it was purchased to create. A 2024 study published in the Harvard Business Review on AI tool deployment in knowledge-intensive organizations described a structurally similar dynamic, which the authors termed "parallel processing loss," in which junior employees and senior experts work at cross-purposes on the same output, each unaware of the full cost the other is absorbing.

If your firm is measuring AI ROI primarily on time-saved-per-task metrics, you are measuring the part of the equation that flatters the investment and missing the part that explains why the investment isn't compounding.

The Knowledge Hoarding Problem Is Structural, Not Attitudinal

The instinct, when partners disengage from AI tools, is to reach for change management language: resistance to change, learning curve, generational dynamics. This diagnosis is both condescending to partners and analytically wrong. It misses something specific about how law firms are structured that makes knowledge externalization a rational act of self-diminishment, not a failure of open-mindedness.

In a law firm, a partner's institutional knowledge, including the client history, the deal structures that worked and the ones that didn't, the litigation strategies tailored to specific judges and jurisdictions, and the relationship context that determines how a negotiation will actually play out, is not just intellectual capital. It is job security. It is business development currency. It is the thing that makes that partner irreplaceable to clients and to the firm.

Investment banks and consulting firms have historically solved this problem through aggressive codification cultures reinforced by promotion and compensation structures that explicitly reward knowledge sharing. McKinsey's internal knowledge management systems are legendary because contributing to them has always been a visible, compensated behavior. The rainmaker model in law has operated on precisely the opposite logic: the partner who keeps the Rolodex close and the deal playbook closer is the partner who retains leverage. This dynamic is well documented in academic literature on professional service firms. Morten Hansen and Martine Haas, writing in the Academy of Management Journal, found that knowledge hoarding in partnership structures correlates directly with compensation models that reward individual client control over collective knowledge contribution, a finding that maps almost precisely onto contemporary law firm incentive design.

Asking that partner to annotate their deal experience into an AI knowledge base is, from their perspective, a perfectly rational act of competitive self-weakening. A mandate without incentive restructuring will not change that calculus. It will produce superficial compliance: some metadata contributed, some templates uploaded, while the actual tacit knowledge that would make the AI genuinely useful remains locked in partners' heads and in the work product they guard carefully.

The firms that have meaningfully cracked this, and there is a small but growing cohort, including several large regional firms that have gone further than their Am Law peers in restructuring contribution incentives, have done something specific: they have added knowledge contribution as a measurable positive multiplier in partner performance reviews. Not as a punitive compliance requirement. As a tracked, credited behavior with visible upside. Partners who actively validate AI outputs, annotate matter outcomes, and surface practice-specific nuance into the firm's knowledge base are accumulating a form of institutional credit that shows up in compensation conversations. The reframe is critical: the AI amplifies your expertise rather than replacing it. Your annotations make the system smarter in your image. Your knowledge compounds rather than depletes. Thomson Reuters Institute's 2024 report on generative AI in law firms corroborated this directionally, finding that firms reporting the highest AI satisfaction scores were disproportionately those that had tied AI engagement to formal performance frameworks rather than relying on voluntary adoption alone.

This is not a technology configuration change. It is a governance decision. And it is the intervention that no AI vendor can make on your behalf.

The Last-Mile Gap That Kills Compounding Returns

Even at firms where the incentive architecture is improving, there is a second structural problem that consistently erodes AI ROI: the gap between AI-generated insight and active matter workflow.

Current enterprise legal AI platforms are genuinely excellent at retrieval, summarization, and first-draft generation. The ROI gap lives downstream of those capabilities, in the connection between an AI output and the matter management systems, document management environments, and billing workflows where actual legal work happens. When an associate receives AI-generated contract analysis and then manually transfers relevant portions into iManage, re-tags it to the matter, and six months later cannot find it again because the tagging taxonomy is inconsistent across practice groups, the friction cost has substantially eroded the time saving. The task was faster. The institutional learning was zero. The Legal Technology Survey Report published annually by the American Bar Association has consistently found that document management fragmentation remains the single most commonly cited barrier to technology ROI across firm sizes, a finding that has persisted through multiple generations of legal tech deployment and shows no sign of resolving on its own.

Agentic AI capabilities, systems that can execute multi-step workflows autonomously rather than simply responding to queries, are the 2025-2026 frontier precisely because they begin to close this gap. A well-configured agentic layer can not only surface relevant precedents but push tagged, structured knowledge artifacts directly into matter records, flag them for partner review, and ensure that closed-matter learnings are captured in a retrievable form before the matter team disperses. This is where AI stops being a faster research tool and starts being a genuine knowledge infrastructure.

But, and this is the constraint that many firms are discovering the hard way, agentic capabilities are only as good as the underlying data architecture they operate on. Firms with fragmented DMS environments, inconsistent matter numbering, siloed practice group databases, and legacy document repositories full of unstructured, untagged work product simply cannot unlock second-order AI returns. The agent has nowhere clean to write. The knowledge it captures cannot be reliably retrieved. The compounding never starts. Gartner's research on enterprise AI readiness has made a comparable point in adjacent industries, estimating that organizations with low data maturity capture less than 30 percent of the value available from AI investments relative to those with structured, governed data environments, a gap that widens as AI capabilities become more sophisticated rather than narrowing.

This is why the firms pulling ahead in 2026 are often not the ones with the most sophisticated AI platforms. They are the ones that made unglamorous investments in data hygiene, matter tagging consistency, and DMS governance 18 to 24 months ago, investments that looked like overhead at the time and now look like competitive infrastructure.
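To make the data-hygiene point concrete, the kind of audit those firms ran first can be sketched in a few lines. This is a minimal, illustrative example, not a description of any particular DMS: the field names (`matter_id`, `tags`) and the sample records are hypothetical stand-ins for whatever your document management export actually provides.

```python
from collections import defaultdict

def tag_consistency_report(documents):
    """Group documents by matter and flag matters whose documents
    disagree on tagging -- a rough proxy for the taxonomy drift
    that leaves an agentic layer with nowhere clean to write."""
    tags_by_matter = defaultdict(list)
    for doc in documents:
        tags_by_matter[doc["matter_id"]].append(frozenset(doc["tags"]))

    report = {}
    for matter_id, tag_sets in tags_by_matter.items():
        distinct = set(tag_sets)
        report[matter_id] = {
            "documents": len(tag_sets),
            "distinct_tag_sets": len(distinct),
            "consistent": len(distinct) == 1,
        }
    return report

# Hypothetical DMS export: one matter tagged consistently, one not.
docs = [
    {"matter_id": "RE-1001", "tags": ["real-estate", "finance"]},
    {"matter_id": "RE-1001", "tags": ["real-estate", "finance"]},
    {"matter_id": "LIT-2002", "tags": ["litigation"]},
    {"matter_id": "LIT-2002", "tags": ["lit", "commercial"]},
]

report = tag_consistency_report(docs)
print(report["RE-1001"]["consistent"])   # True
print(report["LIT-2002"]["consistent"])  # False
```

Even a crude report like this surfaces the practice groups whose tagging conventions have diverged, which is where governance effort pays off before any agentic capability is switched on.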

Three Structural Interventions That Change the Trajectory

The firms that are successfully navigating the second adoption curve are not doing it through better training programs or more persistent nudge communications. They are making three specific structural interventions:

  • Incentive alignment before mandate: As described above, partner performance reviews that include knowledge contribution as a positive measurable factor, not a punitive checkbox, but a genuine credit line. The firms doing this most effectively have tied it to outcome-proximate language: partners who contribute to the knowledge base are building a form of institutional leverage that benefits them specifically, not just the firm abstractly. This approach aligns with research by Ethan Bernstein and colleagues at Harvard Business School on transparency and performance in organizational settings, which found that making individual contributions visible within a firm, rather than aggregating them anonymously, meaningfully increases sustained participation in collective knowledge systems.
  • Practice-group-level AI workflow owners: Not a central IT deployment team. Not a firmwide innovation director operating from headquarters. Practice-specific "AI workflow owners," typically senior associates or counsel, who sit inside individual practice groups and bridge the gap between platform capability and the specific work patterns of that practice. A real estate finance group and a complex litigation group do not use AI the same way. The firms treating AI deployment as a uniform infrastructure rollout are consistently underperforming the firms that have embedded practice-specific human translators who know both the tool and the work. This model mirrors what MIT Sloan Management Review has described as the "fusion strategy" for AI integration, in which dedicated human translators between technology capability and domain-specific practice consistently outperform both fully centralized and fully decentralized deployment approaches.
  • Matter-opening integration, not just matter-execution integration: The highest-leverage AI touchpoint in a matter lifecycle is not during execution; it is at opening. Firms that have wired AI-driven precedent surfacing, conflict and complexity flagging, and relevant knowledge retrieval into the matter-opening process are capturing value before the first billable hour is recorded. Firms that deploy AI only during execution are using it as a drafting assistant. Firms that deploy it at opening are using it as a strategic input. The distinction compounds significantly over a portfolio of matters. Wolters Kluwer's 2024 Future Ready Lawyer survey found that firms reporting the strongest perceived AI ROI were significantly more likely to report using AI during matter intake and scoping phases than during drafting and review alone, lending empirical weight to what has largely been an intuitive argument among legal operations practitioners.

The Metrics Conversation You Need to Have

If your current AI ROI reporting is built around time-saved-per-task, query volume, and active user counts, you are measuring the right things for a month-three readout and the wrong things for a year-two strategic assessment.

The metrics that actually predict whether your firm is building a durable knowledge advantage are more difficult to instrument, but not impossible:

  • Decision influence rate: How often does AI-surfaced knowledge demonstrably change what a lawyer does next? This requires qualitative feedback loops, brief structured prompts at the end of AI interactions, but it distinguishes between AI that is consulted and AI that is consequential. Erik Brynjolfsson's work on measuring AI productivity in knowledge work, including research conducted through the Stanford Digital Economy Lab, has persistently argued that task-level time savings systematically undercount AI's value when decisions improve in quality and that firms relying exclusively on time metrics will consistently underinvest in AI's highest-value applications.
  • Knowledge recirculation rate: What percentage of closed-matter insights are being captured in retrievable, structured form and subsequently surfaced on new matters? This is the compounding metric. A firm with a high recirculation rate is getting smarter with every matter close. A firm with a low rate is starting from scratch every time.
  • Partner-level engagement depth: Not logins. Not query counts. Whether AI outputs are present in client-facing discussions, pitch materials, and strategic matter planning conversations. This is admittedly hard to measure precisely, but asking partners directly, as part of performance conversations, whether they are using AI-surfaced knowledge in client interactions is a revealing data point that most firms are not collecting. The Thomson Reuters Institute's State of the Legal Market report for 2024 noted that client-facing AI integration remains the most underdeveloped dimension of legal AI deployment, despite being the dimension that clients themselves cite most frequently as a differentiator when evaluating outside counsel.
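Of the three metrics, the knowledge recirculation rate is the most straightforward to instrument from matter records. A minimal sketch follows, assuming a hypothetical record schema in which each closed matter notes whether its insights were captured in structured form and whether they were later surfaced on a new matter; the field names are illustrative, not drawn from any real system.

```python
def recirculation_rate(closed_matters):
    """Share of closed matters whose insights were both captured in
    retrievable, structured form and subsequently surfaced on a new
    matter -- the compounding metric."""
    if not closed_matters:
        return 0.0
    recirculated = sum(
        1 for m in closed_matters
        if m["captured_structured"] and m["surfaced_on_new_matter"]
    )
    return recirculated / len(closed_matters)

# Hypothetical closed-matter records for one quarter.
matters = [
    {"captured_structured": True,  "surfaced_on_new_matter": True},
    {"captured_structured": True,  "surfaced_on_new_matter": False},
    {"captured_structured": False, "surfaced_on_new_matter": False},
    {"captured_structured": True,  "surfaced_on_new_matter": True},
]

print(recirculation_rate(matters))  # 0.5
```

Tracked quarter over quarter, a rising rate indicates the firm is getting smarter with every matter close; a flat low rate means it is starting from scratch every time, whatever the headline usage numbers say.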

The Compounding Clock

The stakes of getting this right, or wrong, are not symmetric, and the asymmetry is growing.

Every matter that closes at a firm with strong knowledge recirculation architecture adds to a proprietary institutional asset: a structured, AI-accessible record of what the firm knows, how it has approached similar problems, and what outcomes resulted. That asset cannot be purchased off a shelf. It cannot be replicated by a competitor deploying the same platform next year. It is the product of governance decisions and data discipline decisions being made right now, compounding quietly. This is precisely the dynamic that Andrew McAfee and Erik Brynjolfsson described in The Second Machine Age when they argued that in AI-augmented competition, the returns to early institutional learning accumulate nonlinearly, meaning that firms that internalize AI into their operating knowledge early do not simply stay ahead of later adopters; they pull progressively further ahead as the base of structured institutional knowledge grows.

The firms that delay solving the second adoption curve are not standing still. They are falling further behind the cohort that is actively building this asset, matter by matter, annotation by annotation, practice group by practice group.

By late 2027, the distance between those two cohorts will be visible in client outcomes, in pitch credibility, and in the institutional knowledge depth that distinguishes genuinely sophisticated counsel from commodity legal services. The AI-native law firm is not going to be built by the firm that bought the most advanced tools. It is going to be built by the firm that figured out, in 2026, that the tool was never the hard part.

The hard part is the architecture. And the window to build it advantageously is narrower than most firms currently appreciate.