Between 2023 and 2025, large law firms deployed enterprise AI at a pace that outran their governance instincts. The procurement logic was sound enough: sign an agreement with Harvey, Luminance, or CoCounsel; complete the vendor's onboarding checklist; designate an internal AI Champion; point to the admin dashboard when anyone asked about oversight. It looked like governance. It produced reports. It had user-level logs. It felt, in the moment, like a responsible institutional response to a fast-moving technology.

It was not governance. It was the appearance of governance, built on infrastructure the firm does not own, cannot export on demand, and will lose access to the moment it changes vendors or the vendor changes its business model. The ABA is about to make that distinction disciplinary, not merely academic.

What the Draft Rule Language Actually Demands

The anticipated Model Rule on AI supervision builds directly on Rules 5.1 and 5.3 and on the foundation laid by ABA Formal Opinion 512 in 2024, which addressed competence and supervisory obligations in the context of generative AI. The forthcoming rule is expected to go further: it will likely formalize that a supervising attorney must be able to affirmatively demonstrate that AI-assisted work product was reviewed, that the basis for the AI's output was understood, and that the firm maintained records sufficient to reconstruct supervisory decisions after the fact.

The phrase appearing in circulating draft frameworks deserves close reading: "reasonable measures to ensure the technology's outputs are explainable, auditable, and subject to the firm's direct oversight." That word, direct, is carrying significant legal weight. It is not asking whether the firm has access to a vendor's dashboard. It is asking whether the firm independently controls the documentation of its own supervisory process.

State bar guidance has been moving steadily in this direction. California's 2023 guidance placed supervisory responsibility squarely on the attorney and firm. Florida's 2024 guidance reinforced that AI tool selection does not transfer professional obligation. New York's guidance through 2024 and 2025 has progressively tightened the locus of accountability onto the firm rather than the technology provider. The ABA rule, when it finalizes, will not be a departure from this trajectory. It will be its consolidation.

The architectural implication is direct: a vendor's internal logging system, even a sophisticated and well-maintained one, does not satisfy "direct oversight" if the firm cannot independently access, query, export, or preserve those logs outside the vendor relationship. The rule does not care how good Harvey's dashboard is. It cares whether the firm owns the record.

The Operational Log Is Not a Supervisory Audit Trail

This is the distinction that ethics partners need to carry into their next conversation with their CTO, and it is one that procurement processes systematically failed to draw.

An operational log records that a query was made and an output was returned. It may include timestamps, user IDs, matter tags, and token counts. Vendor dashboards are designed to produce exactly this kind of data, because it serves vendor-side purposes: billing reconciliation, usage analytics, product improvement, and customer success reporting. These are legitimate functions. They are simply not the function of a supervisory audit trail.

A supervisory audit trail records something categorically different: that a qualified attorney reviewed a specific output, assessed its basis, identified its limitations, and made an affirmative professional judgment about its use in client matters. No vendor builds this by default, because it is not their responsibility to build it. It is the firm's responsibility, and firms systematically offloaded it by assuming that the vendor's operational record would serve both purposes.
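To make the distinction concrete, here is a minimal sketch of what a firm-owned supervisory record might capture beyond the operational fields a vendor log holds. All field names and values are illustrative, not drawn from any vendor schema or bar requirement:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SupervisoryRecord:
    """Firm-owned record of an attorney's review of an AI output.

    Field names are hypothetical; the point is that each field captures
    professional judgment, not usage telemetry.
    """
    matter_id: str            # the firm's own matter number, not the vendor's
    output_ref: str           # pointer to the AI output under review
    reviewing_attorney: str   # the qualified attorney of record
    basis_assessment: str     # why the output's reasoning was judged sound
    limitations_noted: str    # known gaps or risks in the output
    disposition: str          # e.g. "adopted", "revised", "rejected"
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = SupervisoryRecord(
    matter_id="2024-0117",
    output_ref="dms://briefs/msj-draft-v3",
    reviewing_attorney="J. Alvarez",
    basis_assessment="Cited authority independently verified; reasoning "
                     "tracks controlling circuit precedent.",
    limitations_noted="Draft did not address the opposing waiver argument.",
    disposition="revised",
)
```

A vendor log would hold the query, the output, and a timestamp; every other field above exists only if the firm creates it.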

Consider the specific scenario that ethics partners are already modeling. A partner uses Harvey to draft a summary judgment argument. Harvey returns a confident output with accurate case citations. The partner reviews the filing, finds the citations correct, and submits. Six months later, a disciplinary inquiry arrives: what supervisory process ensured the legal reasoning was sound, not merely that the citations existed? The vendor log shows the query. It shows the output. It shows the submission timestamp. It cannot show, because it was never designed to show, the attorney's supervisory reasoning process. Under the forthcoming rule, that documentation gap is not a minor administrative deficiency. It is a potential Rule violation.

The Contract Clause That Will Haunt Managing Partners

There is a second architectural problem embedded in the enterprise agreements most firms signed in 2023 and 2024, and it is receiving less attention than it deserves.

Standard vendor agreements for Harvey, Luminance, CoCounsel, and comparable platforms typically treat log data, usage metadata, and model interaction histories as vendor-owned or vendor-controlled assets. This is not unusual SaaS contract language; it is the industry norm. The problem is that law firms have professional retention obligations that extend well beyond the typical vendor contract lifecycle. For disciplinary purposes, depending on jurisdiction, the relevant record may need to be producible six to ten years after the matter closed.

The scenario that should be keeping managing partners awake is not abstract. A disciplinary complaint is filed in 2027 referencing AI-assisted work product produced in 2024. The firm's Harvey contract lapsed in 2026 when it moved to a different platform. The audit trail, such as it is, lives in Harvey's system. Harvey's legal team is professional but not obligated to reconstruct records for a former customer's bar proceeding on any timeline the firm controls. The firm cannot demonstrate supervisory compliance for work it billed several million dollars to produce.

The analogy to e-discovery sanctions cases is instructive. Firms have lost significant spoliation battles because litigation hold obligations conflicted with SaaS vendor data retention schedules: the firm believed the data was preserved, but the vendor's contractual retention window had already closed. The structural problem is identical here. Professional obligation schedules and vendor data retention schedules are written by different parties with different interests, and firms that did not resolve that conflict at the contract stage will discover it during a disciplinary inquiry.

Any firm currently operating under an enterprise AI agreement signed before 2026 should be reviewing that agreement now for three specific provisions: who owns the log data, what happens to that data if the contract terminates, and whether the firm can export its full supervisory record in a usable format on demand. Many firms will find that the answer to at least one of these questions is unsatisfactory.

Explainability Is Not a Feature You Can License

The explainability requirement in the anticipated rule is its sharpest edge, and it is the one most likely to catch technically sophisticated firms off guard, because they have been solving the wrong version of the problem.

Vendors have invested heavily in explainability proxies: confidence scores, citation links, prompt transparency features, and reasoning summaries. These are genuinely useful. They help attorneys assess outputs more quickly and identify potential errors. But they are proxies for explainability infrastructure, not the infrastructure itself. The distinction matters legally.

Explainability under the forthcoming rule means an attorney can articulate, in sufficient terms to defend a professional judgment, why a specific output was produced and why the attorney's supervisory review was adequate given that output's basis and limitations. The foundation model underlying Harvey or CoCounsel does not produce attorney-interpretable reasoning chains natively. What the vendor UI presents as an explanation is a designed representation of a probabilistic process, not a transparent account of it.

The firm-level explainability requirement therefore demands something no vendor can supply as a licensing feature: the prompting standards the firm used, the output review protocols the matter team followed, and the documentation connecting AI output to attorney judgment at the matter level. If those elements exist only inside a vendor's interface, they are not the firm's to produce in a disciplinary proceeding. The firm must own this layer, version it, retain it, and be able to export it independently.
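One piece of that firm-owned layer, the versioned prompt library, can be sketched simply. The function and naming below are assumptions for illustration; the design point is that each prompt version is content-addressed, so the exact prompting standard in force on a matter can be reproduced years later:

```python
import hashlib
from datetime import date

def register_prompt(library: dict, name: str, text: str, use_case: str) -> str:
    """Store a prompt under a content-hash version identifier so the
    exact text used on a matter can be produced later on demand."""
    version = hashlib.sha256(text.encode()).hexdigest()[:12]
    library.setdefault(name, []).append({
        "version": version,
        "use_case": use_case,
        "text": text,
        "registered": date.today().isoformat(),
    })
    return version

library: dict = {}
v = register_prompt(
    library,
    name="msj-drafting",
    text="Summarize the strongest grounds for summary judgment...",
    use_case="draft review",
)
# The matter file records only the prompt name and version string; the
# library itself is retained like any other firm intellectual property.
```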

What the Architecture of Compliant Firms Actually Looks Like

A small cohort of firms built this correctly from the beginning. They are worth examining not as exceptional cases but as templates, because what they built is now simply what the rule requires.

The common characteristics are consistent. These firms maintain matter-level AI use logging in their own document management or practice management systems, independent of any vendor platform. They operate standardized, firm-owned prompt libraries that are versioned and retained like any other firm intellectual property. They require output review checklists that generate attorney-signed supervisory records attached to the matter file. They run scheduled export routines from vendor systems into firm-controlled repositories governed by the firm's own retention schedules rather than the vendor's.
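The export routine in that list is the simplest piece to build and the one most often skipped. A minimal sketch, assuming a hypothetical vendor export call and firm archive path, neither drawn from any real vendor API:

```python
import json
import pathlib
from datetime import datetime, timezone

def export_vendor_logs(fetch_logs, repo_dir: str, retention_years: int = 10):
    """Pull vendor usage logs into a firm-controlled repository stamped
    with the firm's own retention period, not the vendor's."""
    repo = pathlib.Path(repo_dir)
    repo.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    payload = {
        "exported_at": stamp,
        "retention_years": retention_years,  # firm schedule governs
        "entries": fetch_logs(),
    }
    out = repo / f"ai-usage-{stamp}.json"
    out.write_text(json.dumps(payload, indent=2))
    return out

# Stub standing in for a real vendor export API call.
def fake_fetch():
    return [{"user": "jalvarez", "matter": "2024-0117", "tool": "drafting"}]

path = export_vendor_logs(fake_fetch, "firm-ai-archive")
```

Run on a schedule, this keeps the supervisory record in firm infrastructure even after the vendor relationship ends.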

The result is a governance layer that is vendor-agnostic by design. When the firm migrates from one AI platform to another, the supervisory record persists in firm-controlled infrastructure. When a disciplinary inquiry arrives, the firm produces its own documentation through its own systems. It does not file a customer support request with a former vendor and wait.

This architecture is not technically complex. It does not require building proprietary AI infrastructure. It requires recognizing that the governance function belongs to the firm and investing accordingly: in DMS integrations, in protocol documentation, in retention policy alignment, and in the organizational discipline to treat AI supervisory records as professional records rather than vendor logs.

The Window Is Narrower Than It Appears

The rule is expected to finalize in mid-2026. State adoption cycles will begin immediately in jurisdictions that move quickly: California, New York, and Illinois are the most likely first movers, and all three have substantial AmLaw 200 firm presence. The window between now and the first disciplinary actions under the new framework is real, but it is not generous.

The priority actions are specific. First, audit every active vendor agreement for data ownership provisions, log portability clauses, and post-termination data access rights. Second, establish firm-controlled supervisory documentation protocols that function independently of any single vendor relationship, before the rule finalizes and certainly before any contract lapses. Third, define operationally what explainability means at the matter level for each AI use case the firm has deployed: draft review, due diligence, research, contract analysis. Each use case has a different explainability profile and requires a different documentation standard. Fourth, construct retention schedules for AI supervisory records that satisfy the most demanding jurisdiction in which the firm operates, and verify that vendor data retention schedules are either aligned with or superseded by those firm schedules.
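The fourth action item reduces to a comparison any firm can run today: each vendor's retention window against the longest professional retention obligation among the firm's jurisdictions. The figures below are illustrative placeholders, not actual vendor terms or bar requirements:

```python
def retention_gaps(vendor_retention_years: dict, jurisdiction_years: dict):
    """Return, per vendor, the years of exposure between the vendor's
    retention window closing and the firm's obligation expiring."""
    required = max(jurisdiction_years.values())
    return {
        vendor: required - kept
        for vendor, kept in vendor_retention_years.items()
        if kept < required
    }

gaps = retention_gaps(
    vendor_retention_years={"PlatformA": 3, "PlatformB": 7, "PlatformC": 12},
    jurisdiction_years={"CA": 10, "NY": 7, "IL": 7},
)
# Any vendor appearing in `gaps` needs either a contract amendment or a
# firm-side export routine covering the shortfall.
```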

Firms that complete this work in the next two quarters will be governing proactively. Firms that wait for the rule to finalize, then wait for state adoption, then wait for the first complaint to test the framework, will be doing crisis governance, and the distinction between the two is not merely operational. It is, increasingly, the difference between a firm that can demonstrate supervisory compliance and a firm that cannot.

The ABA is not changing what supervision means. It is requiring firms to prove they have been doing it. The firms that handed that proof to their vendors in 2023 have work to do.