
Outside Counsel Citation Gate for Fractional GCs

Status: graduated [TRIANGULATED]. Filter score: 10.0/15, spread ±1.0. Signals: 2 independent.
What is this?
A pre-send verification gate that fractional General Counsel run on the citations, statutory references, and authority quotations in AI-leveraged work product they receive from outside law firms, before they sign off, forward it to a client, or rely on it. The fractional GC pastes only claim-tuples (citation + asserted proposition + jurisdiction), never client-confidential matter content. AE's adversarial multi-model debate stress-tests each tuple against public legal-authority sources (Bailii, EUR-Lex, Westlaw exports, SEC filings, court dockets) and returns a grade per claim plus a per-firm reliability ledger that accumulates across clients.

Why AE: adversarial council, scored retrieval, and structured constraint language are precisely the machinery for grading tightly scoped factual claims; objective ground truth lives in public databases the GC already subscribes to; and resolution takes minutes, not weeks. The fractional GC keeps the audit trail and a growing scorecard of which outside firms (and which AI tools they use) produce reliable authority.
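The claim-tuple scoping described above can be sketched as a minimal data structure. The field names and grade labels here are illustrative assumptions, not AE's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Grade(Enum):
    # Hypothetical grade labels; AE's real rubric is not specified in this report.
    VERIFIED = "verified"            # citation exists and supports the proposition
    MISATTRIBUTED = "misattributed"  # citation exists but stands for something else
    FABRICATED = "fabricated"        # citation does not exist in any public source

@dataclass(frozen=True)
class ClaimTuple:
    """What the fractional GC pastes: the claim only, never client matter content."""
    citation: str     # e.g. "[2019] UKSC 41"
    proposition: str  # what the memo asserts this authority supports
    jurisdiction: str # e.g. "UK"

claim = ClaimTuple(
    citation="[2019] UKSC 41",
    proposition="Prorogation of Parliament is justiciable.",
    jurisdiction="UK",
)
print(claim.citation)  # → [2019] UKSC 41
```

The frozen dataclass keeps each tuple immutable, so a graded tuple can be stored in the audit trail exactly as submitted.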
Why did we consider it?
A confidentiality-safe, tuple-based citation verification gate that turns AE's adversarial graded debate into a compounding per-firm reliability ledger for the fractional GC's existing gatekeeping role.
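The compounding per-firm ledger could be as simple as accumulating per-claim verdicts by firm and deriving a reliability rate. A minimal sketch, assuming hypothetical grade labels ("verified", "misattributed", "fabricated"), not AE's actual schema:

```python
from collections import defaultdict

class ReliabilityLedger:
    """Accumulates citation-check verdicts per outside firm, across clients.

    Grade labels are assumed for illustration; AE's real rubric may differ.
    """

    def __init__(self):
        # firm name -> grade label -> count
        self._counts = defaultdict(lambda: defaultdict(int))

    def record(self, firm, grade):
        self._counts[firm][grade] += 1

    def reliability(self, firm):
        # Share of checked citations that fully verified; None if firm unseen.
        counts = self._counts[firm]
        total = sum(counts.values())
        return counts["verified"] / total if total else None

ledger = ReliabilityLedger()
for grade in ["verified", "verified", "misattributed", "verified"]:
    ledger.record("Firm A LLP", grade)
print(ledger.reliability("Firm A LLP"))  # → 0.75
```

Because the ledger keys on the firm rather than the client, the same fractional GC serving several clients keeps accumulating signal on each firm: the compounding effect the hypothesis depends on.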
What breaks?
  • Walled Garden API Block: Westlaw and LexisNexis strictly prohibit third-party AI scraping via personal credentials, cutting off AE's access to ground-truth case law.
  • Catastrophic UX Friction: Manually extracting claim-tuples from complex legal memos is a tedious, non-billable task that destroys the efficiency value proposition of a fractional GC.
  • Misaligned Market Incentives: GCs use Outside Counsel Guidelines (OCGs) to shift liability for errors back to the law firm; they won't pay to assume the risk of grading them.
What did we learn?
Engine verdict: GATHER_MORE_SIGNAL (WORTH_SKIMMING). Real seam, real buyer pain, but semantic-verification gap and Bailii-only data ceiling make this premature to build.

Filter scores

Five axes, each scored 0-3. Three independent runs by different model perspectives. Median shown.

Axis                        What it measures
data moat                   Does this product accumulate proprietary data that compounds?
10x model test              Does a better model make this more valuable, or redundant?
fast feedback loops         Can outputs be graded against reality in <30 days?
solo founder feasible       Can a solo operator build and run this without a team?
AI providers can't eat it   Do hyperscalers have structural reasons NOT to build this?
Composite median: 10.0 / 15. Graduation threshold: 9.0. IQR across runs: 1.0.
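The composite above can be reproduced mechanically: each of three runs scores all five axes 0-3, the per-axis median is taken across runs, and the medians are summed into a composite out of 15. The run scores below are invented for illustration; only the method follows the report:

```python
import statistics

AXES = ["data moat", "10x model test", "fast feedback loops",
        "solo founder feasible", "AI providers can't eat it"]

# Three independent runs by different model perspectives, each axis 0-3.
# These particular numbers are invented; only the procedure is from the report.
runs = [
    {"data moat": 2, "10x model test": 2, "fast feedback loops": 3,
     "solo founder feasible": 2, "AI providers can't eat it": 1},
    {"data moat": 2, "10x model test": 1, "fast feedback loops": 3,
     "solo founder feasible": 2, "AI providers can't eat it": 2},
    {"data moat": 3, "10x model test": 2, "fast feedback loops": 2,
     "solo founder feasible": 2, "AI providers can't eat it": 1},
]

# Per-axis median across the three runs, summed into the composite (max 5 * 3 = 15).
composite = sum(statistics.median(r[axis] for r in runs) for axis in AXES)

threshold = 9.0  # graduation threshold from the report
print(composite, composite >= threshold)  # → 10 True
```

Taking the per-axis median before summing damps any single run's outlier score; the report's IQR of 1.0 then describes how tightly the three runs agreed.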

Evidence

Signal B — Competitor with documented gap

GC AI markets itself as a 'secret superpower' for fractional GCs but operates as a public AI tool. Result [16] documents the core gap: 'Public AI for legal work? That's not innovation. That's a liability waiting to happen. Every client's matter lives in the same model.' No snippet from any result describes adversarial citation verification, claim-tuple grading against authoritative legal databases, or per-firm reliability scoring — the hypothesis's core value propositions are absent from existing tools surfaced in these results.

Signal D — Demand proxy

{"found":true,"summary":"Strong demand signals across news, LinkedIn, and Reddit: GCs are actively demanding transparency and accountability from outside counsel (result [6]); fractional GCs face specific AI-liability concerns around citation accuracy and data sovereignty (results [16], [22]); legal AI founders are fielding community questions about their tools' limitations (result [21]); and GCs at major tech companies are actively experimenting with AI for legal workflows (result [24]).","sources":["https://www.law.com/corpcounsel/2026/03/03/exasperated-gcs-finally-demand-transparency-value-…

Evaluation history

When              Stage                 Phase
2026-05-09 15:36  deep_council_verdict  graduated
2026-05-09 15:35  deep_claude_take      graduated
2026-05-09 15:33  deep_90day_plan       graduated
2026-05-09 15:31  deep_risk             graduated
2026-05-09 15:29  deep_distribution     graduated
2026-05-09 15:27  deep_pricing          graduated
2026-05-09 15:26  deep_moat             graduated
2026-05-09 15:24  deep_buyer_sim        graduated
2026-05-09 15:22  deep_icp              graduated
2026-05-09 15:21  deep_competitor       graduated
2026-05-09 15:19  deep_market_reality   graduated
2026-05-09 15:12  filter_score          scored
2026-05-09 15:06  filter_score          scored
2026-05-09 15:00  filter_score          scored
2026-05-09 14:55  evidence_search       argument
2026-05-09 14:48  audience_simulation   argument
2026-05-09 14:42  red_team_kill         argument
2026-05-09 14:36  steelman              argument
2026-05-09 14:26  genesis               argument