
Evidence-Linked Claim Register for Deal Diligence Teams

Status: exhausted [S] · filter score 9.0/15 · spread ±5.0 · signals: 2 independent
What is this?
A pre-memo diligence QA system for boutique M&A and investment advisors that works from the evidence set, not memo text alone. The team uploads selected diligence materials—CIM, QoE, key contracts, management presentations, customer concentration schedules, and model outputs—and AE builds a structured claim register where each proposed memo assertion must be linked to source excerpts, document provenance, date freshness, and support state. Before a memo goes out, the system checks for unsupported promotions, premise-conclusion severing, concession laundering, stale evidence, and claims whose confidence exceeds the cited material. Output is not generic writing advice; it is a release gate showing which claims are promotable, must be softened, need additional sourcing, or should be killed. The feedback loop comes from later objective outcomes: confirmatory diligence findings, discovered exceptions, retrades, and post-close misses, which update claim patterns over time. This is a diligence workflow product with portable intelligence and explicit behavioral contracts, not a chatbot or generic RAG layer.
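The claim-register mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not the product's actual schema: the field names (`excerpt`, `evidence_date`, `support_strength`), the 0-3 confidence scale, and the 365-day freshness window are all assumptions made for the example. The automated gate below covers only three of the checks (missing sourcing, stale evidence, confidence exceeding the cited material); killing a claim is left to human reviewers.

```python
# Minimal sketch of one claim-register entry and its release gate.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Support(Enum):
    PROMOTABLE = "promotable"          # claim may go into the memo as-is
    SOFTEN = "soften"                  # confidence exceeds cited material
    NEEDS_SOURCING = "needs_sourcing"  # missing or stale evidence
    KILL = "kill"                      # assigned by reviewers, not this gate

@dataclass
class Claim:
    assertion: str        # proposed memo assertion
    source_doc: str       # provenance, e.g. "QoE", "CIM", a contract
    excerpt: str          # source excerpt linked to the assertion
    evidence_date: date   # freshness of the underlying evidence
    confidence: int       # 0-3, author's asserted confidence
    support_strength: int # 0-3, strength of the cited material

def gate(claim: Claim, as_of: date, max_age_days: int = 365) -> Support:
    """Release-gate check for one claim before the memo goes out."""
    if not claim.excerpt:
        return Support.NEEDS_SOURCING
    if (as_of - claim.evidence_date).days > max_age_days:
        return Support.NEEDS_SOURCING  # stale evidence
    if claim.confidence > claim.support_strength:
        return Support.SOFTEN          # unsupported promotion
    return Support.PROMOTABLE
```

Run over every proposed assertion, this yields exactly the release gate the text describes: a per-claim verdict rather than generic writing advice.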
Why did we consider it?
Best case: this is a narrow, high-value diligence control product that converts evidence-review AI into memo-risk prevention, with objective feedback loops and a credible path to boutique advisory ARR.
What breaks?
  • Fatal AE mismatch: M&A outcomes (retrades, post-close misses) take months or years, completely breaking the AE's strict <24h feedback loop requirement.
  • Insurmountable InfoSec barrier: Boutique M&A firms handling MNPI will not upload unredacted QoEs and CIMs to a solo, part-time developer's infrastructure.
  • Misaligned value proposition: Real-world evidence shows diligence friction stems from multi-format reporting (IC vs. lender decks), not just claim provenance.
Fatal objection: This dies because the learning loop that is supposed to make it uniquely valuable is unlikely to exist in practice, leaving a replicable feature inside existing diligence platforms.
What did we learn?
Killed: move_cap_reached.

Filter scores

Five axes, each scored 0-3. Three independent runs by different model perspectives. Median shown.

Axis | What it measures
data moat | Does this product accumulate proprietary data that compounds?
10x model test | Does a better model make this more valuable, or redundant?
fast feedback loops | Can outputs be graded against reality in <30 days?
solo founder feasible | Can a solo operator build and run this without a team?
AI providers can't eat it | Do hyperscalers have structural reasons NOT to build this?
Composite median: 9.0 / 15. Graduation threshold: 9.0. IQR across runs: 5.0.
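The aggregation above can be reproduced in a short sketch: five axes scored 0-3 give a 0-15 composite per run, with the median and IQR taken across the three runs. The per-run scores below are illustrative values chosen only so the aggregate matches the reported median (9.0/15) and IQR (5.0); they are not the actual run data.

```python
# Hedged sketch of the filter-score aggregation; per-run scores are invented.
from statistics import median, quantiles

runs = [
    {"data moat": 1, "10x model test": 1, "fast feedback loops": 1,
     "solo founder feasible": 2, "AI providers can't eat it": 1},  # composite 6
    {"data moat": 2, "10x model test": 2, "fast feedback loops": 2,
     "solo founder feasible": 2, "AI providers can't eat it": 1},  # composite 9
    {"data moat": 3, "10x model test": 2, "fast feedback loops": 2,
     "solo founder feasible": 2, "AI providers can't eat it": 2},  # composite 11
]

composites = [sum(run.values()) for run in runs]  # one 0-15 total per run
composite_median = median(composites)             # reported as 9.0 / 15
q1, _, q3 = quantiles(composites, n=4)            # quartiles across the runs
iqr = q3 - q1                                     # reported spread: 5.0
```

Note that with only three runs, `statistics.quantiles` (exclusive method) puts the quartiles at the extreme values, so the IQR equals the full range of the run composites.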

Evidence

Signal B — Competitor with documented gap

Existing VDRs such as DealRoom, Datasite, Intralinks, and Ansarada provide document storage, Q&A, checklists, and AI for risk insights and gap detection. None, however, offers a structured claim register linking memo assertions to evidence excerpts, provenance, and freshness, or automated checks for unsupported claims, stale evidence, and overconfident assertions.

Signal D — Demand proxy

Forum discussions on Reddit/HN highlight M&A diligence pains: manual processes, evidence vs. policy gaps, obsolescence in IT/cyber diligence, "deal ready" financials issues, and post-close surprises from missed diligence items.
Sources:
  • https://www.reddit.com/r/cybersecurity/comments/1qiy3p2/cybersecurity_due_diligence_for_acquisition
  • https://www.reddit.com/r/cybersecurity/comments/1ogrqh9/im_a_security_professional_who_worked_on_many
  • https://www.reddit.com/r/business/comments/1mk87ww/why_does_ma_still_feel_like_a_black_box_unless
  • https://www.reddit.com/r/Acc…

Evaluation history

When | Stage | Phase
2026-04-21 22:20 | evidence_search | ranked
2026-04-21 22:00 | evidence_search | ranked
2026-04-21 21:40 | evidence_search | ranked
2026-04-21 21:20 | evidence_search | ranked
2026-04-21 21:00 | evidence_search | ranked
2026-04-21 20:40 | evidence_search | ranked
2026-04-21 20:20 | evidence_search | ranked
2026-04-21 20:00 | evidence_search | ranked
2026-04-21 19:30 | evidence_search | ranked
2026-04-21 19:00 | evidence_search | ranked
2026-04-21 18:40 | evidence_search | ranked
2026-04-21 18:20 | evidence_search | ranked
2026-04-21 18:00 | evidence_search | ranked
2026-04-21 17:40 | evidence_search | ranked
2026-04-21 17:20 | evidence_search | ranked
2026-04-21 17:00 | evidence_search | ranked
2026-04-21 16:40 | evidence_search | ranked
2026-04-21 16:20 | evidence_search | ranked
2026-04-21 16:00 | evidence_search | ranked
2026-04-21 15:40 | evidence_search | ranked
2026-04-21 15:20 | evidence_search | ranked
2026-04-21 15:00 | evidence_search | ranked
2026-04-21 14:40 | evidence_search | ranked
2026-04-21 14:20 | evidence_search | ranked
2026-04-21 14:00 | evidence_search | ranked
2026-04-21 13:40 | evidence_search | ranked
2026-04-21 13:20 | evidence_search | ranked
2026-04-21 13:00 | evidence_search | ranked
2026-04-21 12:40 | evidence_search | ranked
2026-04-21 12:20 | evidence_search | ranked
2026-04-21 12:00 | evidence_search | ranked
2026-04-21 11:40 | evidence_search | ranked
2026-04-21 11:10 | evidence_search | ranked
2026-04-21 10:40 | evidence_search | ranked
2026-04-21 10:00 | evidence_search | ranked
2026-04-21 09:20 | evidence_search | ranked
2026-04-21 08:50 | evidence_search | ranked
2026-04-21 08:20 | evidence_search | ranked
2026-04-21 07:40 | evidence_search | ranked
2026-04-20 00:30 | fatal_objection | ranked
2026-04-20 00:20 | fatal_objection | ranked
2026-04-19 19:50 | filter_score | scored
2026-04-19 19:40 | filter_score | scored
2026-04-19 19:30 | filter_score | scored
2026-04-19 19:20 | evidence_search | evidence_hunt
2026-04-19 19:10 | evidence_search | argument
2026-04-19 19:00 | audience_simulation | argument
2026-04-19 18:50 | red_team_kill | argument
2026-04-19 18:40 | steelman | argument
2026-04-19 18:30 | genesis | argument