# Bid Readiness QA for Public-Sector and Research Proposal Teams
**Stage:** graduated [A] · **Filter:** 10.5/15 (spread ±0.5) · **Signals:** 2 independent
## What is this?
AE becomes an upstream bid-readiness system for specialist proposal teams, not a last-minute grant gate. The product is used days or weeks before submission to pressure-test a proposal architecture pack: problem statement, intervention logic, evidence base, delivery plan, milestones, risks, budget narrative, and explicit criterion mapping. Instead of claiming to predict funding decisions, it produces a structured red-team audit using AE's six failure patterns plus scored constraint checks and promotion/demotion rules for claims.

The fast feedback loop comes from objective internal resolution cycles: unresolved evidence gaps, broken criterion coverage, unsupported outcomes, timeline inconsistencies, and risk sections that fail mitigation standards can all be graded and re-graded within 24 hours. Over time, teams build a portable library of winning argument structures and recurring failure modes across bids. This is a QA and governance product for proposal-development discipline, not a win-rate oracle tied to slow, political grant outcomes.
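As a sketch of what a promotion/demotion rule for claims could look like in an internal resolution cycle, here is a minimal Python illustration. The rule name, thresholds, and statuses are assumptions for illustration only, not AE's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One auditable claim in a proposal architecture pack (hypothetical model)."""
    text: str
    evidence_refs: list = field(default_factory=list)   # supporting sources
    criterion_ids: list = field(default_factory=list)   # mapped scoring criteria
    status: str = "unverified"                          # unverified -> supported | demoted

def regrade(claim: Claim, min_evidence: int = 2) -> Claim:
    """Promote a claim when it has enough independent evidence and maps to
    at least one scoring criterion; demote it otherwise. Thresholds are
    illustrative assumptions."""
    if len(claim.evidence_refs) >= min_evidence and claim.criterion_ids:
        claim.status = "supported"
    else:
        claim.status = "demoted"
    return claim

claim = Claim("Intervention reduces wait times by 30%",
              evidence_refs=["pilot-report-2024"], criterion_ids=["C2"])
print(regrade(claim).status)  # -> demoted (only one evidence source)
```

Because the rule is deterministic, a claim can be re-graded within hours of new evidence being attached, which is what makes the 24-hour resolution cycle plausible.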
## Why did we consider it?
AE has a credible wedge as an upstream bid-readiness QA system because it formalizes a proven pre-submission review process into fast, objective, repeatable governance for high-value public-sector and research proposals.
## What breaks?
- Abandonment of Objective Reality: Shifting from reality-graded predictions to 'internal resolution cycles' reduces AE to a commodity LLM-as-a-judge document checker, losing its core competitive moat.
- Inability to Simulate Reviewer Subjectivity: Real-world grant rejections (per Killen and Granted AI) hinge on subjective panel dynamics, unstated agency priorities, and PI credibility, which internal QA cannot accurately grade.
- Lethal Go-To-Market Mismatch: Selling enterprise 'governance' software to public-sector and university procurement departments involves 9-18 month sales cycles, making the £100-300K ARR target impossible for a part-time solo founder.
## What did we learn?
Engine verdict: ESCALATED (MUST_READ). The council could not converge after 3 rounds; a human decision is required.
## Filter scores
Five axes, each scored 0-3. Three independent runs by different model perspectives. Median shown.
| Axis | What it measures |
|---|---|
| data moat | Does this product accumulate proprietary data that compounds? |
| 10x model test | Does a better model make this more valuable, or redundant? |
| fast feedback loops | Can outputs be graded against reality in <30 days? |
| solo founder feasible | Can a solo operator build and run this without a team? |
| AI providers can't eat it | Do hyperscalers have structural reasons NOT to build this? |
Composite median: 10.5 / 15. Graduation threshold: 9.0. IQR across runs: 0.5.
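The median arithmetic above can be sketched in a few lines of Python. The per-run scores below are invented for illustration and do not reproduce the actual 10.5/15 result:

```python
from statistics import median

# Hypothetical per-run scores: three runs, five axes, each axis 0-3.
# These values are illustrative assumptions, not the real run data.
runs = [
    [2, 2, 3, 2, 2],  # run 1
    [2, 2, 2, 2, 1],  # run 2
    [3, 2, 3, 2, 2],  # run 3
]

per_axis_median = [median(axis) for axis in zip(*runs)]   # median per axis
composite_median = median(sum(r) for r in runs)           # median of run totals

print(per_axis_median)   # -> [2, 2, 3, 2, 2]
print(composite_median)  # -> 11
```

Taking the median across independent runs damps single-run outliers, which is why the report shows spread (IQR) alongside the composite.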
## Evidence
### Signal A — Primary source
Compliance Auditing & Quality Assurance Services Framework
### Signal D — Demand proxy
{"summary":"There are adjacent demand proxies showing activity around proposal/RFP analysis and grant-writing tooling, including an RFP auditing discussion on Reddit and small open-source proposal-analysis tools on GitHub.","sources":["https://www.reddit.com/r/Rag/comments/1rmcpcg/how_do_i_make_retrieval_robust_across_different.json","https://github.com/FritscheLab/grant-language-preflight","https://github.com/aadrikasingh/AI-Powered-RFP-Analyzer"]}
## Evaluation history
| When | Stage | Phase |
|---|---|---|
| 2026-04-19 15:42 | deep_council_verdict | graduated |
| 2026-04-19 15:19 | deep_claude_take | graduated |
| 2026-04-19 15:17 | deep_90day_plan | graduated |
| 2026-04-19 15:07 | deep_risk | graduated |
| 2026-04-19 14:58 | deep_distribution | graduated |
| 2026-04-19 14:49 | deep_pricing | graduated |
| 2026-04-19 14:39 | deep_moat | graduated |
| 2026-04-19 14:33 | deep_buyer_sim | graduated |
| 2026-04-19 14:26 | deep_icp | graduated |
| 2026-04-19 14:16 | deep_competitor | graduated |
| 2026-04-19 14:07 | deep_market_reality | graduated |
| 2026-04-19 13:50 | filter_score | scored |
| 2026-04-19 13:40 | filter_score | scored |
| 2026-04-19 13:30 | filter_score | scored |
| 2026-04-19 13:20 | evidence_search | argument |
| 2026-04-19 13:10 | audience_simulation | argument |
| 2026-04-19 13:00 | red_team_kill | argument |
| 2026-04-19 12:50 | steelman | argument |
| 2026-04-19 12:40 | genesis | argument |