# Forecast QA for In-House Growth Teams
Status: graduated [A] · Filter: 10.5/15 (spread ±1.0) · Signals: 2 independent
## What is this?
An async decision-assurance service plus lightweight software for in-house growth and performance marketing teams that regularly propose channel forecasts, experiment bets, and budget-shift recommendations. Teams submit planned claims in structured form — for example, 'new creative theme will improve CTR 10-15% within 2 weeks' or 'landing-page simplification will reduce CAC this month' — and AE stress-tests them before budget is committed. The output is a decision memo, not a client narrative: which claims are grounded, which rely on hidden assumptions, which exhibit AE's six failure patterns, what evidence window is actually valid, and whether each claim should be promoted, demoted, or killed. Over time, AE grades forecast quality against real outcomes, building a reality-based record of which reasoning patterns and operators are trustworthy. The buyer is a VP Growth, Head of Performance, or marketing lead who needs fewer bad bets, cleaner postmortems, and a defensible planning process across paid media, creative, and landing-page experimentation.
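A structured claim submission of the kind described above might be sketched as follows. The field names and types here are illustrative assumptions, not AE's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured growth claim; field names are
# illustrative, not AE's actual submission schema.
@dataclass
class GrowthClaim:
    claim: str                  # e.g. "landing-page simplification will reduce CAC this month"
    metric: str                 # the metric the claim is about, e.g. "CTR" or "CAC"
    expected_change_pct: tuple  # (low, high) expected percentage change
    evidence_window_days: int   # window over which the outcome is graded
    assumptions: list = field(default_factory=list)  # hidden assumptions, made explicit

claim = GrowthClaim(
    claim="landing-page simplification will reduce CAC this month",
    metric="CAC",
    expected_change_pct=(-15, -5),
    evidence_window_days=30,
    assumptions=["traffic mix stays stable", "no major competitor launches"],
)
print(claim.metric)  # CAC
```

Forcing assumptions into an explicit list is what lets a later grading pass distinguish a wrong forecast from a violated assumption.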
## Why did we consider it?
AE has a credible wedge as the pre-commitment QA layer for growth forecasts: it improves the quality of decisions entering the plan, where existing AI forecasting tools largely do not operate.
## What breaks?
- Velocity mismatch: Growth teams rely on cheap, live A/B tests rather than theoretical pre-commitment debates; a 24-hour async memo is a bottleneck, not a feature.
- Attribution chaos: 'Objective reality-graded signal' is impossible in modern performance marketing where external variables (competitor spend, algorithmic shifts, tracking limitations) obscure true causality.
- Integration bottleneck: Tracking real-world outcomes to grade predictions requires deep, custom integrations with fragmented ad and analytics stacks, which is infeasible for a part-time solo founder.
## What did we learn?
Engine verdict: GATHER_MORE_SIGNAL (WORTH_SKIMMING). Real whitespace, but no proof yet that growth leaders will adopt and enforce a pre-spend QA ritual.
## Filter scores
Five axes, each scored 0-3. Three independent runs by different model perspectives. Median shown.
| Axis | What it measures |
|---|---|
| data moat | Does this product accumulate proprietary data that compounds? |
| 10x model test | Does a better model make this more valuable, or redundant? |
| fast feedback loops | Can outputs be graded against reality in <30 days? |
| solo founder feasible | Can a solo operator build and run this without a team? |
| AI providers can't eat it | Do hyperscalers have structural reasons NOT to build this? |
Composite median: 10.5 / 15. Graduation threshold: 9.0. IQR across runs: 1.0.
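The scoring procedure above can be reproduced with a small sketch: each run scores the five axes 0-3, run totals are summed, and the report shows the median total with the IQR across runs as the spread. The per-run scores below are illustrative, not the actual run data, so the numbers differ from the 10.5 / 1.0 reported here:

```python
import statistics

# Illustrative per-run axis scores (0-3 each) from three model perspectives.
# These are NOT the actual run data for this hypothesis.
runs = [
    {"data_moat": 2, "10x_model": 2, "fast_feedback": 2, "solo_feasible": 3, "providers_cant_eat": 2},
    {"data_moat": 2, "10x_model": 1, "fast_feedback": 2, "solo_feasible": 3, "providers_cant_eat": 2},
    {"data_moat": 3, "10x_model": 2, "fast_feedback": 1, "solo_feasible": 3, "providers_cant_eat": 3},
]

# Composite = median of per-run totals (max 15); spread = IQR across totals.
totals = sorted(sum(r.values()) for r in runs)
composite_median = statistics.median(totals)
q1, _, q3 = statistics.quantiles(totals, n=4)
iqr = q3 - q1

print(composite_median, iqr)
```

With only three runs the IQR is a coarse spread measure, but it is enough to flag when the model perspectives disagree sharply on an axis.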
## Evidence
### Signal B — Competitor with documented gap
Recast is positioned around marketing mix modeling / incrementality systems rather than async pre-commit forecast QA that stress-tests specific growth claims and grades reasoning quality over time. Its own framing implies existing analytics are insufficient for decision quality.
### Signal D — Demand proxy
Reddit discussions show marketers actively seeking ways to estimate campaign economics, adjust budgets automatically, and determine what marketing is actually working, which is consistent with demand for better forecast validation and decision support.

Sources:
- https://www.reddit.com/r/marketing/comments/1r7d1tq/what_marketing_is_actually_working_for_you_in_2026/
- https://www.reddit.com/r/marketing/comments/jhsi6k/is_there_any_marketing_solution_to_gauge_campaign.json
## Evaluation history
| When | Stage | Phase |
|---|---|---|
| 2026-04-19 06:39 | deep_council_verdict | graduated |
| 2026-04-19 06:33 | deep_claude_take | graduated |
| 2026-04-19 06:31 | deep_90day_plan | graduated |
| 2026-04-19 06:22 | deep_risk | graduated |
| 2026-04-19 06:14 | deep_distribution | graduated |
| 2026-04-19 06:02 | deep_pricing | graduated |
| 2026-04-19 05:53 | deep_moat | graduated |
| 2026-04-19 05:47 | deep_buyer_sim | graduated |
| 2026-04-19 05:34 | deep_icp | graduated |
| 2026-04-19 05:25 | deep_competitor | graduated |
| 2026-04-19 05:07 | deep_market_reality | graduated |
| 2026-04-19 04:50 | filter_score | scored |
| 2026-04-19 04:40 | filter_score | scored |
| 2026-04-19 04:30 | filter_score | scored |
| 2026-04-19 04:20 | evidence_search | evidence_hunt |
| 2026-04-19 04:10 | evidence_search | argument |
| 2026-04-19 04:00 | audience_simulation | argument |
| 2026-04-19 03:50 | red_team_kill | argument |
| 2026-04-19 03:40 | steelman | argument |
| 2026-04-19 03:30 | genesis | argument |