Spoke · Checklist · 8 min read

Go/No-Go criteria checklist — 7 weighted criteria

The 7 criteria broken down into specific sub-questions and red-flag patterns. Use it as a self-check before the meeting, or hand it to attendees as their scoring rubric.

Each criterion has a weight (Critical / High / Medium), 4 sub-questions to force specificity, and 3–4 red-flag patterns to watch for. If you can't answer the sub-questions clearly, the score is 1, not 3.

1. Is the problem real? (Critical)

✓ Sub-questions

  • Can I name 5+ specific people who have this problem?
  • Have they experienced it in the last 30 days?
  • Can they describe it without me prompting?
  • Have they tried at least one other way to solve it?

⚠ Red flags

  • Only hypothetical "people would want this"
  • Problem is real but mild — they'd use a fix but won't pay
  • Customers describe a different problem than you do
2. Does a market exist? (Critical)

✓ Sub-questions

  • TAM ≥ $1B (or vertical-justified $200M+)?
  • SAM gives a path to first 100 customers?
  • Top-down and bottom-up sizing within 5x of each other?
  • Market is growing or stable, not declining?

⚠ Red flags

  • TAM cited from one report without segment narrowing
  • No SOM calculation — only TAM
  • Adjacent markets cannibalizing the segment
  • Regulatory headwinds shrinking the market
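The within-5x sub-question above can be expressed as a simple sanity check. This is a minimal sketch; the function name and the sample figures are illustrative, not part of the checklist.

```python
def sizing_sanity_check(top_down, bottom_up, max_ratio=5.0):
    """True if two independent market-size estimates agree within max_ratio."""
    hi, lo = max(top_down, bottom_up), min(top_down, bottom_up)
    return hi / lo <= max_ratio

# Hypothetical figures: $1.2B top-down vs $300M bottom-up is 4x apart -> passes
print(sizing_sanity_check(1_200_000_000, 300_000_000))  # True
```

If the two estimates diverge by more than 5x, the fix is not to average them — it is to find out which set of assumptions is wrong.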
3. Will customers pay for your solution? (Critical)

✓ Sub-questions

  • Have prospects pre-ordered, deposited, or signed LOIs?
  • Have any booked calendar slots for follow-ups?
  • What are they currently spending to solve this?
  • Is there budget authority or escalation path?

⚠ Red flags

  • Verbal interest only ("send me a link")
  • Customers love it but PMs control budget — you didn't talk to PMs
  • Price-sensitive at the level you need to charge
  • Free alternatives exist that meet 80% of need
4. Does your team have the right capabilities? (High)

✓ Sub-questions

  • Domain expertise on the team?
  • Technical execution capability?
  • Distribution/sales capability?
  • If gaps exist, hiring plan with realistic timeline?

⚠ Red flags

  • Major capability gap with "we'll figure it out"
  • Founder lacking domain expertise in regulated industry
  • No sales/distribution skill in B2B context
  • Single point of failure on key technical capability
5. Do the unit economics work? (High)

✓ Sub-questions

  • LTV:CAC ratio of at least 3:1?
  • Payback period under 12 months?
  • Gross margin > 50% (or vertical-justified)?
  • Defensible thesis even with rough numbers?

⚠ Red flags

  • No model at all — "we'll figure pricing later"
  • CAC unknown or assumed at $0
  • Margin compressed by per-user infrastructure costs
  • Pricing copied from competitor without margin analysis
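The three thresholds in criterion 5 reduce to arithmetic you can run on even rough numbers. A minimal sketch, with illustrative names and hypothetical figures (not an official model):

```python
def unit_economics_pass(ltv, cac, monthly_gross_profit, gross_margin):
    """Check criterion 5's three thresholds on rough per-customer numbers."""
    ltv_cac_ok = cac > 0 and ltv / cac >= 3.0      # LTV at least 3x CAC
    payback_ok = cac / monthly_gross_profit <= 12  # CAC recovered within 12 months
    margin_ok = gross_margin > 0.50                # gross margin above 50%
    return ltv_cac_ok and payback_ok and margin_ok

# Hypothetical SaaS unit: $3,600 LTV, $900 CAC, $100/mo gross profit, 72% margin
print(unit_economics_pass(3600, 900, 100, 0.72))  # True: 4x LTV:CAC, 9-month payback
```

The point is not precision — it is having a defensible thesis. A model built from assumed-$0 CAC fails before it starts.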
6. Do you have enough runway? (High)

✓ Sub-questions

  • Months of cash divided by realistic time-to-validation?
  • Buffer for unexpected delays (multiplier of 1.5–2x)?
  • If venture-funded, do investors expect milestones in this window?
  • If bootstrapped, can revenue cover burn during validation?

⚠ Red flags

  • Less runway than realistic time-to-validation
  • Counting on next round to extend runway
  • No revenue path within current runway
  • Personal financial situation forces premature decisions
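The runway test in criterion 6 is a one-line comparison: months of cash against time-to-validation scaled by the 1.5–2x delay buffer. A sketch with hypothetical numbers:

```python
def runway_ok(months_of_cash, months_to_validation, buffer=1.5):
    """Criterion 6: cash must cover validation time plus a delay buffer (1.5-2x)."""
    return months_of_cash >= months_to_validation * buffer

print(runway_ok(12, 6))             # True: 12 months covers 6 * 1.5 = 9
print(runway_ok(8, 6, buffer=2.0))  # False: 8 months < 6 * 2 = 12
```

Note that the buffer choice changes the verdict on the same numbers — which is exactly why it should be picked before you score, not after.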
7. What are the disqualifying risks? (Medium)

✓ Sub-questions

  • Regulatory exposure named and mitigation planned?
  • Platform dependency under 50% of revenue/distribution?
  • Legal exposure (IP, data, employment) reviewed?
  • Single-point-of-failure technology identified?

⚠ Red flags

  • Unknown regulatory landscape in regulated industry
  • Single platform = 90% of revenue (Apple, Google, single API)
  • IP question unresolved
  • Compliance costs not modeled into unit economics

Decision rule (set this before scoring)

  • GO — average ≥ 4.0 across critical criteria AND no critical criterion below 3.
  • WAIT — 3.0–3.9 average WITH a named experiment + deadline + re-decision date.
  • NO-GO — below 3.0 average OR any critical criterion at 1–2.

Lock the threshold in pre-read. Renegotiating it after seeing scores is how teams talk themselves into bad decisions.
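One way to make the threshold hard to renegotiate is to encode the decision rule before scoring. A minimal sketch, assuming "average" means the average over the three critical criteria (criterion names are illustrative):

```python
CRITICAL = ("problem_real", "market_exists", "will_pay")

def decide(scores):
    """Apply the GO / WAIT / NO-GO rule to per-criterion scores (1-5).

    scores: dict mapping criterion name -> score. Assumes the averages
    in the rule are taken over the three critical criteria.
    """
    critical = [scores[name] for name in CRITICAL]
    avg = sum(critical) / len(critical)
    if min(critical) <= 2 or avg < 3.0:
        return "NO-GO"
    if avg >= 4.0:
        return "GO"
    return "WAIT"  # 3.0-3.9: valid only with a named experiment + re-decision date

print(decide({"problem_real": 5, "market_exists": 5, "will_pay": 2}))  # NO-GO
```

Note the ordering: the NO-GO check runs first, so a single critical criterion at 1–2 vetoes the outcome even when the average looks healthy.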

Score your idea in 30 minutes

GoNoGo runs a structured first-pass through all 7 criteria — voice intake, market sizing, scored output. The result is your honest baseline before the real Go/No-Go meeting.

Run the 7-criteria check free →

30 min · up to 25 reports

Frequently asked questions

Why are some criteria weighted higher than others?
Critical criteria (problem real / market / will pay) are non-negotiable — failing any of them kills the initiative regardless of other strengths. High-weight criteria (team / unit econ / runway) can be partially compensated for. Medium-weight criteria are tiebreakers when others are borderline. Treating all criteria equally would let you pass Go/No-Go on something that fails the only questions that actually matter.
What if I don't have data for a criterion?
Score 1 ("no data"). That IS a valid score. The trap is skipping criteria you're not confident in — that corrupts the average. If 3 critical criteria score 1 because you don't have data, the answer is NO-GO until you can validate them. Lack of evidence is itself the answer.
How specific should sub-question answers be?
Specific enough that a stranger reading them could tell whether you actually have evidence or you're bluffing. "Customers want this" is bluff. "5 prospects (CTOs at 50–200 person SaaS companies) confirmed in interviews this week, 3 offered to be design partners" is evidence. The sub-questions exist to force specificity.
