The Scoring Model Trap
“Why most investment scoring systems create false confidence and miss the signals that matter.”
Every investor eventually builds a scoring model. It starts innocently: you want consistency, you want to codify your criteria, you want to explain your decisions to LPs.
Then you realize the model is making you worse at investing.
The Appeal of Quantification
Scoring models promise to solve real problems:
Consistency. Apply the same criteria across all opportunities.
Speed. Evaluate more companies in less time.
Defensibility. Justify decisions with objective-looking data.
But these benefits rest on a dangerous assumption: that the most important factors in investment success can be reduced to quantifiable criteria.
What Gets Measured Gets Gamed
As soon as founders understand your scoring rubric, they optimize for it. This doesn't mean they're building better companies—it means they're getting better at scoring well.
Metrics theater. Founders emphasize the metrics you care about, whether or not they predict actual success.
Narrative optimization. Pitches get refined to hit your scoring triggers, obscuring the messy reality of building something new.
Selection bias. You start seeing more companies that match your model's preferences, not because they're better, but because they've learned to present themselves that way.
The model creates a feedback loop that validates itself while missing outlier opportunities.
The Illusion of Precision
A company scores 8.2 out of 10. What does that mean?
It means you've taken subjective judgments (Is this market big enough? Is the team strong enough?) and converted them into numbers that look precise but remain fundamentally subjective.
The problem isn't the subjectivity—it's that the numerical output masks it. You feel more confident in your 8.2 than in your original gut sense that "this seems promising but risky."
False precision kills good deals. When a great opportunity scores 7.8 (below your threshold of 8.0), you pass—not because you disagree with the investment, but because the model says so.
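To make that fragility concrete, here is a minimal sketch of a weighted-sum rubric with a hard cutoff. The criteria, weights, and sub-scores are invented for illustration, not drawn from any real model:

```python
# A minimal sketch of a weighted-sum scoring model with a hard threshold.
# The criteria, weights, and sub-scores are made up for illustration.

WEIGHTS = {"market": 0.40, "team": 0.30, "traction": 0.20, "moat": 0.10}
THRESHOLD = 8.0  # the "8.0 bar" from the example above

def weighted_score(sub_scores: dict) -> float:
    """Collapse subjective 1-10 judgments into one precise-looking number."""
    return sum(WEIGHTS[criterion] * value for criterion, value in sub_scores.items())

deal = {"market": 8.0, "team": 8.0, "traction": 7.5, "moat": 7.5}
score = weighted_score(deal)
print(round(score, 2), score >= THRESHOLD)   # 7.85 False -> you pass on the deal

# Move one subjective judgment by half a point and the decision flips,
# even though nothing about your actual understanding has changed.
deal["market"] = 8.5
score = weighted_score(deal)
print(round(score, 2), score >= THRESHOLD)   # 8.05 True -> now you invest
```

Half a point on a single subjective judgment moves the deal across the bar. That is exactly the kind of flip the precise-looking output hides.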
Pattern matching fails for outliers. The best investments often break the pattern. They wouldn't have scored well on the criteria that worked for previous winners because they're doing something fundamentally different.
What Models Miss
The factors that most reliably predict investment outcomes resist quantification:
Founder conviction. Is this person building this company because they have to, or because it seemed like a good opportunity? You can't score this—you have to feel it through conversation.
Narrative coherence. Does their story about why now, why this, why them hold together under pressure? This requires judgment about logic, adaptability, and self-awareness.
Team chemistry. Do the co-founders complement each other, or will they fracture under stress? This emerges through observation, not data.
Strategic clarity. Can they articulate their 2-3 most important priorities and explain why everything else doesn't matter right now? This is a signal of focus, not just intelligence.
These aren't unmeasurable because we lack the right metrics—they're unmeasurable because they're fundamentally about human judgment in context.
The Alternative: Decision Frameworks
Instead of scoring models, use decision frameworks—structured ways of thinking that preserve judgment rather than replacing it.
Key questions over key metrics. Instead of scoring "market size" on a 1-10 scale, ask: "What needs to be true about this market for this investment to work?" This forces specific thinking without false precision.
Devil's advocate check. For every investment thesis, explicitly write the bear case. If you can't compellingly articulate why this could fail, you don't understand it well enough.
Decision criteria clarity. What are your actual deal-breakers? Rank them, but resist the urge to assign weights and calculate a composite score. Treat them as gates a deal either clears or doesn't, as in the sketch below.
Post-decision review. Track not just outcomes, but the quality of your decision process. Did you consider the right questions? Were you rigorous about edge cases? Did you override good judgment because of the model?
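To show what "gates, not scores" means in practice, here is a rough sketch with hypothetical deal-breakers. Each gate either passes or fails, and no amount of strength elsewhere buys back a failure:

```python
# A rough sketch of deal-breakers used as gates rather than score components.
# The deal-breakers and the sample answers are illustrative only.

DEAL_BREAKERS = [
    ("founder_conviction", "Is the founder building this because they have to?"),
    ("coherent_narrative", "Does the why-now / why-this / why-them story hold together?"),
    ("honest_bear_case", "Have we written a bear case we can live with?"),
]

def passes_gates(answers: dict) -> tuple:
    """Every gate must pass; strength elsewhere cannot buy back a failed gate."""
    failed = [question for key, question in DEAL_BREAKERS if not answers.get(key, False)]
    return len(failed) == 0, failed

ok, open_questions = passes_gates({
    "founder_conviction": True,
    "coherent_narrative": True,
    "honest_bear_case": False,
})
print(ok)              # False -- a single failed gate stops the deal
print(open_questions)  # the question that still needs an answer before proceeding
```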
When Models Work
Scoring models aren't always bad—they're bad when used for the wrong purpose.
Use models for screening, not deciding. Models can help you prioritize which opportunities deserve deep attention. They're terrible at making final investment decisions.
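As a sketch of that division of labor, with placeholder companies and scores, the model's output only orders the queue for deeper diligence; it never makes the call:

```python
# A sketch of a model used for screening only: the score orders the review
# queue, it never rejects or funds anything. Names and scores are placeholders.

pipeline = [
    {"name": "Alpha Robotics", "screen_score": 6.4},
    {"name": "Beta Health", "screen_score": 8.1},
    {"name": "Gamma Infra", "screen_score": 7.2},
]

# Sort by screen score purely to decide which companies get deep attention first.
review_queue = sorted(pipeline, key=lambda c: c["screen_score"], reverse=True)

for company in review_queue:
    print(f"schedule deep dive: {company['name']}")

# No threshold, no automatic pass: the number sets the order of the queue,
# while the actual investment decision stays with human judgment.
```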
Use models for known domains, not novel ones. If you're investing in a mature category where success factors are well-understood, models work. If you're investing in something new, they don't.
Use models as input, not output. Let quantitative analysis inform your judgment, but don't let it replace judgment.
Better Decision-Making
The goal isn't to eliminate structure—it's to build structure that amplifies judgment rather than replacing it.
Ask: Does this tool help me think more clearly, or does it let me avoid thinking?
If your scoring model gives you confidence without improving your understanding, it's making you worse.