Stop Chasing Users, Start Measuring Decisions.
Here’s the deal.
Most “product–market fit” (PMF) thinking comes from a world of static apps and predictable features. AI products don’t live there. They’re probabilistic, they keep changing as models improve, and users judge them on trust, not just “wow” moments.
So in the AI era, PMF is less about “Do people like my app?” and more about “Do people trust this AI enough to let it change how they decide and work?”
The classic idea comes from Marc Andreessen: PMF is when a good market pulls the product out of you so fast you can barely keep up. In practice, people rely on proxies such as retention, organic growth, and willingness to pay.
That still matters. But with AI, there are three extra layers:
• Is the AI living where people actually work?
• Are better decisions happening because of it?
• Do people believe the system will behave reasonably most of the time?
When AI Moves the Needle
That is AI PMF: a clear problem (volume) and clear movement in the metrics (cost, speed).
The twist: over-optimizing for cost led to quality concerns later. PMF is a dynamic balance, not a one-time milestone.
Embedded in Habits
Notion didn’t launch a separate “AI app.” They injected AI into the workspace millions already lived in. Users aren't "trying Notion because of AI"—they are staying longer because AI reduces friction.
"PMF = Do people use the AI as part of their daily workflow without thinking 'this is AI'?"
Speed & Satisfaction
PMF here shows up as faster cycle time and a better developer experience. However, strong PMF often arrives with pushback (worries about code quality and AI slop). The goal is a net positive across speed and quality, not speed alone.
Before you scale, pressure-test trust:
• Does the AI behave consistently across user segments and edge cases?
• Are there clear guardrails for high-risk actions?
• Is data usage aligned with privacy expectations?
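The guardrail question can be made concrete as a simple policy gate: low-risk actions run automatically, high-risk ones are held until a human signs off. This is a minimal sketch; the action names and risk list are hypothetical, not from any real product.

```python
# Hypothetical guardrail sketch: route high-risk AI actions to human review.
HIGH_RISK = {"send_refund", "delete_account", "external_email"}

def execute(action, payload, approved_by=None):
    """Run an AI-proposed action, holding high-risk ones without approval."""
    if action in HIGH_RISK and approved_by is None:
        return {"status": "held", "reason": f"'{action}' needs human approval"}
    return {"status": "done", "action": action}

print(execute("summarize_ticket", {}))                          # runs automatically
print(execute("send_refund", {"amount": 120}))                  # held for review
print(execute("send_refund", {"amount": 120}, approved_by="agent_42"))
```

The design choice here is that the default is "held," so a new high-risk action is safe even before anyone writes an approval flow for it.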
If you’re writing a PRD for an AI feature, don’t just write:
“User can chat with an AI assistant to answer questions.”
Instead, ask:
1. If our AI disappeared tomorrow...
What specific decisions or workflows would get noticeably worse? (If the answer is "nothing," you don't have PMF.)
2. Which metric proves it's not a demo?
Time-to-resolution, error rate, or an NPS measured specifically for the AI feature.
3. Where will trust break first if we 10x?
Support quality? Edge cases? Privacy? Regulatory pressure?
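The "prove it's not a demo" metrics can be turned into numbers with a back-of-the-envelope script. All data, function names, and values below are invented for illustration; NPS is the standard percentage of promoters (scores 9-10) minus percentage of detractors (scores 0-6).

```python
# Hypothetical sketch: measuring AI PMF instead of asserting it.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def median(values):
    vs = sorted(values)
    mid = len(vs) // 2
    return vs[mid] if len(vs) % 2 else (vs[mid - 1] + vs[mid]) / 2

# Made-up resolution times (minutes) for tickets with vs. without the AI.
baseline_minutes = [42, 55, 38, 61, 47, 50]
with_ai_minutes = [18, 25, 22, 30, 19, 27]
delta = median(baseline_minutes) - median(with_ai_minutes)
print(f"Median time-to-resolution improvement: {delta} min")

# Made-up survey scores collected for the AI feature, not the whole product.
ai_nps_scores = [10, 9, 7, 8, 10, 6, 9, 3, 9, 10]
print(f"AI-specific NPS: {nps(ai_nps_scores)}")
```

The point is not the script itself but the habit: if you cannot write two lines like `delta` and `nps(...)` for your AI feature, you are still measuring a demo.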
“We have users whose work and decisions are materially better because they trust our AI system enough to rely on it every day.”
— That’s the bar.