The best consumer founders know the game. 30K+ of them read Consumer Startups every week.

Stay ahead. Get the playbook behind today's breakout startups.

Read time: 6 min 13 sec

Happy new year 🎊

Last week, I teamed up with Will and Mac from Ascend to share our top 8 consumer startup ideas for 2026. Loved all the great responses from you all. It was clear that a lot of you are looking for more concrete insights you can actually act on.

The most reliable way to get there is by talking directly to your target customers. Most founders know this in theory. What’s much less obvious is how to do it well at each stage of a startup, and how to separate real signal from noise as you go.

I asked Will and Mac to come back again and share their customer research playbook. They’ve helped teams like BeReal, Voodoo, and Coverd run 5,000+ customer interviews, and they’re breaking down how they approach research from idea stage all the way through growth, so you keep building the right things instead of guessing.

🔥 If you’re looking to level up your customer research game, connect with Will and Mac on LinkedIn or book a call for a free consultation here.

Every founder preaches customer obsession. It's in the investor decks. It's in the company values. It's in every podcast interview.

Then reality kicks in.

Research starts strong, then quietly dies as soon as there's money to raise, product to ship, and people to hire. Customer obsession becomes an "important but not urgent" item on the Eisenhower matrix, and it never gets prioritized the way it should.

35% of startups fail because there is no market need for what they build (per CB Insights' analysis of startup post-mortems). Companies that ignore customer feedback fail at far higher rates than those that listen.

In 2026, this matters more than ever. Product-market fit is not something you find once and lock in. Elena Verna said on Lenny’s Podcast that teams now need to re-find product-market fit every few months because both the underlying technology and customer expectations are changing so fast.

If you are not close to users, someone else will be. Often, that someone is a smaller team with fewer resources but a sharper understanding of the problem.

The fix does not require a big research org or expensive tools. It requires a system. Better to build it early and reap the benefits forever. 

Today, we are sharing a stage-by-stage playbook for consumer research. What to prioritize, how to do it, and what mistakes to avoid at each phase of your startup's journey.

The one-minute version

1/ Pre-idea stage: validate the pain, not the idea

Primary goal: Build consensus that the underlying pain point is real.

At this stage, your only job is to figure out whether the problem you think exists actually does, and whether it's painful enough for people to change their behavior.

People are usually too polite, and they don’t want to crush your dreams. If you describe an idea, they will say it sounds interesting. That tells you nothing.

Instead, separate the research from your business entirely. Don’t pitch.

Ask about their current behavior, recent moments, and their current workarounds.

If you are exploring a Gen Z personal finance product, ask questions like:

  • How do you currently track your finances?

  • Do you feel like you have a problem tracking your finances?

  • What tools have you tried? What worked and what didn't?

  • When was the last time you felt like you really had a handle on your money?

  • Is this a point of stress for you? When and where do you feel that stress?

There should be no feature discussion or solution framing. You are validating the severity of the pain/problem you are targeting and not your idea.

The benchmark: Aim for 10+ conversations where you hear the same pain point organically, without leading people toward it. If you can't build consensus around the problem, you don't have a business yet.

Common mistake: Founders get excited about their solution and start pitching too early. The moment you pitch, you lose the ability to get honest signals about the underlying need.

Case study: One dating app startup, Dataing, learned this the hard way. They spent nearly 18 months bootstrapping an MVP built around a core assumption that never got properly pressure-tested: that users would be comfortable sharing large amounts of personal data upfront in exchange for better matches.

The product relied on deep onboarding, asking for access to personal data that most dating apps never request, e.g. your entire photo album for processing. The team assumed users would opt in willingly because the value proposition felt obvious to them.

The problem was in how they validated that assumption. Early customer research focused on hypothetical interest rather than real behavior. Users said they liked the idea in theory, but the team didn’t test where hesitation actually appeared in the flow or how many people would drop off when asked to share sensitive data immediately. 

The founders later admitted they’d been too close to the product and had lost perspective. Once they grounded their decisions in real user pain points rather than surface-level feedback, they reworked onboarding to stage data requests over time and shifted core product decisions toward gradual trust-building instead of immediate depth.

The result was a course correction in weeks, not months, giving them clarity and confidence heading into their seed round. 

2/ Pre-seed and seed stage: usability testing and prioritization

Primary goal: Understand how people actually use what you've built and which features to prioritize next.

Once you've raised some money and shipped a product (even in TestFlight), the research question changes.

Now you need to know:

  • Can people use this without help?

  • Do they get value fast?

  • What should you build next?

The benchmark: You should be speaking with a user every day on average. Ideally, your team is doing 5-20 interviews per week. At this stage, you're dealing with investors' money, your team's time, and the survival of the company. Building consensus around product direction isn't optional; it's how you avoid expensive mistakes.

The most common failure here is taking feature requests at face value. 

Users will ask for familiar things. “Add DMs.” “Show activity status.” “Make it more like X.” These requests feel concrete. They are usually shallow. If you build exactly what people ask for, they often just keep using the existing solution that already does it.

Your job is to dig. 

Stay rooted in pain points and underlying needs, not feature wish lists. Ask why they want it. Ask what breaks without it. Ask what they do today instead. Most feature requests are proxies for deeper needs. Build the proxy and users stick with the old solution anyway.

Common mistake: Running ambassador programs or surveys without training people to dig below surface-level answers. You end up with a pile of "insights" that are actually just noise, and you make product decisions based on guesses.

Case study: Amori, an AI-powered relationship coach, faced an even trickier version of this problem. They're building in a totally new space—most people don't have relationship coaches, and many don't even have therapists. The fundamental questions weren't just "what features do you want?" but "where is this even accepted? How do people feel about discussing their personal relationships with an AI? Should it feel like a therapist? A coach? A friend?"

To answer that, the team ran structured, recurring conversations with users to pressure-test language, tone, and desire for feature sets over time. With eventual support from Ascend, they focused less on one-off feedback and more on building shared understanding and directional consensus across weeks of conversations.

This kind of clarity doesn’t require a massive research budget or external help. Teams can replicate it themselves by running small weekly interviews, tracking how underlying needs shift over time, and testing reactions to positioning and tone alongside feature requests. The key is consistency and synthesis, not scale.

In Amori’s case, that process helped them move faster with confidence, grounded in real user acceptance rather than assumptions.

3/ Series A stage: don’t let research die 

Primary goal: Keep qualitative insights strong and make sure they reach decision-makers.

This is where research quietly breaks down for most startups.

You have growth. You have dashboards. You have funnels and charts everywhere. Talking to users feels slower than looking at numbers. 

The temptation is to stop talking to users and let metrics do the work.

But data only tells you what's happening, not why. You can see that users are dropping off at step 3, but you have no idea what's going through their heads. You can see that a feature isn't getting used, but you don't know if it's a discovery problem, a usability problem, or a "nobody actually wanted this" problem.

Without qualitative signals, you cannot predict what to build next.

The benchmark: Keep up the same research cadence from pre-seed/seed—5-20 conversations per week. Make sure those insights are reaching the executive team, the marketing team, the brand team, and the PMs. Getting everyone in the same room (or at least watching the same recordings) to learn from users is how you keep customer obsession alive at scale.

What this looks like in practice:

  • Dedicated time each week for the team to review user insights together

  • A system for tracking feedback and connecting it to product decisions

  • Clear ownership of who's running research (even if it's not a full-time UXR hire yet)

Common mistake: PMs are stretched thin and conduct a few interviews here and there, but the insights aren't strong enough to guide big decisions. They're not coming from enough users, and they're not being heard by the right people.

Case study: One growth-stage sports app saw this firsthand. They knew from market data that sports betting tools were a big opportunity, but they had a tricky problem: their user base was split between high-end professionals with deep sports and data knowledge, and total novices. How do you build a single tool that serves both?

Previously, they'd been running one-off usability tests and scattered interviews. There was no structure and little collaboration with their design and development teams.

With help from Ascend, they eventually shifted to a more consistent, weekly research cadence that was closely tied to their product workflow. Instead of chasing feature requests, the team focused on understanding how each segment thought about betting, what problems they were actually trying to solve, and how the tool would fit into real usage. The feature that came out of this process ended up driving a 3x lift in overall app retention, and became one of the fastest features the team had ever designed and shipped.

Still, teams can do this without outside help by committing to a fixed weekly interview cadence, centralizing notes and recordings in one shared place, and making collaborative insight reviews a standing part of product and leadership meetings. Of course, boots-on-the-ground access and ICP match matter, but the research leverage from consistency and cross-functional visibility shouldn’t be underestimated.

4/ Growth stage: institutionalize without losing signal

Primary goal: Build a real research function that can keep up with the pace of the business.

At this stage, it might make sense to hire a dedicated UXR person or team.

Most companies make a mistake here by prioritizing experience over fit with the ICP.

If you're building for Gen Z and you hire a 40-year-old UXR veteran, you're going to have a problem. No 19-year-old is going to tell a 40-year-old what they really think about your product. You'll end up spending money on recruiting tools and incentives just to get people in the room, and the insights will still be flimsy.

What to look for in a UXR hire:

  • Someone who's into startups and moving fast

  • Someone who can connect authentically with your target users

Common mistake: Over-engineering the research process. You hire a senior researcher who wants to build out a full research ops function with fancy tools and long timelines. Meanwhile, the business is moving fast and decisions are being made without user input.

Even at scale, the fundamentals matter. Get your whole team—executive leadership, product, marketing—learning from the same user insights at the same time. That's how you stay aligned and avoid building on assumptions that haven't been validated.

Case study: BeReal is a good example. At their scale, they were in need of fresh energy and ideas. They'd been relying on one-off conversations, listening to data, talking to users here and there, but nothing structured, and nothing that got the whole team on the same page. 

To address this challenge, they brought in Ascend and tapped into its university network, which gave them direct access to Gen Z users who could speak authentically about the BeReal experience.  

More importantly, Ascend worked closely with their executive leadership, product teams, and marketing teams simultaneously, so everyone was hearing the same insights and building from the same validated assumptions. This new system brought in a flood of new ideas for how to improve the BeReal experience heading into 2026 and real clarity on what to prioritize.

What founders get wrong across all stages

Regardless of where you are, these patterns kill customer obsession:

  1. Treating research as a phase instead of a habit. Research isn't something you do once before building. It's an ongoing discipline.

  2. Collecting feedback that doesn't get tracked. If insights aren't connected to product, marketing, or brand decisions, they're worthless.

  3. Over-surveying without finding the "why." Surveys tell you what, not why. If you're not following up with real conversations, you're just guessing.

  4. Waiting too long to systematize. The longer you wait to build a research habit, the harder it is to start.

  5. Making decisions based on surface-level requests. "Users asked for this feature" is not the same as "we understand the underlying need and this is the best solution."

The minimum viable research stack

If you take nothing else from this post, here's the baseline:

Talk to 5 users per week.

That's it. Five real conversations with real users, every week, at every stage.

Ask about pain points, not feature requests. 

Dig below surface-level answers. 

Track what you learn. 

Share it with your team.

Parting thoughts

Customer obsession isn't about caring about your users in the abstract. It's about building a system that keeps you connected to them, even when hiring, fundraising, and growth are screaming for your attention.

The playbook:

  • Pre-idea: Validate the pain with 10+ conversations

  • Pre-seed/Seed: Talk to 5+ users per week, stay rooted in underlying needs

  • Series A: Don't let research die. Keep the cadence and share insights broadly

  • Growth: Hire for ICP fit, not just experience

The companies that win in 2026 will be the ones that never stop listening.

Thank you, Will & Mac, for sharing your insights!

If Gen Z is an important audience for what you are building, consider hitting them up for a free product consultation & get some advice on building your customer research process.

See you next Tuesday,

Leo

Follow me on X and LinkedIn

📥️ Want to advertise in Consumer Startups? Learn more.