Why Deep Research Still Matters

In 2025, our team spent a significant amount of time doing deep research in healthcare, from medical devices to mental health. Much of that work involved sitting with complexity: listening carefully, testing assumptions, and trying to understand not just what people do, but how they make sense of deeply personal situations.

In an environment obsessed with speed, optimization, and quick validation, it’s easy for research to become purely tactical: testing messages, refining flows, or validating ideas that already exist.

But some of the most valuable research doesn’t improve solutions; it reshapes direction.

Across recent healthcare projects, I’ve been reminded how powerful deep, qualitative research can be when paired with scale. Surveys help identify patterns. They tell you what is happening and how often (and designing good surveys is hard, much harder than it’s usually given credit for, but that’s a topic for another post). But they rarely explain why, especially in domains shaped by emotion, stigma, trust, and life context.

That’s where in-depth interviews matter. They’re often more cost-effective than large-scale surveys, but they ask something different of teams. The work is demanding. It’s emotionally wearing. And it’s messy, much like the people it seeks to understand. Not every interview lands. Some are flat. Some feel like dead ends.

But occasionally you hit an edge case: a story, a contradiction, a way of articulating an experience, and you feel it immediately. Goosebumps. That’s the signal. It’s the moment you realize you’re no longer collecting data; you’re uncovering something that can fundamentally change how the problem itself is understood.

AI absolutely has a role to play in modern research. It can accelerate synthesis, surface patterns across large volumes of qualitative data, and help teams navigate complexity more efficiently. But it’s not as simple as dumping transcripts into a tool and waiting for insight to appear. The most meaningful sensemaking still requires being embodied in the research, hearing hesitation in someone’s voice, noticing where a story tightens or unravels, remembering the context in which something was said. AI can help organize and interrogate the material, but judgment, interpretation, and decision-making come from researchers who have sat with the messiness firsthand. (How teams of humans work alongside teams of AIs in research, especially when different people are using different LLMs, is a topic for another post.)

The most important outcome of this kind of research isn’t a quote or a persona. It’s that it changes the questions teams ask.

Instead of debating features or messaging, conversations shift toward:

  • What does “value” really mean to different people?
  • Why do people disengage even when they believe in the product?
  • Where does trust get earned, or quietly lost, along the journey?

This distinction matters even more in healthcare. Outcomes aren’t transactional. Progress isn’t linear. And behavior often reflects life circumstances rather than product performance.

Ultimately, the hardest problem teams face isn’t how to execute; it’s knowing what to do next with limited resources. Deep research helps replace intuition-driven momentum with clarity grounded in real evidence. Passion and creativity still matter, but without a shared understanding rooted in data, they’re just motion. In complex domains like healthcare, knowing what to do, and why, is still the most important advantage a team can have.

Looking ahead to 2026, we’re excited to keep working with teams who want to go beyond optimization: teams willing to use deep research as a foundation for reimagining the future of healthcare experiences, products, and services with clarity, care, and intent.