How Session Replays Killed Two Hypotheses and Found the Real Product
The Question That Started It All
I set out to answer a simple question: which view do users prefer, a kanban board or a tabular list? I ended up discovering neither one mattered.
Here is the context. I was working at a legal tech platform, about eight weeks into the job. The company ran a client portal used daily by roughly 300 active users across dozens of law firms. These users tracked claimants, checked statuses, and communicated with the company through the portal. A recent feature change had swapped the default landing page from a kanban board view to a tabular list view. The board was still accessible via a toggle. Natural question: are users adopting the new list view, or are they going back to the board?
I pulled up the product analytics tool and started digging. What followed was a three-week process that killed two hypotheses, uncovered a product insight nobody expected, and connected it to a business case that changed the conversation entirely.
Hypothesis 1: The Board Is Better
The data looked clear. Over a three-week window, the board dominated the list by a factor of five to ten in both pageviews and unique users per weekday. Lifecycle analysis showed the board retaining users 14x better week over week: 171 returning users versus 12 for the list. Path analysis told an even sharper story: 73% of board visitors clicked through to a client detail page, while only 2.9% of list visitors did the same. And in the entire measurement period, exactly one person ever voluntarily toggled from the board to the list.
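If you want to reproduce this kind of lifecycle comparison yourself, a minimal sketch in pandas looks like the following. It assumes a raw event export with user_id, page, and timestamp columns; the column names and the "board"/"list" page labels are my illustration, not the analytics tool's actual schema.

```python
# A sketch of the week-over-week retention comparison, assuming a raw
# event export with columns user_id, page, timestamp. Column names and
# the "board"/"list" page labels are illustrative, not the tool's schema.
import pandas as pd

events = pd.read_csv("pageviews.csv", parse_dates=["timestamp"])
events["week"] = events["timestamp"].dt.to_period("W")

def returning_users(df: pd.DataFrame, page: str) -> int:
    """Count users who viewed `page` one week and came back the next."""
    weekly = df[df["page"] == page].groupby("week")["user_id"].agg(set)
    return sum(
        len(weekly[w] & weekly[w + 1])
        for w in weekly.index
        if (w + 1) in weekly.index
    )

print("board:", returning_users(events, "board"))  # 171 in my data
print("list: ", returning_users(events, "list"))   # 12 in my data
```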
On the surface, this was an open-and-shut case. Users preferred the board. The list was a dead end. The board drove work. Case closed, write up the finding, move on.
I almost did exactly that.
A Colleague Asks a Better Question
Before I wrapped up the analysis, a colleague asked a question that stopped me: "Could it be that users glance at the list for data they cannot see on the board cards?"
It was a sharp observation. The list displayed columns that were not visible on the kanban cards: blocked status, total lien counts, upcoming due dates, assignee information. Maybe that 2.9% click-through rate was not failure. Maybe users were getting what they needed from the list without clicking through. They glanced at the tabular data, got their answer, and moved on.
This reframed the question. Instead of "which view is better," the refined hypothesis became: the two views serve different jobs. The board is for daily workflow, clicking into individual claimants to check messages and take action. The list is for quick data lookups, scanning columns for information that does not require opening a detail page.
It was more nuanced. More honest. But it was still unvalidated. I was reasoning from quantitative data alone, constructing plausible stories about behavior I had never actually observed. I needed to watch what users were doing with my own eyes.
What the Session Replays Actually Showed
This is the part where both hypotheses died.
I used query tools in the analytics platform to identify three cohorts of sessions worth watching. The first cohort was "switchers," users who had visited both the list and the board in the same session. I found five of these sessions over a two-week window. The second cohort was "board workers," users with heavy board views and lots of client detail activity. The third was "list-only" users.
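In code, the cohort definitions look roughly like this. It is a pandas sketch against a hypothetical per-visit export; the real queries ran inside the analytics platform's own query tools, and the thresholds for "heavy" board use are illustrative.

```python
# A sketch of the three cohort queries, assuming a per-visit export with
# columns session_id, user_id, page. Names and thresholds are illustrative;
# the real queries ran in the analytics platform's own query tools.
import pandas as pd

visits = pd.read_csv("session_pageviews.csv")
pages = visits.groupby("session_id")["page"].agg(set)

# Cohort 1: "switchers" -- sessions that touched both views.
switchers = pages[pages.apply(lambda p: {"board", "list"} <= p)].index

# Cohort 2: "board workers" -- heavy board use plus client detail activity.
counts = visits.pivot_table(
    index="session_id", columns="page", values="user_id",
    aggfunc="count", fill_value=0,
)
board_workers = counts[
    (counts["board"] >= 3) & (counts["client_detail"] >= 5)
].index

# Cohort 3: "list-only" -- sessions that saw the list but never the board.
list_only = pages[pages.apply(lambda p: "list" in p and "board" not in p)].index
```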
I started with the switchers. Five sessions. Every single one did the same thing. The page loaded, and the user completely ignored whatever view was on the screen. Without hesitation, they went straight to the global search bar tucked in the left sidebar navigation. They typed a name, clicked the result, landed on a client detail page, checked messages, and then searched for the next name. Search, click, messages, repeat. Not one of them interacted with a single card on the board or a single row in the list.
The board workers told the same story from a different angle. These users had heavy client detail activity in the analytics data, which I had assumed meant they were clicking through the board to get there. They were not. They arrived at client detail pages directly via bookmarks or shared URLs, hopping from one claimant to the next. The "board pageviews" in the quantitative data were incidental page loads. The board happened to be the page that rendered when users navigated to a certain URL, but they were not interacting with it. They were passing through on their way to somewhere else.
Both of my hypotheses were wrong. The board was not better than the list. The two views did not serve different jobs. Neither view was the product. The search bar tucked in the left sidebar was the thing users actually relied on. They bypassed whatever we put in front of them to get to it.
Proving It Was Not Anecdotal
Five session replays is a pattern, not proof. A skeptic could rightly point out that I had cherry-picked a handful of sessions and built a narrative around them. So I went back to the quantitative data, this time asking the right question.
Over seven days, 52% of all portal visits involved the global search feature. Of those searches, 87% led to a result click, meaning users were consistently finding what they wanted. The rate was remarkably stable across every single weekday in the measurement window. No outliers, no anomalies. This was not a fluke. It was the dominant behavior pattern on the platform.
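Computing those two numbers is straightforward once you have session-level events. Here is a rough sketch with hypothetical event names; note that it approximates the 87% figure at the session level rather than per individual search.

```python
# A sketch of the search-involvement funnel, assuming events with columns
# session_id and event, where "search" and "search_result_click" are my
# illustrative event names. This approximates the click-through rate at
# the session level rather than per individual search.
import pandas as pd

events = pd.read_csv("events.csv")
per_session = events.groupby("session_id")["event"].agg(set)

searched = per_session.apply(lambda e: "search" in e)
clicked = per_session.apply(lambda e: "search_result_click" in e)

print(f"visits involving search: {searched.mean():.0%}")              # ~52%
print(f"searches ending in a click: {clicked[searched].mean():.0%}")  # ~87%
```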
In the process, I also caught a methodology error that had been quietly inflating our numbers: 48% of what we had been counting as "sessions" were actually tab-resume events. These were users who left the portal open in a browser tab, walked away for thirty minutes or more, and came back. The analytics tool started a new session when they returned, but the page never actually reloaded. These phantom sessions inflated our session count and deflated our engagement percentages. Correcting for them made every metric more accurate and strengthened the core finding.
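The correction itself is simple once you can tell the two apart. A sketch, under the assumption that a genuine session starts with a page-load event while a tab-resume session does not (the event name and the thirty-minute timeout are my stand-ins for the tool's actual session model):

```python
# A sketch of the tab-resume correction. Assumption: a genuine session
# begins with a "page_load" event, while a tab-resume session (the tool's
# thirty-minute idle timeout expiring and restarting the session) does not.
# The event name is my stand-in, not the tool's documented schema.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
first = events.sort_values("timestamp").groupby("session_id").first()

real = first["event"] == "page_load"
print(f"phantom tab-resume sessions: {(~real).mean():.0%}")  # ~48% in my data
real_session_ids = first[real].index  # recompute metrics over these only
```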
One more detail sealed it. About half of all search interactions happened while users were technically on the board page. They landed there because it was the default, and then immediately navigated to the search bar in the left sidebar. The board content below was scenery they scrolled past. The page was a loading screen for the feature they actually wanted.
The Business Case Nobody Asked For
I had the product insight. But "users prefer search" does not move a business conversation. If I walked into a meeting and said "hey, it turns out nobody uses the board or the list, they just use the search bar," the response would be polite interest followed by no action. I needed to connect the behavior to something the business cared about.
So I cross-referenced portal engagement data with business metrics from the accounting reports. The top six firms by portal usage, roughly 7% of all firms on the platform, generated about 40% of total revenue. The average revenue per top portal firm was 5.7x the overall firm average. These six firms were not just the most active portal users. They were the engine of the business.
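The cross-reference itself was a simple join. A sketch with hypothetical file and column names:

```python
# A sketch of the engagement-to-revenue cross-reference. File and column
# names are illustrative; the revenue side came from accounting reports.
import pandas as pd

engagement = pd.read_csv("portal_engagement_by_firm.csv")  # firm_id, sessions
revenue = pd.read_csv("revenue_by_firm.csv")               # firm_id, revenue

firms = engagement.merge(revenue, on="firm_id")
top = firms.nlargest(6, "sessions")

share = top["revenue"].sum() / firms["revenue"].sum()
multiple = top["revenue"].mean() / firms["revenue"].mean()
print(f"top 6 firms: {share:.0%} of revenue, {multiple:.1f}x the average firm")
```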
I was careful to note: this is correlation, not proven causation. These might simply be the biggest firms who use the portal more because they have more claimants and more business volume. The portal engagement could be a result of their size rather than a driver of their revenue.
But the actionable implication holds regardless of the causal direction. The firms generating 40% of revenue are the heaviest portal users. Portal quality directly affects the experience of the most valuable clients. That reframed the conversation from "here is an interesting UX finding" to "here is a business case for improving the primary workflow of your highest-value customers." The difference between those two framings is the difference between a Slack message that gets a thumbs-up emoji and a meeting that gets scheduled.
What I Learned
Three layers of evidence revealed three different truths, and no single layer was sufficient on its own.
The quantitative data told me the board was "winning." Traffic was higher, retention was better, click-through rates were stronger. If I had stopped there, I would have concluded that the board was the superior view and started optimizing it. I would have been polishing something nobody was actually using.
The session replays told me users were bypassing both views entirely in favor of search. This was the qualitative layer, the one that revealed actual behavior rather than the proxy metrics I had been interpreting. Watching real users do real work for even a handful of sessions overturned weeks of quantitative analysis. The numbers told me what was happening. The replays told me what it meant.
The business data told me the users doing this generate a disproportionate share of revenue. This was the "so what" layer, the one that makes anyone outside a product team pay attention. An insight without business context is trivia. An insight with business context is a strategy.
Quantitative data tells you what is happening. Qualitative observation tells you why. Business data tells you so what. You need all three.
The hypothesis I started with, "which view is better," was almost comically wrong by the end. But that is the point. The willingness to let each layer of evidence kill your current idea and reshape the question is the most valuable discipline I practiced in this entire process. If I had committed to "the board is better" after the first round of data and started optimizing board card layouts and kanban column logic, I would have been investing engineering effort into something that was, functionally, a loading screen. The real product was a search input field in a sidebar that nobody had thought to examine until the session replays made it impossible to ignore.