The Number That Was Not the Point
The frontend codebase I inherited had less than 1% test coverage. Eight weeks later it was above 16%. But the number is not the story. The story is what happened in the weeks I was not writing any tests at all.
There was no CI enforcement on coverage. No shared testing patterns. No culture of writing tests. Nobody was opposed to testing in principle. It just was not part of how the team worked. A quarterly goal was set: get frontend coverage to 10%.
The Trap
The obvious move was to grind out tests myself until the number hit 10%. I could have done it. I would have hit the target. And then what? The moment I stopped writing tests, because I had other priorities, or got pulled onto a different project, or left the team entirely, coverage would flatline. I would not have built a practice. I would have built a dependency on one person. The metric goes up, but nothing about the team's relationship to testing changes.
I have seen this pattern before in other domains. One person becomes the single point of responsibility for a quality measure, hits the number through individual effort, and everyone else learns that they do not need to think about it because that person has it covered. The target gets met on paper. The underlying problem remains untouched. I wanted to avoid that trap from the start.
Three Moves
What I did instead came down to three things, none of which involved writing hundreds of tests myself.
First, I shipped a CI pipeline with coverage reporting integrated into pull requests. Every PR now showed its coverage impact, visible to everyone on the team, not just me. This was not a gate that blocked merges. It was a mirror. When you open a pull request and the coverage report is right there in the checks, you start noticing whether your change helped or hurt. That visibility changes behavior more reliably than any mandate.
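As a rough sketch of the "mirror, not a gate" idea, here is what the test-runner side of that setup could look like, assuming Jest (the specific tooling is illustrative; the team's actual stack may differ). Note there are no coverage thresholds configured, so nothing blocks a merge; the machine-readable summary just gets posted into the PR checks by a CI step.

```javascript
// jest.config.js — hypothetical sketch of coverage reporting without gating.
module.exports = {
  collectCoverage: true,
  coverageReporters: [
    'text-summary',  // human-readable output in the CI log
    'lcov',          // consumed by coverage-comment tooling on the PR
    'json-summary',  // machine-readable totals for the PR check
  ],
  // Deliberately no coverageThreshold: the report is visible on every
  // pull request, but it never fails the build.
};
```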
Second, I wrote tests for the shared common components, not to boost the coverage number, but to create living examples. When someone asks "what should a test look like in this codebase?" the answer should never be "go read a style guide." It should be "look at the common component tests." Real tests in the actual codebase, using the actual utilities, following patterns that anyone can copy. Examples beat documentation every time.
Third, I proposed a lightweight team testing policy. Not mandates. Not coverage gates that block merges. Just shared expectations about when and why the team writes tests. The kind of agreement that makes testing a default rather than a special occasion. Low friction, high signal.
The Inflection Point
This is the part that mattered most. A few weeks in, coverage kept climbing during weeks where I shipped zero test-related work. Other engineers were writing tests on their own. Not because anyone was standing over them. Not because a gate was blocking their pull requests. They were writing tests because the CI pipeline made coverage visible, the examples made patterns clear, and the policy made it part of the team's shared expectations.
One teammate started owning the test factory utilities, building shared helpers that made everyone else's tests easier to write. That was a signal I did not expect so soon. The practice was decentralizing. I was no longer the single point of failure for test coverage. Someone else had picked up a piece of the infrastructure and was making it better for the whole team. That was the moment I knew it was working.
The Numbers
Below 1% to above 16% in roughly eight weeks. From about 600 tests to over 1,250. From 76 test suites to 101. The quarterly target was 10%, and we exceeded it by more than 60%.
But the number I actually care about is the coverage growth that happened in weeks where I personally did not drive it. That is the difference between a metric someone hit and a practice a team adopted. If the number only moves when one person pushes it, you do not have a practice. You have a bottleneck with good intentions.
The AI Elephant in the Room
AI is exceptionally good at writing tests. Given a component and a pattern to follow, it can generate comprehensive test coverage in minutes. This changes the equation fundamentally. The bottleneck for test coverage was never that tests are hard to write. It was that nobody had set up the infrastructure and expectations to make writing tests the default behavior.
AI removes the last remaining excuse. There is genuinely no reason to ship applications without robust test coverage in 2026. But AI does not build the practice for you. It still needs the CI pipeline to make results visible. It still needs the examples to define what "good" looks like. It still needs the team norms to make testing expected rather than optional. Your job is not to write all the tests. Your job is to build the system that makes writing tests, whether by humans or AI, the path of least resistance.
A Counterpoint Worth Sitting With
Simon Willison made a point on Lenny's Podcast that has been rattling around in my head since I heard it. We used to see an application with comprehensive test coverage and assume the engineering team behind it was excellent. Robust tests were a reliable signal of engineering quality and discipline. But now that AI can generate those same tests in minutes, that signal is weaker than it used to be.
Willison suggests that a better measure of engineering quality might be actual usage. Are real people using the product and getting value from it? I think he is right, and I think it actually strengthens the argument I have been making in this post. The point was never "write more tests." The point was "build systems that give you confidence your product works for real users." Tests are one input into that confidence. Usage data is another. The best engineering teams have both, and they understand that neither one alone is sufficient.
The Lesson
If you are the only person who can sustain an improvement, you have not improved anything. You have added a dependency. The goal is not the number on the coverage report. The goal is building a system where the number takes care of itself because the team has internalized the practice.
Infrastructure plus examples plus a lightweight policy will always beat heroic individual effort. The CI pipeline makes the work visible. The example tests make the patterns copyable. The shared agreement makes the behavior expected. Each piece reinforces the others, and none of them require you to personally write every test.
Set it up right, and the system keeps compounding long after you have moved on to the next problem. That is the difference between hitting a target and changing how a team works.