The certainty metric: innovating your way out of the innovator’s dilemma


“Everything we do is now available for free, and yet people still pay us for it.” The CEO sat down, triumphant, and I wondered just how long that would still be the case.

The business had enjoyed thirteen years of sustained growth, but it faced a plateau. Its flagship products were developer productivity tools for the Microsoft SQL Server platform, and over the years the platform itself had steadily improved. The gap our tooling filled was shrinking. Two years earlier, a strategic sales push had locked customers into three-year deals. Those deals were now expiring, and renewal rates were poor. A classic innovator’s dilemma, compounded by platform dependency.

We built a team to tackle this head on. Step 1: identify the megatrends threatening the business. Step 2: find new product directions that could leverage our existing technology, content reach, and partnerships with companies like Microsoft. It was 2012. Big data, open source, and the cloud were reshaping the world of data and developer tooling beneath our feet.

A new rapid discovery process

We developed a set of certainty metrics to track the key risks facing each new product idea. The system focussed on a layered set of questions:

  • Is the problem real? Can we validate that people actually have this problem, beyond our own assumptions?
  • Can we reach them? Do we have channels into that market, and can we measure how well they work?
  • Can we solve it? Do we have the technology and skills to build something that gets the job done?

Some of these metrics were subjective: the best collaborative guess of the team members running the experiments. We pooled these informally across the most diverse team we could assemble (the only team in the company at the time with female developers, and fully cross-functional at our insistence), drawing on the logic of “The Wisdom of the Crowd.” We scaled the estimates relatively rather than absolutely, since the point was to compare certainty across a broad portfolio of ideas; that relative view is how we balanced directional bets.
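Roughly, the pooling worked like this minimal sketch (Python, with illustrative idea names, axes, and scores rather than our actual data): average each person’s guess per axis, then rescale each axis relative to the strongest idea in the portfolio.

```python
# Minimal sketch of pooling subjective certainty guesses (illustrative data only).
from statistics import mean

# Hypothetical 0-10 guesses per axis from each person running the experiments.
estimates = {
    "hadoop-file-manager": {"problem": [7, 8, 6], "reach": [5, 6, 6], "solution": [8, 9, 7]},
    "nosql-profiler":      {"problem": [4, 5, 3], "reach": [7, 6, 7], "solution": [5, 4, 6]},
}

def pooled_scores(estimates):
    """Average the individual guesses per axis -- the 'wisdom of the crowd' step."""
    return {
        idea: {axis: mean(guesses) for axis, guesses in axes.items()}
        for idea, axes in estimates.items()
    }

def scale_relatively(pooled):
    """Rescale each axis so the strongest idea in the portfolio sits at 1.0."""
    axes = {axis for scores in pooled.values() for axis in scores}
    best = {axis: max(scores[axis] for scores in pooled.values()) for axis in axes}
    return {
        idea: {axis: round(score / best[axis], 2) for axis, score in scores.items()}
        for idea, scores in pooled.items()
    }

print(scale_relatively(pooled_scores(estimates)))
```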

Other metrics were directly measurable. Page views, downloads, and the percentage of target users who indicated positive buying intent in interviews all fed in. We combined qualitative and quantitative signals into a weekly dashboard that tracked the certainty delta for each area we were investigating. This kept the team honest and drove momentum. It also let us context-switch when it made sense, rather than stalling while we waited for longer experiments to play out.
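The weekly roll-up amounted to something like the sketch below: blend the per-axis signals into a single certainty score per idea and report the week-on-week delta. The weights, field names, and numbers here are assumptions for illustration, not the ones we used.

```python
# Minimal sketch of the weekly certainty-delta roll-up (assumed weights and data).

WEIGHTS = {"problem": 0.4, "reach": 0.3, "solution": 0.3}  # illustrative weighting

def certainty(signals):
    """Weighted blend of per-axis signals, each already normalised to 0..1."""
    return sum(WEIGHTS[axis] * value for axis, value in signals.items())

def weekly_delta(this_week, last_week):
    """Certainty delta per idea: positive means the week reduced uncertainty."""
    zero = {axis: 0.0 for axis in WEIGHTS}
    return {
        idea: round(certainty(signals) - certainty(last_week.get(idea, zero)), 2)
        for idea, signals in this_week.items()
    }

last_week = {"hadoop-file-manager": {"problem": 0.5, "reach": 0.4, "solution": 0.6}}
this_week = {"hadoop-file-manager": {"problem": 0.7, "reach": 0.5, "solution": 0.6}}
print(weekly_delta(this_week, last_week))  # e.g. {'hadoop-file-manager': 0.11}
```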

Each week the goal was to advance at least one idea in each of three tiers: high-confidence prototypes, medium-confidence promising areas, and blue-sky concepts. To move an idea forward we used whatever method matched the current level of uncertainty (there is a rough sketch of the weekly selection after the list):

  • Blog posts, mailings, and lightweight marketing to test interest
  • Paper prototype interviews to validate the interaction model
  • Full prototypes for the strongest candidates
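In spirit, the weekly selection looked something like this sketch. The certainty thresholds, method labels, and idea names are illustrative assumptions; the real process was run by hand off the dashboard rather than automated.

```python
# Minimal sketch of matching experiment method to uncertainty tier (assumed cut-offs).

METHOD_BY_TIER = {
    "blue sky":          "blog post / mailing to test interest",
    "medium confidence": "paper prototype interview",
    "high confidence":   "full prototype with lighthouse customers",
}

def tier(score):
    """Bucket an idea by its current certainty score (0..1); cut-offs are illustrative."""
    if score >= 0.7:
        return "high confidence"
    if score >= 0.4:
        return "medium confidence"
    return "blue sky"

def weekly_plan(portfolio):
    """Advance at least one idea per tier: pick the strongest idea in each bucket."""
    plan = {}
    for idea, score in sorted(portfolio.items(), key=lambda kv: kv[1], reverse=True):
        t = tier(score)
        plan.setdefault(t, (idea, METHOD_BY_TIER[t]))
    return plan

portfolio = {"hadoop-file-manager": 0.75, "nosql-profiler": 0.55, "salesforce-tooling": 0.2}
print(weekly_plan(portfolio))
```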

We built a community of lighthouse customers from our existing base and through user groups and conferences in each technology area. These became our primary source for pressure-testing ideas early, and for seeding usability trials on the more validated prototypes.

Results

The process gave us a rapid discovery pipeline backed by data that justified the development spending. Just as importantly, it let us reject ideas cheaply. Some were too early for the market. Others faced competitive pressure we couldn't overcome, or would have required more market education than we could afford.

One concrete result was a Hadoop file management tool aimed at Windows-based developers, a core demographic for the existing business. That seed grew into a broader cloud file system product for Azure.

The process also drove several successful acquisitions in NoSQL management, which are still running well today under their respective brands. A spin-out company launched from the unit too, applying a version of these methods in the Salesforce developer market.

We reviewed the certainty metric weekly with senior management. They found it an effective way to communicate progress and learning in new market spaces without getting lost in the details of individual experiments.

Thoughts on improvement

The certainty metric drove momentum across roughly a dozen simultaneous product ideas with small teams (in some cases just four or five people). Where it fell down was in the transition from discovery to build. Interleaving multiple experiments allowed rapid testing and continuous progress, but the cost of context switching hurt team focus once a strong candidate emerged. At that point we found we had to carve out dedicated space for the winning idea to grow, using more conventional agile and lean methods so developers could actually focus.

Want to go deeper?

I coach technical leaders on exactly these challenges — from product lifecycle transitions to AI strategy.

Learn about coaching →