Ian Walton

Adopting the Lean Startup Methodology

Introduction

It baffles me how many modern frontend applications plan new features through what can only be described as well-intentioned guesses.

On a typical project, the product owner receives a feature request from stakeholders and then prioritizes it relative to a vague sense of worth before it gets implemented and shipped to production. Unfortunately, that’s usually where the feature’s value stops being tracked or understood. How do you know users are engaging with the feature? How do you know it accomplished its purpose? What was its purpose to begin with?

What the Agile Manifesto was to Waterfall, the Lean Startup methodology is to the traditional software development lifecycle: an earth-shattering approach that emphasizes small, measurable hypotheses deployed as incremental units of work and analyzed through unbiased data points.

The Hypothesis

A core tenet of the Lean Startup methodology is the hypothesis. The hypothesis is a feature with a measurable definition of success. It could be a feature that increases conversion rates, session times, user counts, and so on. In my experience, the business will arrive at its “core” data points over time as the process evolves.
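
To make that concrete, here’s a minimal sketch in TypeScript of what a recorded hypothesis might look like. The shape and field names are purely illustrative, not a prescribed format:

    // A hypothesis pairs a proposed change with a measurable definition
    // of success. Field names here are illustrative, not from any tool.
    interface Hypothesis {
      description: string;     // the change being proposed
      metric: string;          // the data point that defines success
      targetChange: number;    // expected relative change, e.g. 0.01 for +1%
      evaluationWeeks: number; // how long to measure before judging
    }

    const homepageGraph: Hypothesis = {
      description: "Add a new graph to the homepage",
      metric: "monthly C-Suite user logins",
      targetChange: 0.01,
      evaluationWeeks: 4,
    };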

Anyone in the company, from developers to testers to product owners, should feel entitled to propose a hypothesis. Not only does this empower team members to take ownership of their work, it also increases the opportunity for a truly great insight to bubble up.

The Implementation

The first version of the hypothesis (the prototype) should be the smallest unit of work necessary to evaluate its success. Defining the prototype can be a challenge, as developers and stakeholders often have different definitions of done. Try to avoid prototypes that require a financial investment, such as integrating a third-party solution.

If your application has a sufficiently large user base, consider testing the hypothesis through an A/B test (e.g., diverting a portion of your users to the new experience). An A/B test is an easy way to compare the existing experience against your prototype when evaluating the hypothesis.
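
One simple, deterministic way to divert that portion of users is to hash a stable user identifier into a bucket. Here’s a sketch, assuming a Node environment and a hypothetical 10% split:

    import { createHash } from "crypto";

    // Hash the user ID so the same user always lands in the same bucket,
    // keeping their experience stable across sessions.
    function assignVariant(
      userId: string,
      prototypeShare = 0.1 // hypothetical: 10% of traffic sees the prototype
    ): "control" | "prototype" {
      const hash = createHash("sha256").update(userId).digest();
      const bucket = hash.readUInt32BE(0) / 0xffffffff; // map to [0, 1]
      return bucket < prototypeShare ? "prototype" : "control";
    }

    console.log(assignVariant("user-123")); // deterministic per user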

Evaluating the Hypothesis

Hopefully it goes without saying that your measurable definition of success should be a data point accurately captured by your application. If it isn’t yet captured, maybe your team’s first hypothesis should be “if we start capturing this metric, we’ll be able to evaluate future hypotheses with confidence”.
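
As a sketch of what capturing a metric can look like, assuming a hypothetical analytics endpoint rather than any specific provider’s API:

    // Minimal event tracking. The endpoint and event shape are
    // hypothetical; substitute your analytics provider's API.
    async function trackEvent(
      name: string,
      properties: Record<string, unknown>
    ): Promise<void> {
      await fetch("https://analytics.example.com/events", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name, properties, timestamp: Date.now() }),
      });
    }

    // Fired wherever the metric occurs, e.g. on login:
    trackEvent("login", { role: "c-suite" }).catch(console.error);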

Assuming your metric is tracked and your prototype is in production, how do you evaluate whether your hypothesis held true?

After a period of time, typically a few weeks, evaluate the hypothesis metric against a similar dataset. I’ve personally found the most effective dataset for comparison to be the period following the last time the feature was altered. For example, if your hypothesis was “adding a new graph to the homepage will increase monthly C-Suite user logins by 1%”, compare against the number of C-Suite user logins from the period after the last homepage change. Alternatively, if you recently evaluated a prototype targeting C-Suite login rates, compare against that instead.
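
The arithmetic itself is simple; the judgment is in choosing the comparison window. Here’s a sketch with made-up login counts:

    // Relative change between a baseline window (e.g. the weeks after the
    // last homepage change) and the prototype's evaluation window.
    function relativeChange(baseline: number, current: number): number {
      return (current - baseline) / baseline;
    }

    // Hypothetical counts: 412 C-Suite logins in the baseline month,
    // 431 during the prototype's evaluation month.
    const change = relativeChange(412, 431); // ~0.046, i.e. ~4.6%
    const hypothesisHeld = change >= 0.01;   // target was a 1% increase
    console.log(`${(change * 100).toFixed(2)}% change, held: ${hypothesisHeld}`);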

Evaluation Pitfalls

Be aware of anomalies that may skew your metrics. If your application deals with taxes, is it April 15th? Did your marketing team run a big promotion the week before?

Also, avoid testing too many hypotheses concurrently, especially if they’re evaluated against the same metric; otherwise you won’t be able to attribute movement in the metric to any single prototype.

Proceeding from the Prototype

Did a successful hypothesis radically change the desired metric? Iterate on it, testing further hypotheses against the same metric.

Did the hypothesis have a neutral or minimal effect on the desired metric? Deprioritize similar hypotheses for a time and focus on the radically successful features. A neutral result still isn’t a failure; the feature may require future ideas to unlock its full potential.

Rarely have I seen a hypothesis have a negative effect on the desired metric unless it was caused by a bug or regression. Thankfully, because prototypes should be minimal units of work, it should be trivial to revert the feature altogether. However, before you abandon ship, ensure the metric is being tracked accurately. I can’t tell you how many times I’ve discovered bad data points only after the prototype measured by that metric reached production.
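
Shipping prototypes behind a feature flag is one way to keep that revert trivial: turning the feature off becomes a configuration change rather than a rollback. A sketch with a hypothetical in-memory flag store:

    // A stand-in for whatever flag system you use; flipping the flag off
    // reverts the prototype without touching the deployed code.
    const flags: Record<string, boolean> = { "homepage-graph": true };

    function isEnabled(flag: string): boolean {
      return flags[flag] ?? false;
    }

    function renderHomepage(): string {
      return isEnabled("homepage-graph")
        ? "homepage with the new graph"
        : "homepage without the graph";
    }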

Conclusion

There are four major advantages to adopting this methodology:

  1. Small units of work are safer and easier to deploy.
  2. Everyone in the company can see whether a feature was successful or not.
  3. The responsibility for a product’s success is no longer squarely on the divinations of the product owner.
  4. The team feels a sense of ownership.

In order for software development to gain recognition as an engineering discipline, especially the oft-overlooked frontend side of software, we need to begin evaluating success by unambiguous, quantitative measurements. This methodology is the safest, fastest, most effective mechanism to achieve this goal.

As always, thanks for reading. - Ian