How to measure success for B2C software

Jack McDonald
5 min read · Feb 14, 2021

Building ecommerce software is like “throwing shit against a wall and seeing what sticks.”

But if you’re not measuring what you’re building, you’re effectively throwing with a blindfold on. You can’t tell which ideas stuck and which slid unceremoniously to the floor. You can’t adjust your aim or improve your technique. So the chances of increasing the percentage of good throws are close to zero.

But you don’t have to throw blindly. Measuring success in a B2C company is easy. With some free third-party software, you can track the steps a user takes before they complete a purchase.

This is called a funnel because the typical shape visualising the flow of users is similar to a funnel you might’ve chugged beer out of at uni. In a generic B2C company, the behaviour of users looks like this:

A typical ecommerce funnel, where some visitors convert to buyers
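
To make the funnel concrete, here’s a minimal sketch in Python of how you might compute per-step conversion from raw step counts. The steps and numbers are invented for illustration; a real analytics tool reports these for you.

```python
# Hypothetical event counts for each funnel step, top to bottom.
funnel = [
    ("Visited site", 100_000),
    ("Viewed a product", 40_000),
    ("Added to cart", 12_000),
    ("Reached checkout", 8_000),
    ("Completed purchase", 5_000),
]

# Conversion at each step is that step's count divided by the previous step's.
for (step, count), (_, prev_count) in zip(funnel[1:], funnel):
    print(f"{step}: {count / prev_count:.1%} of previous step")

# Overall conversion is the bottom of the funnel divided by the top.
print(f"Overall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
```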

Product innovators try to maximise the total number of people who make it through the funnel. There are two ways to do this.

The first option is to increase the number of people who visit the site, aka acquisition. Assuming the completion rate of the funnel stays constant, additional visits will result in additional sales.

The second option is to increase the percentage of people who make it through each step, aka conversion.

Increasing acquisition and conversion to increase total sales

Product innovators are constantly thinking of cost effective ways to increase conversion and acquisition, and ultimately increase the bottom line.
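
Back-of-the-envelope, the two levers multiply: total sales = visits × conversion. A quick sketch with invented numbers shows that a 20% lift on either lever produces the same uplift in sales:

```python
visits = 100_000       # acquisition: people who arrive at the site
conversion = 0.05      # conversion: share of visitors who end up buying

print(f"Baseline: {visits * conversion:,.0f} sales")

# Lever 1: grow acquisition by 20% at constant conversion.
print(f"More visits: {visits * 1.2 * conversion:,.0f} sales")

# Lever 2: lift conversion from 5% to 6% at constant traffic.
print(f"Better conversion: {visits * 0.06:,.0f} sales")
```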

Example: Increasing completion of your ecommerce checkout

Let’s pretend you work at a typical ecommerce company.

When looking at the data, you see that users in countries with immature online-shopping markets are dropping off at high rates during checkout.

During user interviews, customers from these countries mention they’ve been burnt by credit card fraud before and are hesitant to purchase online. Some have paid for goods which have never turned up.

You hypothesise that emphasising your secure checkout and robust returns policy on the checkout page will make users more trusting and encourage them to enter their card details and complete the purchase.

You flesh out some prototypes and product documentation to convince other stakeholders to endorse your idea.

The product doc lists out the target uplift, the resulting revenue and how much it will cost. It will look something like this:

Predicting and measuring the ROI for a new product initiative

The baseline is what you think will happen if nothing changes. The target is the impact you hope your new release will have. You fill in the “Actual” column once you’ve released your changes and have re-measured the conversion of your users.
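
If you want to sanity-check a doc like this, the arithmetic behind the target is straightforward. Here’s a sketch using the 5% to 7% uplift from the example below, with a hypothetical average order value (the article doesn’t give one):

```python
annual_visitors = 1_000_000      # from the example: site traffic per year
checkout_rate = 0.10             # share of visitors who reach checkout
baseline_conversion = 0.05       # current checkout completion (baseline)
target_conversion = 0.07         # hoped-for completion (target)
avg_order_value = 80.0           # hypothetical: not given in the article

annual_checkouts = annual_visitors * checkout_rate
extra_orders = annual_checkouts * (target_conversion - baseline_conversion)
extra_revenue = extra_orders * avg_order_value

print(f"{extra_orders:,.0f} extra orders -> ${extra_revenue:,.0f} extra revenue/year")
# Compare that figure against the build cost to estimate ROI.
```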

But in a company with multiple product streams continuously releasing new functionality, how do you know that your initiative was the one that moved the needle? Or how do you know it’s not just seasonal? You don’t want to have to wait 12 months to find out if your release was a success or simply the time of year you launched.

There is a simple solution. You A/B test your changes.

In an A/B test, you serve 50% of your users the current checkout (control) and the other 50% the checkout with the changes (variant).

The original checkout (left) and the experimental checkout (right)

You then measure the behaviour of the two checkouts over time to see which one has the higher completion rate.
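
How you split users matters: each user should see the same version every time they return. A common approach (sketched below; this isn’t any particular tool’s API) is to hash a stable user ID, salted with the experiment name:

```python
import hashlib

def cohort_for(user_id: str) -> str:
    """Deterministically assign a user to control or variant, 50/50."""
    digest = hashlib.sha256(f"checkout-trust-test:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

seen = {"control": 0, "variant": 0}
completed = {"control": 0, "variant": 0}

def record_checkout(user_id: str, purchased: bool) -> None:
    """Tally checkout views and completed purchases per cohort."""
    cohort = cohort_for(user_id)
    seen[cohort] += 1
    completed[cohort] += purchased  # True counts as 1, False as 0

# After the test has run, compare completion rates:
# completed["variant"] / seen["variant"] vs completed["control"] / seen["control"]
```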

During your A/B test, the performance of each cohort will fluctuate. One day the new checkout may be outperforming the old one, and you’ll pat yourself on the back for being a product genius. A few days later the momentum might shift: the old site is performing better and the project suddenly looks like a waste of time.

The results of an A/B test oscillate over time

To remove this uncertainty and be sure your new checkout has moved conversion in the right direction, you need a data set large enough to reach statistical significance.
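
For the comparison itself, a two-proportion z-test is one standard way to check whether an observed difference is likely real. If you have the statsmodels package installed, it looks something like this (the counts are invented):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: completed purchases out of checkout views per cohort.
completions = [98, 138]     # control, variant
views = [1965, 1965]

z_stat, p_value = proportions_ztest(completions, views)
print(f"p-value: {p_value:.3f}")  # below 0.05 -> unlikely to be random chance
```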

The size of the data set depends on the size of the change you’re trying to measure. It also depends on the confidence you want to have in your results.

In our example, you’re targeting an increase in conversion from 5% to 7%. You can use an online sample size calculator to determine how many subjects you need to be 95% confident that your changes actually increased conversion and the difference wasn’t just random.

Calculating sample size for a statistically significant A/B test at 95% confidence

Based on your estimated uplift, you need 3,930 users (1,965 per variant) to detect whether the new checkout performs better or worse than the old checkout.

In this example, your hypothetical ecommerce site gets 1,000,000 visitors a year. Assuming 10% of them reach checkout and see either Control A or Variant B, it will take about 15 days to get a statistically significant sample size.
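
You can reproduce both numbers in code. The sketch below uses statsmodels, which computes sample size from an arcsine effect size, so its answer won’t exactly match the online calculator’s 1,965 per variant; statistical power (here 80%) is also a separate knob from the 95% confidence level:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Sample size to detect a lift from 5% to 7% at alpha = 0.05 (95% confidence).
effect = proportion_effectsize(0.07, 0.05)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

# Duration: 10% of 1,000,000 annual visitors reach checkout, ~274 per day.
checkouts_per_day = 1_000_000 * 0.10 / 365
days = 2 * n_per_variant / checkouts_per_day

print(f"{n_per_variant:,.0f} users per variant, ~{days:.0f} days to collect")
```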

You MUST run your experiment until you reach the required sample size. Ending the test early can produce misleading results, because the random swings you saw earlier can masquerade as a real effect.

Measure more. Build faster.

The beauty of this method is that it embodies the build-measure-learn principle.

You can spend less time wondering what might work and more time knowing. Product launches become less risky because you can revert any unsuccessful changes. And you spend more time designing and building new features, rather than debating with those who hold the purse strings.

Now that you’ve got an objective way to measure success of your B2C product launches, it’s time to take the blindfold off and start throwing with accuracy.
