> You gotta know when to hold 'em, know when to fold 'em, know when to walk away, know when to run.
>
> *Kenny Rogers*

Sometimes, when you are running a PPC campaign, you find a keyword, ad group (or even a campaign!) that has low volume. So low that you can go a while without **any** conversions. When you find yourself in this situation, it's important to know (as Kenny would say) *when to fold 'em*.

The way I approach this problem is to select a conversion rate (whether a target, the conversion rate of other elements of the campaign, or a historical rate) and then pose the question as:

**How many clicks have to elapse without a conversion before we are sure (with 95% confidence) that the conversion rate is lower than our estimated conversion rate?**

The following table shows, for sample conversion rates, how many clicks have to go by without a conversion before you can be pretty sure the actual conversion rate is lower than the target:

| Historic conversion rate | Clicks without a conversion that should worry you (95% level) |
|---|---|
| 0.1% | 2995 |
| 0.5% | 598 |
| 1.0% | 299 |
| 1.5% | 199 |
| 2.0% | 149 |
| 2.5% | 119 |
| 3.0% | 99 |
| 5.0% | 59 |
| 10.0% | 29 |
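The thresholds above can be reproduced in a few lines of Python. This is a sketch of the calculation (the formula itself is derived in the maths section below); the function name is my own:

```python
from math import ceil, log

def clicks_before_worrying(p, confidence=0.95):
    """Smallest click count k such that a run of k clicks with no
    conversion has probability below 1 - confidence, assuming true
    conversion rate p. Solves (1-p)^k < 1 - confidence for k."""
    return ceil(log(1 - confidence) / log(1 - p))

for p in (0.001, 0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.05, 0.10):
    print(f"{p:.1%}: {clicks_before_worrying(p)} clicks")
```

Running this reproduces the table exactly (e.g. 299 clicks for a 1.0% rate).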

## Why you need advanced maths to do PPC

You don't actually need a degree in probability and statistics to manage a PPC campaign effectively. Creativity and attention to detail are probably the greater requirements. Calls like the one that is the subject of this post can often be made by 'gut feel' without needing to know exact probabilities.

Having said that, gut feel doesn't scale. If you are running dozens of campaigns, with hundreds of ad groups and many thousands of keywords, you need to be able to automate some elements of the process. A big part of where we add value for our clients is by combining industry knowledge, creativity and that attention to detail with our mathematical approach. We build tools that leverage our maths degrees (I knew there was a reason for sitting through advanced probability and stochastic modeling) into scalable solutions that help us get deep into the heart of what's going on with a campaign, no matter what size it is.

## The maths bit - warning, probability ahead!

Feel free to skip this section (unless you just want to see us doing some clever stuff!).

So, the question can be reframed as:

> How many clicks have to occur without a conversion before we are 95% sure that the conversion rate is less than p?

The generalised case (I have n conversions in C clicks and a supposed conversion rate of p, should I be worried?) is harder. We have a tool in development that will do this kind of statistical analysis, but it involves beta functions and I can't do it in my head...!
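That tool isn't public, but one standard way to frame the general case is Bayesian: with a uniform prior, observing n conversions in C clicks gives a Beta(n+1, C−n+1) posterior for the conversion rate, and for integer parameters the beta integral collapses to a binomial tail sum, which you can do without any special functions. A sketch of that approach (my own illustration, not Distilled's tool):

```python
from math import comb

def prob_rate_below(p, conversions, clicks):
    """P(true conversion rate < p) under a Beta(n+1, C-n+1) posterior
    (uniform prior on the rate). For integer parameters a and b, the
    regularised incomplete beta I_p(a, b) equals a binomial tail sum:
    sum over j = a..a+b-1 of C(a+b-1, j) p^j (1-p)^(a+b-1-j)."""
    a = conversions + 1            # posterior alpha
    b = clicks - conversions + 1   # posterior beta
    n = a + b - 1                  # equals clicks + 1
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(a, n + 1))

# e.g. confidence that the true rate is below 1% after 299 clicks, 0 conversions:
print(prob_rate_below(0.01, 0, 299))
```

With zero conversions this reduces to 1 − (1−p)^(C+1), which lines up (give or take the prior) with the simple case worked through below.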

The basic case, however, is pretty straightforward undergraduate statistics.

Let p be the probability that a click converts, so the probability that a given click doesn't convert is 1-p.

Let W be the random variable 'number of clicks we have to wait before we get a conversion' (on whatever subset of the account we are watching). Then W is the *waiting time* of a sequence of *Bernoulli trials*: a *geometric* random variable with mass function f(k) = p(1-p)^(k-1) for integer k ≥ 1 (*).

Then **P**(W > k) = (1-p)^k (the probability of having to wait longer than k clicks is the probability that none of the first k clicks convert).
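You can sanity-check that identity by simulation. A quick sketch, drawing click outcomes as independent Bernoulli trials (parameter values are illustrative):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
p, k, trials = 0.02, 149, 20000

# Fraction of simulated click streams with no conversion in the first k clicks
no_conversion = sum(
    all(random.random() >= p for _ in range(k)) for _ in range(trials)
) / trials

# Both values should be close to each other (around 0.049 for these parameters)
print(no_conversion, (1 - p) ** k)
```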

So, if we want to be confident at the 95% level that there is something wrong (i.e. we should have already had a conversion), we need to find the k such that:

0.05 > **P**(W > k) (i.e. 5% chance we wouldn't have had a conversion within k clicks)

=> 0.05 > (1-p)^k

=> log(0.05) > k*log(1-p)

=> k > log(0.05) / log(1-p) (reverse inequality since log(1-p) is negative)

To be 99% certain, we'd need to see k > log(0.01) / log(1-p) clicks and no conversions.

Note that you can't generalise this by simply looking for gaps of k clicks with no conversions inside a larger run that does include conversions. While 95% sounds like a high bar, it means that, on average, one in every 20 disjoint sets of k clicks will contain no conversions at all. So as soon as you start scanning across multiple sets of k clicks, the 95% guarantee no longer holds.
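Here's that multiple-comparisons effect in numbers: a sketch assuming 20 disjoint windows of k clicks at a true rate of p (illustrative parameter values):

```python
p, k, windows = 0.02, 149, 20

# Probability a single window of k clicks contains no conversions
p_empty = (1 - p) ** k  # just under 5% by construction of k

# Probability at least one of the 20 disjoint windows is conversion-free,
# even though the true rate is exactly p throughout
p_any_empty = 1 - (1 - p_empty) ** windows

print(f"{p_any_empty:.0%}")
```

Even at a perfectly healthy conversion rate, there is roughly a 64% chance that at least one of the 20 windows looks alarming.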

It's a good quick 'n' dirty way of working out statistical significance on small sample sizes, however.

(*) *Probability and Random Processes*, Grimmett and Stirzaker


##### About the author

Will founded Distilled with Duncan in 2005. Since then, he has consulted with some of the world's largest organisations and most famous websites, spoken at most major industry events and regularly appeared in local and national press.