I am a senior data scientist at LinkedIn working on SEO and guest experience. I presented at SMX London last month on how to apply data science to SEO. The session covered topics including metrics, A/B testing, SEO vs. SEM cannibalization testing, and machine learning for content quality. Here are a few questions from session attendees with my responses.
For A/B testing, do you use any specific tools/processes?
We have internal infrastructure that supports user-friendly A/B test setup and performs automatic statistical analysis of key metrics. If you are interested, you can see this paper [pdf] about how we do experiments at LinkedIn. If you do not have internal tools available, you can randomize your target set of URLs into two groups and compare the metrics between the groups using open-source statistical tools such as R or Python's SciPy package.
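As a minimal sketch of that do-it-yourself approach: the snippet below randomizes a set of URLs into control and treatment groups and compares a metric with a two-sample t-test from SciPy. The URL names and the synthetic per-URL click numbers are made up for illustration; in practice you would plug in your own metric per URL (clicks, impressions, etc.) pulled from Search Console or your analytics store.

```python
import random
from scipy import stats

random.seed(42)  # reproducible randomization

# Hypothetical data: one metric value (e.g. daily organic clicks) per URL.
url_clicks = {f"/page/{i}": random.gauss(100, 15) for i in range(1000)}

# Randomize URLs into two groups of equal size.
urls = list(url_clicks)
random.shuffle(urls)
mid = len(urls) // 2
control = [url_clicks[u] for u in urls[:mid]]
treatment = [url_clicks[u] for u in urls[mid:]]

# Welch's t-test: is the difference in mean clicks statistically significant?
t_stat, p_value = stats.ttest_ind(control, treatment, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

With a low p-value (commonly below 0.05) you would conclude the treatment moved the metric; here both groups come from the same distribution, so no real difference exists to detect.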
How do you sample SEO A/B testing? For how long do you run it?
At LinkedIn’s scale, where we often have hundreds of thousands of URLs in each experiment, we simply randomize URLs into two groups and compare their metric impact. However, when we start the experiment, we roll out the experiment feature gradually, from a small percentage up to 50%, to minimize risk from the experiment.
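One common way to implement that kind of gradual ramp (a sketch, not LinkedIn's actual system) is deterministic hash-based bucketing: each URL hashes to a stable value in [0, 1), and a URL is in treatment only while its value falls under the current ramp percentage. The function name and salt below are hypothetical; the point is that raising the ramp from 1% to 50% only adds URLs to treatment, never shuffles existing ones between groups.

```python
import hashlib

def in_treatment(url: str, ramp_pct: float, salt: str = "seo-exp-1") -> bool:
    """Deterministically bucket a URL into [0, 1) via a salted hash;
    the URL is in treatment while its bucket is below the ramp percentage."""
    digest = hashlib.md5((salt + url).encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000  # roughly uniform in [0, 1)
    return bucket < ramp_pct

# Ramping from 1% to 10% to the final 50% of URLs.
for pct in (0.01, 0.10, 0.50):
    treated = sum(in_treatment(f"/page/{i}", pct) for i in range(100_000))
    print(f"ramp {pct:.0%}: {treated} of 100000 URLs in treatment")
```

Because the bucketing is a pure function of the URL and the salt, every server and every run agrees on the assignment without any shared state.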
In terms of experiment duration, it depends on the type of experiment and the type of product we are experimenting on. But generally, we try to run it for at least a month to give search engines enough time to crawl and re-index the changes.
Read more here: http://feeds.searchengineland.com/~r/searchengineland/~3/VFMnwv7xb_A/smx-overtime-heres-how-to-make-seo-gains-through-data-science-318065