A/B testing, split testing, or bucket testing is a controlled comparison of the effectiveness of variants of a website, email, or other commercial product.
- Stackoverflow.com Wiki
It can be wise to run your A/B tests twice. Get results. Then run the same test again. You’ll find that doing so helps to eliminate illusory results.
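The intuition behind re-running a test can be sketched with a small simulation (this is my illustration, not something from the quoted article): give both variants the *same* true conversion rate, so every "significant" result is an illusion, and compare how often one run versus two consecutive runs declares a winner.

```python
import math
import random

def z_test_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test; True if |z| exceeds the 5% critical value."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    z = (conv_a / n_a - conv_b / n_b) / se
    return abs(z) > z_crit

def run_test(rate, n, rng):
    """One simulated A/B test where both arms share the same true rate."""
    conv_a = sum(rng.random() < rate for _ in range(n))
    conv_b = sum(rng.random() < rate for _ in range(n))
    return z_test_significant(conv_a, n, conv_b, n)

rng = random.Random(0)
trials = 2000
# A single test at the 5% level produces illusory wins about 5% of the time.
single = sum(run_test(0.10, 1000, rng) for _ in range(trials))
# Requiring the SAME result twice in a row multiplies the error rates,
# driving the false-positive rate down to roughly 0.05 * 0.05 = 0.25%.
double = sum(run_test(0.10, 1000, rng) and run_test(0.10, 1000, rng)
             for _ in range(trials))
print(f"false positives, one run:  {single / trials:.3f}")
print(f"false positives, two runs: {double / trials:.3f}")
```

The rates, sample sizes, and trial count above are arbitrary choices for the sketch; the point is only that demanding a replication sharply cuts the rate of illusory results.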
A/B testing is not a new concept, and it's conceptually straightforward. However, implementing it effectively on your own websites can be very difficult without specialised tooling. If you're hosting your sites on Microsoft Azure Websites, though, it's incredibly easy to set up. At the time of writing (November 2014), Azure Websites Testing in Production is fully implemented in the New Portal, but documentation is very limited, hence my decision to create this article.
Competition in the App Store is fierce, and if an indie app developer wants to get noticed, having an amazing product is no longer enough.
A/B testing is frequently billed as a scientific way to validate design decisions. Occasionally referred to as split testing, A/B testing is a simple process that on the surface appears to promise concrete results.
A/B testing is used far too often for something that performs so badly. It is defective by design.
Ever wonder how Netflix serves a great streaming experience with high-quality video and minimal playback interruptions? Thank the team of engineers and data scientists who constantly A/B test their innovations to our adaptive streaming and content delivery network algorithms.
Have you ever wondered why Netflix has such a great streaming experience? Do you want to learn how they completed their homepage plus other UI layout redesigns through A/B testing?
Dealing with data means becoming comfortable with uncertainty, and A/B tests make this reality extremely apparent. Handling uncertainty wisely and using statistical tools like A/B tests well can give us the ability to make better decisions.
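One way to make that uncertainty visible, rather than hiding it behind a yes/no verdict, is to report a confidence interval for the lift. This is a minimal sketch with hypothetical numbers (the 10% vs. 12% conversion figures are invented for illustration), not a recommendation from any of the quoted authors:

```python
import math

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% confidence interval for the difference in conversion rates
    (variant B minus variant A), using the normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical data: 200/2000 conversions for A, 240/2000 for B.
lo, hi = diff_ci(200, 2000, 240, 2000)
print(f"95% CI for lift: [{lo:+.4f}, {hi:+.4f}]")
```

An interval that barely clears zero, as here, tells a far more honest story than "B wins": the true lift could plausibly be tiny, which is exactly the kind of uncertainty a good decision process has to sit with.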