In optimization lore, AB testing began in 2000 with Google and its first attempt to optimize the number of search results it displayed. The results were inconclusive, apparently because slow loading times skewed the data. Since then, AB testing has become an industry standard—and even an industry! There’s a lot of discussion, documentation, and direction available for conducting AB tests.
But as with most things, there’s room to grow.
Today, AB testing has been joined by multivariate testing, funnel testing, and more. Each type of variable testing has given businesses a new way to validate their ideas and ultimately drive real business results.
A typical variable test will make a change to the site design, split the site traffic, and observe which version gets a better response. In other words, it tests concepts.
But what if you could flip the script and instead test the audience?
Okay, okay, this does happen today, and often. You can test a design change on people who sign in to their account against people who don’t, or people who shop for men’s fashion against people who shop for women’s fashion. The problem is that these tests still lump large groups of people together based on generic attributes.
There’s still further to go. What if we could conduct an AB test with extreme specificity in the audience we choose to target?
An Example of Highly Targeted Split Testing
Imagine an e-commerce website that sells women’s denim jeans. Women’s clothing in general is known for inconsistent sizing, and pants are especially notorious for this.
Some customers are there just to browse, some are there to buy another pair of the jeans they already own, and some are considering purchasing their first pair of your brand of jeans.
The average AB test might ask: does showing a message about the easy return policy increase the conversion rate? And then test the idea by showing half of the site visitors the message.
But only one of these customer types will care about the easy return policy! The customer who is just there to browse will be disrupted by the irrelevant message. The customer who already loves the jeans knows she won’t want to return them. Only the customer considering her first pair will be put at ease knowing she won’t be stuck with ill-fitting pants.
However, if you conduct a highly targeted split test, you can ask a sharper question: does showing a message about the easy return policy to hesitant shoppers increase the conversion rate?
The first two types of customers are left alone, or even targeted with a different test. Meanwhile, you gain a deep insight into the third segment of your traffic. The half of hesitant shoppers who saw the return policy message converted 30% more than the half who did not see it. Success!
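To make the idea concrete, here is a minimal simulation of what such a targeted test measures. Everything in it is hypothetical: the segment names, the base conversion rates, and the 30% lift are all stand-ins matching the fictional example above, not real data.

```python
import random

random.seed(42)

def simulate_visit(segment, saw_message):
    """Toy conversion model: only hesitant shoppers respond to the message."""
    base_rates = {"browser": 0.01, "repeat_buyer": 0.20, "hesitant": 0.05}
    rate = base_rates[segment]
    if segment == "hesitant" and saw_message:
        rate *= 1.3  # hypothetical 30% lift from the return-policy message
    return random.random() < rate

# Split only the hesitant segment; browsers and repeat buyers are left alone.
results = {"control": [], "variant": []}
for _ in range(20000):
    arm = random.choice(["control", "variant"])
    converted = simulate_visit("hesitant", saw_message=(arm == "variant"))
    results[arm].append(converted)

for arm, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{arm}: {rate:.3f} conversion rate")
```

The key design point is that the split happens *inside* one narrow segment, so the measured lift isn’t diluted by visitors the message was never meant for.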
Of course, that example was fictional. Is this even possible in real life?
Real Life Highly Targeted AB Testing Technology
Running this kind of highly targeted split test is a technological challenge. You have to identify and target shoppers with extreme specificity, in real time—and that demands a lot of processing power.
Enter machine learning.
Granify’s machine learning technology can process more than 400 behavioral data points every second, for every online shopper it has access to. Hidden in this vast amount of behavioral data are patterns of digital body language. So when an online shopper’s mouse movements, scrolls, and hovers reveal they are hesitant, the machine learning technology can target them with a relevant message.
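Granify’s actual models are proprietary machine learning, but the underlying idea—scoring hesitancy from behavioral signals—can be sketched with a toy rule-based stand-in. Every signal name and threshold below is an invented illustration, not the real system.

```python
from dataclasses import dataclass

@dataclass
class BehaviorSnapshot:
    """A handful of toy behavioral signals; real systems track hundreds."""
    seconds_on_page: float
    size_chart_views: int
    cart_hovers: int
    back_and_forth_scrolls: int

def looks_hesitant(b: BehaviorSnapshot) -> bool:
    """Score simple signs of fit anxiety; flag shoppers who show several."""
    score = 0
    if b.seconds_on_page > 90:       # lingering without buying
        score += 1
    if b.size_chart_views >= 2:      # repeatedly checking sizing
        score += 1
    if b.cart_hovers >= 3:           # hovering over the cart, not committing
        score += 1
    if b.back_and_forth_scrolls >= 4:  # re-reading the same details
        score += 1
    return score >= 3

shopper = BehaviorSnapshot(seconds_on_page=140, size_chart_views=3,
                           cart_hovers=4, back_and_forth_scrolls=2)
print(looks_hesitant(shopper))  # → True: three of four signals fired
```

A learned model replaces these hand-picked thresholds with weights fit to observed outcomes, but the output is the same kind of decision: target this shopper with the message, or leave them alone.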
It’s not only real and available; it continues to prove valuable too. On average, Granify’s machine learning technology raises overall e-commerce revenue by 3%–5%.
How? As the saying goes, retail is detail.