Introducing Nick Schembri: Nick is one of our talented Product Strategists. From owning his own property investment company at the ripe old age of 17, to building startup tech company Dash Tickets and successfully expanding it to international acquisition, Nick has the business experience and marketing skills to ensure apps hit the market with commercialisation at the forefront.
It’s pretty common sense that the best apps are constantly updated to keep them the most engaging, revenue-generating tech products they can be.
If you plan on making changes to your app post launch (and you definitely should!), do you plan on watching your key app metrics go up and down with no idea why?
I’m guessing not.
I’m guessing you’d rather know what decisions actually made a difference to the numbers, so you can use that knowledge as ammo for future decisions.
But with so many other aspects of your business to be concerned about, how can you easily review, implement, and test aspects of your mobile application?
Enter…A/B testing, sometimes called ‘split testing’.
What is A/B testing?
A/B testing is a method of comparing two versions of a webpage or app against each other to determine which one performs better.
It involves making ONE change at a time to an existing feature or screen in your app, for example, and seeing if there are any significant changes to the metric you’re measuring.
This way, you can pinpoint exactly what effect (if any) the change had on user behaviour, sales, retention rates, or anything else you want to measure.
- Ever wondered what effect changing the colour of the ‘buy’ button on your e-commerce app would do?
- What about giving users the option to login with Twitter instead of Facebook?
- What would happen to push notification opt-in rates if you prompted users on their 3rd session versus the 1st session?
These are all questions that A/B testing can answer for you.
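Under the hood, most A/B testing tools split users into variants deterministically, so each user always sees the same version. Here’s a toy sketch of how that assignment might work (the function name and hashing scheme are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id together with the experiment name means each user
    always sees the same variant, and different experiments split
    users independently of one another.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user/experiment pair always lands in the same bucket.
print(assign_variant("user-42", "buy-button-colour"))
```

Because the split is stable, you can log the variant alongside each conversion event and compare the two groups later.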
A/B testing for web pages has been commonplace for years now, but A/B testing for apps is a relatively new industry (just like apps themselves!).
Here are 4 top tips for ensuring your A/B tests get real, actionable results.
- Start thinking “Segment first”
Consider segmenting your users into different buckets and A/B testing based on demographics like age and gender, or behavioural characteristics like acquisition channel.
Mobile visitors perform differently than desktop ones, new visitors are different than returning visitors, and e-mail traffic is different than organic.
You may get distorted data if you are treating all your users the same.
Where you get really interesting results is when you can figure out what segments display the ideal characteristics that you wish all your users did.
This will reveal where you need to funnel more of your budget into and which segments you need to nurture more.
For example, you might find that you’re getting the most downloads from a Facebook ad, but the downloads you acquire from organic search on the app store are much more likely to become active, engaged users.
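Breaking results down by segment can be as simple as grouping your install events by acquisition channel before computing the metric. A minimal sketch (the data and field names are made up for illustration):

```python
from collections import defaultdict

# Toy install data, broken down by acquisition channel.
# In practice this would come from your analytics tool.
installs = [
    {"channel": "facebook_ad", "became_active": False},
    {"channel": "facebook_ad", "became_active": False},
    {"channel": "facebook_ad", "became_active": True},
    {"channel": "organic_search", "became_active": True},
    {"channel": "organic_search", "became_active": True},
]

totals, active = defaultdict(int), defaultdict(int)
for row in installs:
    totals[row["channel"]] += 1
    active[row["channel"]] += row["became_active"]

for channel, n in sorted(totals.items()):
    print(f"{channel}: {active[channel] / n:.0%} became active ({n} installs)")
```

A blended number would hide the gap between channels; the per-segment view is what tells you where to funnel budget.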
- Get Sophisticated
“Big data” is becoming increasingly important for the modern-day marketer.
Gone are the days when a marketer could justify marketing spending on a campaign because they “thought it might resonate with the audience”.
In fact, many successful marketing teams have a dedicated Data Analyst on staff these days.
Sophisticated data is harder and more time consuming to capture, but it answers the WHY questions as well as the WHAT questions of user behaviour.
So, don’t shy away from sophistication. Dive deep!
- Double Up On Tests
A/B tests are designed to imitate scientific experiments, but I’m guessing if you’re reading this, you don’t work in a lab.
Marketers will often make mistakes in their A/B tests that raise doubt about the validity of results, like:
- Certain extraneous factors simply can’t be held constant
- The test wasn’t run for long enough
- The test wasn’t able to be done with a large enough sample size
For this reason, some A/B tests can produce illusory results.
It can be wise to run your A/B tests twice. Note your results, then run the same test again.
If the results of the first test aren’t reliable, you’ll see a noticeable difference with the second.
But, if the uplift is real, you should still see uplift during the second test.
This approach isn’t fail-safe, but it will help.
- Run Tests For Longer
When in doubt, run your tests for longer than you think you need to.
Some experts suggest running your experiments until you’ve tracked thousands of conversion events.
But, in truth, it’s not the number of conversions that matters; it’s whether the time frame of the test is long enough to capture the natural variations in your users’ behaviour.
Types of variations:
- Do you get different conversion patterns during the day vs night?
- Weekdays vs weekends?
- Start of the month vs end of the month?
If you can, allow for a few cycles of variability in order to normalise your data.
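Once a test has run long enough, you still need to check whether the difference between the two variants is statistically meaningful. One common sanity check is a two-proportion z-test on the conversion counts; the sketch below is a generic illustration (not specific to any of the tools mentioned), and the numbers are made up:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 120/2400 conversions on A vs 160/2400 on B.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=160, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the uplift comes out significant at the 5% level; with a smaller sample or a smaller gap, it often won’t, which is exactly why running too short produces illusory results.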
Who can help me do this?
There are a number of companies out there who have systems to carry out this form of testing.
I’d recommend starting with:
- Optimizely – The big name in the A/B testing industry has recently added functionality for mobile apps.
- Apptimize – Solid tool with cool feature flagging to show certain users new features.
- Taplytics – User friendly, doesn’t require any coding.
Check out this great Autosend blog for a list of 15 A/B testing tools for mobile apps.
With the cost of acquiring customers increasing, A/B testing is without a doubt one of the most cost-effective methods of keeping the customers you have and increasing their value to your business.
This is what we call the low hanging fruit.
Without consistent improvement, your app users will leave you. Conduct tight A/B tests and give them a reason to stick around!