
Set up and track A/B Tests

Updated over a week ago

A/B Testing, sometimes called split testing, allows you to experiment with different user experiences to optimize your campaigns over time. With A/B Testing, you can compare different variations of a campaign to determine which one yields the strongest performance against your goals. This article explains how to set up an A/B Split in your experience flow and track its performance in Analytics and with Attributes.

Before you begin

  • Identify the goal for your A/B Test. For example, you might want to determine whether engagement with an intro screen leads to higher completion rates.

  • Plan the different variations of your experience flow you want to test. You could create two paths off of your A/B Split, sending 50% of traffic to an intro screen and the other 50% of traffic to the first question screen instead.

  • Have access to the builder map for your experience.

  • Consider using attributes to track the performance of each pathway beyond basic reporting.


Set up the A/B split

You can place an A/B Split anywhere you want within your experience.

  1. Right-click the '+' node in the builder map where you want to add the split.

  2. Select 'Split Traffic' from the menu.

  3. Select 'A/B Test'.

  4. Adjust the percentages to determine the traffic split for each pathway (e.g., 50% for each). A sketch of how percentage-based routing behaves appears after these steps.

  5. Label each pathway clearly to identify the different variations you are testing.

  6. If you want to test more than two options, select 'Add another branch'.

  7. Click 'Create'.

  8. Connect the A/B paths to their respective screens in your experience flow.
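The percentages you set in step 4 behave like weights on a random draw: each incoming session is independently assigned to one pathway in proportion to its percentage. The Python sketch below illustrates that idea; the pathway labels and weights are illustrative only, not the platform's implementation.

    import random

    # Illustrative pathway labels and percentages, mirroring what you might
    # configure in steps 4-6 (e.g., a three-branch split of 50/25/25).
    pathways = ["Intro", "No Intro", "Shortened Intro"]
    weights = [50, 25, 25]

    def assign_pathway():
        """Assign a single session to one pathway, in proportion to the weights."""
        return random.choices(pathways, weights=weights, k=1)[0]

    # Route ten simulated sessions.
    for session_id in range(10):
        print(session_id, assign_pathway())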


Evaluate your test in Analytics

You can track the results of your A/B test directly within Analytics, using the Traffic Splits dropdown.

  1. Navigate to the Experience Dashboard in Analytics.

  2. Click the Traffic Splits dropdown in the sub-navigation.

  3. Select the traffic split you want to evaluate. This filters the analytics to show only users who followed that path.

  4. Compare results by toggling between different paths.

Note: All traffic splits are reported in Analytics, not just A/B traffic splits.


Commonly Run A/B Tests

  1. Intro Screen vs. No Intro Screen

    This test determines whether users who see an intro screen engage with your experience at a higher rate. The A/B split is placed at the beginning of the experience and routes users either to the intro screen or straight to the first question.

  2. Lead Capture Placement

    This test helps determine where in the experience it is best to place a lead capture screen. To run it, use an A/B traffic split upfront that routes users either to a lead capture screen or straight to question 1. Then add a Rules-Based traffic split later in the flow that routes users based on whether they clicked the 'Submit' button on the upfront lead capture screen. Splitting traffic this way ensures that users who already saw the upfront lead capture screen are not also directed to the lead capture screen at the end of the experience. A rough sketch of this routing logic appears after the example below.

Example: Suppose you want to test whether including an intro screen impacts engagement and completion rates. To do this, set up two paths: Intro and No Intro. Within the Traffic Splits dropdown, you can select the Intro path to view the performance of only those users who saw the intro screen. To compare results, switch to the No Intro path by deselecting Intro and selecting No Intro.
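To make the Lead Capture Placement setup above concrete, here is a rough Python sketch of the routing logic it describes. The screen names, the 50/50 upfront split, and the assumed submit rate are illustrative, not taken from the platform.

    import random

    def route_experience(session):
        """Illustrative routing for the Lead Capture Placement test.
        `session` is a hypothetical dict tracking what the user has done."""
        screens = []

        # Upfront A/B split: half of sessions see the lead capture first,
        # the other half go straight to question 1.
        if random.random() < 0.5:
            screens.append("Upfront Lead Capture")
            # In the live experience this would be set when the user clicks 'Submit';
            # a 40% submit rate is assumed here purely for illustration.
            session["clicked_upfront_submit"] = random.random() < 0.4
        screens.append("Question 1")
        screens.append("Remaining questions")

        # Rules-Based split near the end: only show the closing lead capture
        # to users who did not submit the upfront form.
        if not session.get("clicked_upfront_submit", False):
            screens.append("End Lead Capture")
        screens.append("Results")
        return screens

    print(route_experience({}))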


Preview and publish

  1. Preview your experience to verify that the A/B split and attribute mapping are configured correctly.

  2. Publish your experience to deploy the A/B test.


Monitor performance of A/B split

After launching your experience, you can analyze the results to determine which variation performed best based on your goals.

  1. Navigate into the reporting dashboards.

  2. Monitor the results for your A/B split by viewing Analytics.

  3. If you are unsure how to interpret your A/B Test data, reach out to our Support team for assistance.
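If you want a starting point before contacting Support, one common way to compare two pathways is a two-proportion z-test on their completion rates. The sketch below uses made-up numbers and assumes you have pulled starts and completions for each pathway from Analytics.

    from math import sqrt
    from scipy.stats import norm

    # Hypothetical numbers from Analytics for each pathway.
    intro_starts, intro_completions = 1200, 540        # "Intro" path
    no_intro_starts, no_intro_completions = 1180, 495  # "No Intro" path

    p1 = intro_completions / intro_starts
    p2 = no_intro_completions / no_intro_starts

    # Pooled two-proportion z-test for the difference in completion rates.
    pooled = (intro_completions + no_intro_completions) / (intro_starts + no_intro_starts)
    se = sqrt(pooled * (1 - pooled) * (1 / intro_starts + 1 / no_intro_starts))
    z = (p1 - p2) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided

    print(f"Intro completion rate:    {p1:.1%}")
    print(f"No Intro completion rate: {p2:.1%}")
    print(f"z = {z:.2f}, p-value = {p_value:.3f}")
    # A small p-value (commonly < 0.05) suggests the difference is unlikely to be
    # due to chance alone; otherwise, keep collecting data before declaring a winner.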


Next steps

  • Determine a sample size or timeframe for collecting data on your A/B test (see the sample-size sketch after this list for one rough way to estimate it).

  • Schedule a check-in to review initial results and performance insights.

  • Decide whether to optimize further, promote the winning variation, or iterate with a new A/B test.
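For the sample-size step above, a standard two-proportion power calculation gives a rough idea of how many sessions each pathway needs. This sketch assumes a two-sided test and uses hypothetical baseline and target completion rates.

    from math import ceil
    from scipy.stats import norm

    def sample_size_per_path(baseline_rate, expected_rate, alpha=0.05, power=0.8):
        """Approximate sessions needed per pathway to detect a change in completion
        rate from baseline_rate to expected_rate with a two-sided test."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        variance = baseline_rate * (1 - baseline_rate) + expected_rate * (1 - expected_rate)
        n = (z_alpha + z_beta) ** 2 * variance / (baseline_rate - expected_rate) ** 2
        return ceil(n)

    # Hypothetical goal: detect a lift in completion rate from 45% to 50%.
    print(sample_size_per_path(0.45, 0.50))  # roughly 1,560 sessions per path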


FAQ

What is the best way to name Attributes so that the data from an A/B Test is easy to understand?

  • The attribute values you map should clearly describe the pathway or variation they represent. For example, using "No Intro" for the path without an Intro screen and "Intro" for the path that includes one makes the data easy to understand.

I set the traffic split on my A/B Test to send 50% of traffic to one pathway and 50% to the other, but traffic seems to be skewing toward one pathway. Why does this happen?

  • The A/B Split works the same way a coin flip does. Each time you flip a coin there is a 50% chance of heads and a 50% chance of tails, and each flip is independent of the last one. In the same way, each time a user reaches the A/B split, the chance of being sent down either pathway is 50/50, regardless of where any prior user session was sent. As the volume of sessions you collect grows, the split between the two pathways will trend toward 50/50, but it may never be a perfect split.
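The simulation below illustrates this coin-flip behavior: with only a handful of sessions the observed split can skew noticeably, but it trends toward 50/50 as volume grows. It is a generic sketch, not the platform's assignment code.

    import random

    def share_to_pathway_a(n_sessions):
        """Simulate n_sessions independent 50/50 assignments and return the
        share of sessions that went down pathway A."""
        a = sum(1 for _ in range(n_sessions) if random.random() < 0.5)
        return a / n_sessions

    random.seed(7)  # fixed seed for a repeatable illustration
    for n in (20, 200, 2000, 20000):
        print(f"{n:>6} sessions: {share_to_pathway_a(n):.1%} went to pathway A")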

What should I do if my engagement scores are low?

  • We recommend creating an A/B split traffic test to see whether users engage with the first question. You can then measure whether capturing attention early leads to higher completion rates.
