A research method called A/B testing lets experimenters compare two variations of a website to determine which one gets better results. The method can be applied to things other than websites, but for the purposes of this class, this blog will focus on websites.
According to uxdesign.cc, the procedure is used for several reasons, the main one being to improve conversion rates on specific page elements. For example, if a developer is creating an advertisement for the website, he or she may want to make the ad red, while someone else on the team may suggest green. An A/B test could be set up to see which color gets better results.
First, the team creates a hypothesis based on website analytics and on how visitors have historically interacted with advertisements. After developing a hypothesis such as, “A red advertisement will generate more clicks than a green advertisement,” the team will create two versions of the site and place a green advertisement on one and a red advertisement on the other.
Then, the team identifies the targets and creates criteria for the test. They may look at how quickly the advertisement was clicked, how many people clicked on it, how long a person stayed on the page the advertisement linked to, or other measures.
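To make those criteria concrete, here is a minimal sketch of how a team might compute them from an event log. The log format, variant names, and field meanings are all hypothetical, invented for illustration; they are not part of the uxdesign.cc procedure.

```python
from statistics import mean

# Hypothetical event log: (variant, seconds_until_click, seconds_on_landing_page).
# None for the timing fields means the visitor saw the ad but never clicked.
events = [
    ("red",   2.1,  35.0),
    ("red",   None, None),
    ("red",   4.8,  12.0),
    ("green", 3.0,  20.0),
    ("green", None, None),
    ("green", None, None),
]

def summarize(variant):
    """Compute the test criteria for one ad variant: click-through rate,
    average time until the click, and average time spent on the landing page."""
    shown = [e for e in events if e[0] == variant]
    clicked = [e for e in shown if e[1] is not None]
    return {
        "ctr": len(clicked) / len(shown),
        "avg_seconds_to_click": mean(e[1] for e in clicked) if clicked else None,
        "avg_seconds_on_page": mean(e[2] for e in clicked) if clicked else None,
    }

print(summarize("red"))
print(summarize("green"))
```

With real traffic the log would come from the site's analytics rather than a hard-coded list, but the aggregation step looks much the same.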
The important thing to note, according to uxdesign.cc, is that only one element should be changed per experiment.
After the criteria are created, the team determines how many people must visit the site for the results to be meaningful, and how long to run the experiment. Uxdesign.cc suggests running the experiment for a minimum of 10 to 14 days and not stopping it prematurely.
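One common way to estimate "how many people must visit" is the standard normal-approximation formula for comparing two proportions. The sketch below is an illustration of that general formula, not something taken from uxdesign.cc; the baseline and expected rates in the example are invented, and the z-values are the usual quantiles for a two-sided 5% significance level and 80% power.

```python
import math

def sample_size_per_variant(p_baseline, p_expected):
    """Rough visitors-per-variant estimate for detecting a change from
    p_baseline to p_expected, using the two-proportion normal approximation."""
    z_alpha, z_beta = 1.96, 0.84  # quantiles for alpha = 0.05 (two-sided), power = 0.80
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = (p_expected - p_baseline) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# e.g. detecting a lift from a 5% to a 6% click rate:
print(sample_size_per_variant(0.05, 0.06))
```

Even a small expected lift can require thousands of visitors per variant, which is one reason the 10-to-14-day minimum matters: stopping early usually means stopping before enough traffic has accumulated.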
Once the experiment is set up, an algorithm should direct half of the traffic to one version of the site and the other half to the other. The team should also run an A/A test, comparing the results of two identical versions of the site: if those results match, the data is reliable and A/B testing can be conducted. If the A/A test comes back dramatically off, uxdesign.cc suggests not moving forward, because the data would be unreliable.
I looked at two studies where A/B testing was conducted.
A/B testing was used to choose between two options for Sanoma Entertainment Oy’s websites. Specifically, the researchers looked at how advertising performance could be improved, whether through banners or some other variation. They determined that adding a single word to the header got a better response than any other option, and they state that “A/B testing is not only a great way to improve conversion rates or the performance of a website, but also fairly easy and surprisingly exciting.”
The use of A/B testing was also examined in a study on researchgate.net, which asked whether the internet connectivity of client software, websites, and other services provides solid ground for the testing methodology. The study found that running experiments online makes them easier to scale up or down and lets teams evaluate ideas more quickly.
Overall, I can see A/B testing being useful for understanding how users behave on a website before doing a redesign. For my project this semester, I am redesigning a local news website; if I had time, A/B testing would be worth the effort.