Many companies today continuously run A/B tests and make changes to their sites in the hope of improving the experience and increasing conversions.
When we work with different companies, we often see similar problems. From a technical perspective, it has become quite easy to run A/B tests and make changes on sites, so A/B tests can be done continuously. This is, of course, positive, but the results rarely live up to expectations. One of the most common reasons is that the hypothesis being tested is based on opinions or gut feelings. If the hypothesis is not based on data from the company's own customers, the results will generally not improve the experience for those customers. The hope of quick wins often leads to weak results, mainly because the important preliminary work has been deprioritized.
The problem with deprioritizing the preliminary work is that it often leads to conclusions drawn on faulty grounds, which wastes both time and money. For example, you might conclude that a certain page or flow cannot be improved further, which is rarely true.
The difference between a hypothesis and an idea is that a hypothesis has a known origin (an insight from data), defines a solution, and points to an expected result. For example: because usability tests show that visitors do not understand what the button does (insight), rewording the button in the user's own language (solution) will increase clicks on it (expected result).
Below are five things you must do to create better hypotheses and gain valuable insights for increased conversion.
It is easy to fall back on your own ideas for improvements, so remind yourself of this constantly: start each hypothesis objectively from a blank slate and assume you know nothing about why your users behave the way they do.
Many companies think from the inside out when it comes to content on their site. They make rational arguments about how the user should navigate. But users rarely do the most rational thing! Most are emotionally driven and make quick, unconscious decisions. This is partly because that is how our brain works, but it also often depends on the situation the user is in when they visit a site. They might be doing something else at the same time as visiting your site or have distractions around them. There is no time to explore and search for what they are looking for!
It is therefore almost impossible to reason your way to a solution, because there is probably no rational decision behind the behavior. When you realize that we all operate on autopilot - that users make quick, irrational decisions - you have come a long way toward creating better hypotheses.
It is an excellent idea to test a hypothesis through an A/B test, but to draw data-driven conclusions from one, more focus on the preliminary work is needed, as mentioned earlier. The preliminary work is what keeps the hypothesis from being based on gut feelings or tips you happened to pick up. This may seem like it takes more time, but in total input versus output, you will save that time many times over. Most importantly, it ultimately contributes to increased conversion and an improved customer experience grounded in actual customer behavior.
First and foremost, find out which page or flow on the site has the biggest problems. Start by looking up data in existing tools you have access to - preferably both a web analytics tool (e.g., Google Analytics or Adobe Analytics) and a heatmap tool (e.g., Hotjar or Crazyegg) that can help answer what is converting poorly on the site.
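If your analytics tool can export page-level data, even a small script can surface the weakest pages. Below is a minimal sketch, assuming a hypothetical CSV export with page, sessions, and conversions columns - the file name, column names, and traffic threshold are illustrative, not tied to any specific tool:

```python
# Minimal sketch: rank pages by conversion rate from an exported
# analytics report. File name and column names are hypothetical -
# adapt them to what your web analytics tool actually exports.
import csv

pages = []
with open("page_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions = int(row["sessions"])
        conversions = int(row["conversions"])
        if sessions >= 500:  # skip low-traffic pages, where rates are mostly noise
            pages.append((row["page"], conversions / sessions, sessions))

# Lowest conversion rate first: these pages are candidates for deeper
# qualitative research - not yet for A/B tests.
for page, rate, sessions in sorted(pages, key=lambda p: p[1])[:10]:
    print(f"{page}: {rate:.2%} ({sessions} sessions)")
```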
A heatmap tool shows how users move on the site - both how they click and scroll. It is a great complement to web analytics tools because it can help determine which data to look at and where to set up tracking in the future. But this data does not answer why users have a certain behavior.
To answer the question of why, a more qualitative method than a heatmap tool is required. To obtain qualitative data, we at Conversionista use usability tests. The method is very valuable for gaining insights into motivation, opinions, and needs. Compared to A/B tests, this is something fewer companies do; it can be harder to see it as well-invested time because the results are not as black and white as those of an A/B test.
A common problem is that product development often THINKS it knows more about its users than it actually does.
Let's take an example:
An e-commerce company has seen in its quantitative data (e.g., Google Analytics) that very few people click on their "buy button." They have previously tested different texts on the button, but nothing improves the conversion.
The quantitative data answers where and what the problem is, but to find out why users are not clicking the buy button, qualitative data is needed. This is a perfect opportunity to conduct usability tests, because they can reveal exactly that. A classic mistake is that the text on the button reflects internal jargon rather than the user's own language. Based on the qualitative data from the usability tests, the company can then create a data-driven hypothesis that it can A/B test.
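Once such an A/B test has run, the outcome can be checked with a standard two-proportion z-test (most testing tools do this for you). Here is a minimal, self-contained sketch; the visitor and click counts are made-up illustrations:

```python
# Minimal sketch: two-proportion z-test for an A/B test on a buy button.
# The visitor/click counts below are made-up illustrations.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error under H0
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))               # two-sided p-value
    return p_a, p_b, z, p_value

# A: original button text, B: text reworded from usability-test insights
p_a, p_b, z, p_value = two_proportion_z_test(conv_a=120, n_a=4000,
                                             conv_b=165, n_b=4000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be chance; note that the test says nothing about why the new text works - that is exactly what the usability tests are for.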
(Read more about usability tests here)
It is easy to get stuck in the details, as in the example above with the text on a button. Sometimes it can be a small change that is needed to simplify things for users, but often it is a completely different problem. It is important to lift your gaze to see the overall user experience for a specific flow or page.
Let's take an example of where it can go very wrong:
Another e-commerce company thinks the problem is the text on a buy button, but in reality visitors do not understand the product or the purpose of the page at all, and they give up. Because visitors do not understand the product, they are not interested in taking any action - regardless of what text is on the buy button. The company runs several A/B tests but eventually gives up, settles for a low conversion rate on that page, and shifts focus to something else it thinks visitors are more interested in. The company has then spent a lot of time on something that yielded neither results nor new insights into users' needs. This is one of the risks when a data-driven hypothesis based on both qualitative and quantitative data is not created.
In this example, a usability test could provide very valuable information - for instance, whether users understand the product, what they think the page is for, and what holds them back from taking action.
In a usability test, you not only get answers to the questions asked during the test, but also the user's reactions while navigating the page. By watching their behavior and facial expressions and reading how they move through the site, we can gain very valuable insights from which to create a data-driven hypothesis. At Conversionista, we also use eye-tracking as a complement to what the test participant tells us during the interview, but it does not replace the questions.
It is important to understand that data-driven hypotheses can also be created and tested while a digital product is being developed. Before launch it is difficult to obtain quantitative data, but it is perfectly possible to obtain qualitative data by conducting usability tests on prototypes. By running usability tests continuously early in the development process, the product will be far more user-friendly and convert better when it is finally launched.
Such research is usually divided into three stages:
Exploratory - What should we actually do?
Formative - Are we heading in the right direction?
Summative - Did it work?
Define which stage your product development is in to determine which approach suits you. Hypotheses can be used for more than just A/B tests; use them to structure your research as well. Whichever stage you are in, you increase your chances of improving the user experience on your site if you learn to become really good at conducting a usability test. Here you can learn how to become a better test leader.
Good luck with creating strong hypotheses!
To ensure you have created a strong hypothesis, you can always use our hypothesis creator.