We've been running an A/B test for a while at CloudApp, and the results have been unexpected (in a bad way). No matter what tweaks we made or how we sliced the data, the results for the variant weren't matching up with our hypothesis.
We couldn't understand why things weren't going the way we had expected. We had made the change to improve a specific activation metric, but instead it was driving that metric down.
Then we looked back at our original assumptions.
We looked at which assumptions had led us to believe the experiment would be a success. Why had we originally thought the change was going to be a good one? Then we started poking holes in those assumptions and came away with some good insights.
We had assumed that users were at a different stage of their journey when they hit the experiment than they probably were in reality. So even though we thought we were communicating the value of our product better than before, users weren't yet in the right mindset to understand it.
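One concrete way to pressure-test that kind of assumption is to slice the same results by where the user actually is in their journey. Here's a minimal sketch, assuming the experiment data sits in a pandas DataFrame; the "variant", "lifecycle_stage", and "activated" column names are hypothetical, not from our actual setup.

```python
import pandas as pd

# Hypothetical experiment data: one row per user in the test.
df = pd.DataFrame({
    "variant": ["control", "treatment", "control", "treatment", "treatment", "control"],
    "lifecycle_stage": ["new", "new", "activated", "activated", "new", "new"],
    "activated": [0, 0, 1, 1, 0, 1],
})

# Activation rate per variant, broken down by the stage we assumed users were in.
# If the treatment only underperforms for "new" users, the stage assumption is suspect.
rates = (
    df.groupby(["lifecycle_stage", "variant"])["activated"]
      .mean()
      .unstack("variant")
)
print(rates)
```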
Only then, after we had examined our assumptions and challenged them, were we able to understand why our expectations didn't match up with reality.
When your expectations and reality don't match up, challenge your assumptions. Ask why you assumed what you did. What led you to those assumptions in the first place? Then challenge them: play devil's advocate, ask a whole lot of "What if?" questions, and dig deeper.
Sometimes (usually) it's your assumptions, and not reality, that are wrong.