Because clickstream data is so readily available, we can easily measure and quantify users’ behaviors on websites, apps, and so on. So is there any point in conducting surveys to get inside users’ heads and measure their attitudes and preferences? After all, the best predictor of behavior is past behavior, not self-reported attitudes.
Surveys can nonetheless answer questions about our users that clickstream data cannot. But those answers are only as good as the data behind them.
When it comes to surveys, good data quality encompasses many things, like a high response rate (see blog post here). But it also requires representative sampling. The sample should be a chip off the old block: just like your user base as a whole in composition, only smaller.
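One quick way to see whether a sample is "a chip off the old block" is to compare respondent composition against known shares of your user base. A minimal sketch, using hypothetical segment names and numbers:

```python
# Hypothetical check of sample representativeness: compare respondent
# composition against known population shares of the user base.
user_base = {"mobile": 0.70, "desktop": 0.30}  # assumed known population shares
sample = {"mobile": 310, "desktop": 290}       # hypothetical respondent counts

n = sum(sample.values())
sample_share = {g: c / n for g, c in sample.items()}
gaps = {g: sample_share[g] - user_base[g] for g in user_base}

for g in user_base:
    print(f"{g}: sample {sample_share[g]:.1%} vs "
          f"population {user_base[g]:.0%} (gap {gaps[g]:+.1%})")
```

Here mobile users make up 70% of the (hypothetical) user base but only about half the respondents, so the sample is not a miniature of the whole, and any topline number will lean toward desktop users' opinions.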
Some people believe that “bigger is better” when it comes to sample size – the more survey respondents you have, the more trustworthy your results.
True, a bigger sample gives you more precise estimates, which is necessary for your results to be trustworthy. It also gives you more statistical power to detect differences between an estimate and a benchmark, or between control and treatment groups.
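To make the precision point concrete, here is a small sketch of the standard 95% margin of error for an estimated proportion under simple random sampling. The sample sizes and the p = 0.5 worst-case proportion are illustrative, not from the post:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error shrinks with the square root of n: to halve it,
# you need four times as many respondents.
for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: +/- {margin_of_error(0.5, n):.3f}")
```

With 100 respondents the estimate carries roughly a ±10 percentage-point margin; at 10,000 it is about ±1 point. But note the square-root relationship: the gains from adding respondents flatten out quickly.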
But a bigger sample is necessary, not sufficient, for results to be trustworthy. You also need to correct for nonresponse error: the bias in survey results that arises when non-respondents differ systematically from respondents.
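One common correction is post-stratification weighting: upweight respondent groups that are underrepresented relative to their known population shares, and downweight the overrepresented ones. A minimal sketch with hypothetical segments and satisfaction scores:

```python
# Post-stratification weighting sketch (all numbers hypothetical).
# Power users over-respond, so the unweighted mean overstates satisfaction.
population_share = {"new_users": 0.60, "power_users": 0.40}
respondents = {"new_users": 200, "power_users": 800}   # who actually answered
satisfaction = {"new_users": 3.2, "power_users": 4.5}  # mean rating per group

total = sum(respondents.values())
sample_share = {g: n / total for g, n in respondents.items()}

# Weight = population share / sample share, so the weighted sample
# matches the population composition.
weights = {g: population_share[g] / sample_share[g] for g in respondents}

unweighted = sum(satisfaction[g] * respondents[g] for g in respondents) / total
weighted = sum(satisfaction[g] * weights[g] * respondents[g]
               for g in respondents) / total

print(f"unweighted mean: {unweighted:.2f}")
print(f"weighted mean:   {weighted:.2f}")
```

In this toy example the unweighted mean is 4.24, but once new users are weighted up to their 60% population share, the estimate drops to 3.72: a large gap, purely from who chose to respond. No sample size fixes this; a million respondents with the same skew give the same biased answer.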
#1 – Only survey those you want to survey.
- I recently got a survey asking me to rate my satisfaction with one of the website’s products even though I had never used it; I’d only read about the product on their website.
- In a user satisfaction survey, only survey those who’ve used your product, or at least include a response option like “Not applicable.” Better yet, survey those who’ve used your product several times, so they have a more informed opinion.
- Sounds obvious, but the website that made this mistake is arguably the most well-known website in the world. Sometimes it’s the obvious that gets overlooked.