Some people believe that “bigger is better” when it comes to sample size – the more survey respondents you have, the more trustworthy your results.
True, a bigger sample gives you more precise estimates, which is necessary for your results to be trustworthy. It also gives you more statistical power to detect differences between an estimate and a benchmark, or between control and treatment groups.
But a bigger sample is necessary, not sufficient, for results to be trustworthy. You also need to correct for nonresponse error: the bias in survey results that arises when non-respondents have different characteristics from survey respondents.
Nonresponse error is a principal cause for concern because it is often ignored in survey research. Either people don’t know about it, or they mistakenly believe that it’s only a problem for small samples. But nonresponse error is unaffected by sample size – adding more respondents with the same bias does not remove the bias. Its only remedy is boosting the response rate.
Correcting for nonresponse error requires a minimum response rate of 30% (Hill et al., 2007), but most surveys have response rates of 10% or less. Coming up with an effective way of boosting response rate will require experimentation to find out what works and what doesn’t. Here are some suggestions for experimentation:
(1) Incentives
There are many kinds of incentives you can offer for survey participation. Test which one is most effective at boosting response rate.
– You can test the effectiveness of incentives that do not have a per-unit cost (e.g. a one-month free subscription).
– You can test whether having a grand prize draw or giving out smaller incentives is more effective.
– You can also give respondents the choice of donating to a charity instead.
Run controlled experiments to see which incentive yields the best response rate per dollar spent. This is a worthwhile investment if you’re conducting ongoing surveys (e.g. to track user satisfaction).
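As a minimal sketch of how to analyze such an experiment, the snippet below compares two incentive arms with a two-proportion z-test and then computes responses per dollar. All counts, costs, and arm names are hypothetical, and the 30% benchmark and test choice are assumptions, not part of any specific survey platform.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_prop_ztest(resp_a, n_a, resp_b, n_b):
    """Two-sided z-test for a difference in response rates.

    resp_*: completed surveys; n_*: invitations sent.
    Returns (rate_a, rate_b, p_value)."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, 2 * (1 - norm_cdf(abs(z)))

def responses_per_dollar(responses, total_cost):
    """Cost-effectiveness: completed surveys per dollar of incentive spend."""
    return responses / total_cost

# Hypothetical arms: one grand-prize draw vs. small per-respondent gift cards
rate_draw, rate_small, p = two_prop_ztest(120, 1000, 160, 1000)

# Hypothetical costs: one $500 prize vs. 160 x $5 gift cards
draw_rpd = responses_per_dollar(120, 500)
small_rpd = responses_per_dollar(160, 160 * 5)
```

Note that the two metrics can disagree: in the hypothetical numbers above, the gift cards win on response rate while the prize draw wins on responses per dollar, which is why both should be tracked.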
(2) Email invitation
Test different versions of the email invitation to see which is most effective at getting users to click on the survey link (i.e. yields the highest click-through rate). If you’re offering incentives, the incentive description belongs here, so experiment with how you describe and present them. Also test whether the day and time the invitation is sent matters – weekdays vs. weekends, morning vs. evening.
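One way to compare invitation variants without over-reading noise is to report each variant’s click-through rate with a confidence interval. The sketch below uses a 95% Wilson score interval; the variant names and counts are hypothetical.

```python
from math import sqrt

def wilson_ci(clicks, sends, z=1.96):
    """95% Wilson score confidence interval for a click-through rate."""
    p = clicks / sends
    denom = 1 + z**2 / sends
    center = (p + z**2 / (2 * sends)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / sends + z**2 / (4 * sends**2))
    return center - margin, center + margin

# Hypothetical send-time variants: (clicks, emails sent)
variants = {
    "weekday_morning": (85, 1000),
    "weekday_evening": (110, 1000),
    "weekend_morning": (70, 1000),
}
for name, (clicks, sends) in variants.items():
    lo, hi = wilson_ci(clicks, sends)
    print(f"{name}: CTR={clicks / sends:.1%}, 95% CI=({lo:.1%}, {hi:.1%})")
```

If two variants’ intervals overlap heavily, keep the test running rather than declaring a winner.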
(3) Landing page of survey
Again, play around to see what would make it appealing for users. In general, users who click on your survey link want to take your survey right away, so your landing page should already have survey questions, not lengthy instructions.
(4) Email reminders
Email reminders will boost response rate, but they can also increase unsubscribes, so track the number of unsubscribes to assess whether reminders are doing more harm than good in the long run. In general, you only want to send one or two email reminders.
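The harm-vs-good tradeoff above can be made concrete with a back-of-the-envelope calculation: responses gained now, weighed against future responses lost from unsubscribed users. Every number and parameter below is a hypothetical assumption you would replace with your own data.

```python
def reminder_net_value(extra_responses, unsubscribes,
                       value_per_response, future_surveys,
                       base_response_rate):
    """Rough net value of one reminder wave (all inputs are assumptions).

    Gain: extra completed surveys this round.
    Loss: unsubscribed users can no longer respond to future surveys."""
    gain = extra_responses * value_per_response
    expected_lost_responses = unsubscribes * future_surveys * base_response_rate
    loss = expected_lost_responses * value_per_response
    return gain - loss

# Hypothetical scenario: a reminder adds 80 responses but causes 30 unsubscribes;
# each response is worth $10, with 4 more surveys planned at a 15% base rate.
net = reminder_net_value(80, 30, 10.0, 4, 0.15)
```

A positive net value suggests the reminder is worth it for ongoing survey programs; a second or third reminder typically adds fewer responses per unsubscribe, which is consistent with stopping after one or two.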
(5) Make your survey friendly for all devices and browsers.
You need to make it convenient for users to take the survey on mobile or tablet. Make sure no device or browser has a significantly higher bounce rate than the others.
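A simple way to operationalize that check is to compare each device/browser segment’s bounce rate against the overall rate and flag outliers. The segment names, counts, and the 1.5x threshold below are hypothetical assumptions.

```python
def flag_high_bounce(segments, threshold=1.5):
    """Flag segments whose bounce rate exceeds the overall rate by a multiplier.

    segments: {name: (bounces, visits)}; threshold is an assumed cutoff.
    Returns (overall_rate, [(name, rate), ...])."""
    total_bounces = sum(b for b, v in segments.values())
    total_visits = sum(v for b, v in segments.values())
    overall = total_bounces / total_visits
    flagged = [(name, b / v) for name, (b, v) in segments.items()
               if b / v > overall * threshold]
    return overall, flagged

# Hypothetical traffic: mobile Safari bouncing far more than average,
# suggesting the survey renders poorly there.
segments = {
    "desktop_chrome": (50, 1000),
    "mobile_safari": (200, 800),
    "tablet_android": (30, 400),
}
overall, flagged = flag_high_bounce(segments)
```

Any flagged segment is a candidate for manual testing on that device or browser before the next survey wave.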
The goal is to get a 30% response rate. It will take time and effort, but boosting the response rate is the only way to correct for nonresponse error and therefore obtain more trustworthy results.
Reference: Hill, Nigel, Greg Roche, and Rachel Allen. 2007. Customer Satisfaction: The Customer Experience Through the Customer’s Eyes. Cogent Publishing.