Google has its own market research panel, the Screenwise Trends Panel, which lets Google monitor how panelists use their desktop browsers and mobile apps. According to Google, this panel will help answer questions such as:
- What different technologies do people own and what do they use them for?
- How does the arrival of new technology affect media use?
- Do people notice advertising, and how relevant or irrelevant do they find it?
- How popular are different media activities among different types of people?
These are great questions to explore, but they all pertain to people in general, and Google’s research panel may differ from people in general in a number of ways: (1) panelists probably like Google products more than most people do, and (2) they probably have fewer privacy concerns, since they are letting Google monitor their Internet activity and app usage. Given these differences, can Google’s research panel tell Google anything about people’s technology usage?
Conducting controlled experiments is the best way of determining whether a site or app redesign would improve key metrics. One barrier is the amount of time and resources it takes to run experiments: you may have a low-traffic site, you may want to detect small differences in key metrics (e.g., fractions of a percent), or you may simply want experiment results faster. Here are some suggestions on how to run experiments more efficiently.
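To see why small differences and low traffic are a barrier, it helps to look at the required sample size. Below is a minimal sketch of the standard two-proportion z-test sample-size formula; the function name and the baseline/lift values are illustrative, not from the original post.

```python
from statistics import NormalDist
from math import sqrt, ceil

def sample_size_per_arm(p_baseline, mde_abs, alpha=0.05, power=0.8):
    """Visitors needed per arm to detect an absolute lift of `mde_abs`
    on a baseline conversion rate, via the two-proportion z-test formula."""
    p1 = p_baseline
    p2 = p_baseline + mde_abs
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / mde_abs ** 2)

# Detecting a 0.2-point lift on a 5% baseline takes far more traffic
# per arm than detecting a 1-point lift:
print(sample_size_per_arm(0.05, 0.002))
print(sample_size_per_arm(0.05, 0.01))
```

Because the required sample grows with the inverse square of the detectable difference, halving the minimum detectable effect roughly quadruples the traffic you need.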
Because clickstream data is so readily available, we can easily measure and quantify users’ behaviors on websites, apps, and so on. So is there any point in conducting surveys to get inside users’ heads, to measure their attitudes and preferences? After all, the best predictor of behavior is past behavior, not self-reported attitudes.
Surveys can nonetheless answer questions about our users that clickstream data cannot. Here are some scenarios:
When it comes to surveys, good data quality encompasses many things, such as a high response rate (see our earlier post on response rates). But it also encompasses representative sampling. The data should be like a chip off the old block: just like your users as a whole in composition, but on a smaller scale.
Some people believe that “bigger is better” when it comes to sample size – the more survey respondents you have, the more trustworthy your results.
True, a bigger sample gives you more precise estimates, which is necessary for your results to be trustworthy. It also gives you more statistical power to detect differences between an estimate and a benchmark, or between control and treatment.
But a bigger sample is only necessary and not sufficient for results to be trustworthy. You also need to correct for nonresponse error, or the bias in survey results due to non-respondents having different characteristics from survey respondents.
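One common way to correct for this bias is post-stratification: weight each respondent group by its known share of the full user base rather than by how many responses it happened to contribute. The sketch below assumes a hypothetical survey where heavy users over-respond; the group names and numbers are made up for illustration.

```python
def poststratified_mean(responses, population_shares):
    """responses: {group: list of numeric answers from that group}
    population_shares: {group: that group's share of the full user base}
    Weights each group's mean by its population share instead of its
    (possibly skewed) share of respondents."""
    return sum(population_shares[g] * (sum(vals) / len(vals))
               for g, vals in responses.items())

# Hypothetical data: power users answer more often and rate higher,
# inflating the unweighted mean.
responses = {"power": [9, 8, 9, 9, 8, 9], "casual": [4, 5]}
shares = {"power": 0.2, "casual": 0.8}

raw_mean = sum(v for vals in responses.values() for v in vals) / 8
adjusted = poststratified_mean(responses, shares)
```

The raw mean is dominated by the six power-user responses; the adjusted estimate reflects that casual users are actually 80% of the user base.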
Imagine a company that sells a line of products and services. This company will likely have multiple goals for its website:
- to sell its products and services online
- to collect user information for sales prospects
- to drive brand awareness and loyalty
- to provide online support for existing customers
Let’s say the company has identified 20 KPIs (Key Performance Indicators) that measure the success of these four goals, and it is committed to optimizing conversion on these goals by running many experiments. Should the company launch the treatment if some KPIs perform better than the control (i.e., the original site) but others perform worse?
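One common way to resolve mixed results like this is to roll the KPIs up into a single weighted score, sometimes called an Overall Evaluation Criterion, with weights reflecting each goal's business importance. The sketch below is illustrative: the KPI names, lifts, and weights are assumptions, not figures from the original post.

```python
def overall_lift(kpi_lifts, weights):
    """kpi_lifts: {kpi: relative change vs. control, e.g. 0.04 for +4%}
    weights: {kpi: importance weight; weights should sum to 1}
    A positive combined score argues for launching the treatment;
    a negative one argues for keeping the control."""
    return sum(weights[k] * lift for k, lift in kpi_lifts.items())

# Hypothetical mixed outcome: sales and leads up, brand and support down.
lifts = {"online_sales": 0.04, "leads": 0.01,
         "brand_visits": -0.02, "support_usage": -0.05}
weights = {"online_sales": 0.5, "leads": 0.2,
           "brand_visits": 0.1, "support_usage": 0.2}

score = overall_lift(lifts, weights)
```

The hard part, of course, is agreeing on the weights before the experiment runs, so that the launch decision isn't rationalized after the fact.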
To know how your website is doing, you need to define your Key Performance Indicators. If you’re running a blog (or some other content-publishing site) and your goal is to increase user engagement, your Key Performance Indicators may include number of visits, pageviews per visit, and visit duration.
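The three engagement KPIs named above can be computed directly from per-visit records. A minimal sketch, assuming each visit log entry carries a pageview count and a duration in seconds (the data layout and function name are hypothetical):

```python
def engagement_kpis(visits):
    """visits: list of (pageviews, duration_seconds) tuples, one per visit.
    Returns the three engagement KPIs: total visits, pageviews per visit,
    and average visit duration."""
    n = len(visits)
    total_pageviews = sum(pv for pv, _ in visits)
    total_duration = sum(dur for _, dur in visits)
    return {
        "visits": n,
        "pageviews_per_visit": total_pageviews / n,
        "avg_visit_duration_sec": total_duration / n,
    }

# Three sample visits:
visits = [(3, 120), (5, 300), (2, 60)]
kpis = engagement_kpis(visits)
```

Defining the KPIs as explicit computations like this also forces agreement on the details (e.g., whether duration is averaged per visit or per user) before anyone starts optimizing them.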