The Validity of Google’s Screenwise Trends Panel

Google runs its own market research panel, the Screenwise Trends Panel, which lets the company monitor how panelists use their desktop browsers and mobile apps. According to Google, the panel will help answer questions such as:

  • What different technologies do people own and what do they use them for?
  • How does the arrival of new technology affect media use?
  • Do people notice advertising, and how relevant or irrelevant do they find it?
  • How popular are different media activities among different types of people?

These are great questions to explore, but they all pertain to people in general, and Google’s research panel may differ from people in general in at least two ways: (1) panelists probably like Google products more than most people do, and (2) panelists probably have fewer privacy concerns, since they are willing to let Google monitor their Internet activity and app usage. Given these differences, can Google’s research panel tell Google anything about people’s technology usage?

A look at the history of market research helps answer the question. When online research panels were first introduced, many researchers were skeptical because many households did not yet have Internet access, so an online survey would exclude a large share of potential respondents. In contrast, almost everyone had a landline, so phone surveys were seen as superior because they had higher coverage. Although the issue of coverage error remains, responses from online panels have become widely accepted.

As it turns out, coverage error does not necessarily have serious implications in practice. Many researchers have conducted parallel phone and online surveys and found that responses from the two modes were strongly related across a number of topics, which suggests that coverage error does not necessarily bias results. Furthermore, a lot of research does not require full coverage to obtain valid results. Ad-testing and taste-testing studies often recruit participants from central locations such as shopping malls, with the implicit assumption that what study participants like, others will like as well. In experimental psychology, the large majority of research participants are psychology undergraduates, with the assumption that their psychological processes are the same as those of people in general. It is therefore reasonable to assume, for example, that if people on Google’s research panel use their mobile devices and their desktops for different purposes, other people do as well. Nonetheless, Google should compare the panel data with sources that have fuller coverage (e.g., digital analytics) to examine the impact of the coverage error in its research panel.
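As a rough illustration of why coverage error can be harmless, here is a minimal Python simulation; the coverage rate and usage rate below are hypothetical, not figures from Google. When panelists and non-panelists share the same underlying behavior on a metric, an estimate computed from panelists alone lands close to the population value even though most people are excluded.

```python
import random

random.seed(0)

# Hypothetical population: each person has some probability of using
# mobile mainly for messaging. Assume panelists and non-panelists share
# the same underlying rate (the "harmless coverage error" case).
P_MESSAGING = 0.62
POPULATION = 100_000
PANEL_COVERAGE = 0.05  # only 5% of people are reachable by the panel

people = [(random.random() < PANEL_COVERAGE,   # covered by the panel?
           random.random() < P_MESSAGING)      # uses mobile for messaging?
          for _ in range(POPULATION)]

def rate(group):
    """Share of a group that uses mobile mainly for messaging."""
    return sum(uses for _, uses in group) / len(group)

panel = [p for p in people if p[0]]
print(f"Population rate: {rate(people):.3f}")  # ~0.62
print(f"Panel rate:      {rate(panel):.3f}")   # ~0.62, despite 95% exclusion
```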

But coverage error does bias results when the people excluded differ from respondents on key metrics. A famous example is the Literary Digest poll of 1936, which tried to predict the winner of that year’s presidential election. The magazine predicted that Landon would win with 57% of the vote, yet Roosevelt won in a landslide. Literary Digest got it so wrong because of coverage error: its respondents were drawn mostly from affluent Americans, who tended to favor the Republican candidate (i.e., Landon). Applying this example to Google’s research panel, certain key metrics may not be valid because they are likely related to the coverage error. For example, the panel may not reflect the impact of new technology on people’s concern for privacy (since people on the panel likely have low privacy concerns) or how people respond to new Google products (since panelists probably like Google more than the average person).
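The mechanism can be made concrete: the bias of a covered-only estimate equals the excluded fraction times the difference between the covered and excluded groups on the metric. Here is a minimal Python sketch of the Literary Digest failure pattern; the group sizes and support rates are purely illustrative, not historical figures.

```python
# Coverage bias: bias = w_excluded * (mean_covered - mean_excluded).
# All numbers below are illustrative assumptions, not historical data.

# Hypothetical electorate: 30% affluent (covered by the magazine's
# mailing lists), 70% everyone else (excluded).
w_covered, w_excluded = 0.30, 0.70

# Hypothetical support for Landon in each group.
landon_covered = 0.57   # affluent voters lean Republican
landon_excluded = 0.30  # excluded voters lean Democratic

# True population support vs. what a covered-only poll reports.
true_support = w_covered * landon_covered + w_excluded * landon_excluded
poll_estimate = landon_covered  # the poll only reaches the covered group

bias = poll_estimate - true_support
# Equivalently: bias = w_excluded * (landon_covered - landon_excluded)
assert abs(bias - w_excluded * (landon_covered - landon_excluded)) < 1e-12

print(f"True support:  {true_support:.1%}")   # 38.1%
print(f"Poll estimate: {poll_estimate:.1%}")  # 57.0%
print(f"Coverage bias: {bias:+.1%}")          # +18.9%
```

Note that this bias term does not shrink with sample size: polling more people from the covered group leaves the estimate just as wrong.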
