Considering Consumer Research

September 28, 2020 | Letters

Author: Dr. Courtney Bir, Department of Agricultural Economics, Oklahoma State University 
[Photo: Attendees bid for items at an auction]

Mis/disinformation is everywhere, as we learned from Cami Ryan’s recent letter. There is certainly renewed interest in evaluating source material and drawing our own conclusions. When evaluating consumer studies (either the original research or popular press articles), there are a few things to keep in mind before drawing conclusions, or before accepting the conclusions others have drawn for you.

The first thing to consider is who is doing the research and/or who is presenting the findings. Even good research can be presented or interpreted poorly. Does the group presenting the findings or conducting the experiment have their own motives? Even trained researchers who are seeking unbiased answers have to take a number of precautions to mitigate biases and carefully consider how the question is framed. Be careful to consider the results and conclusions through multiple lenses.

There are many methods that can be used to elicit consumer preferences. They all have their own pros and cons and range from in-person experiments to scraping online data. When people think of in-person experiments, they often think of live auctions: a group of people is recruited to participate, and real money is exchanged for a product. The benefit of this type of experiment is that actual money and real products are used, which decreases hypothetical bias. Hypothetical bias occurs when the transaction is not ‘really happening’; it’s not a real purchase. People often have trouble internalizing hypothetical purchases and are likely to overstate their willingness to pay because they are not truly giving up money or seeing the impact on their overall budget.

There is no such thing as perfect. Potential for bias exists in basically all framings and methods. The job of a good researcher is to minimize biases and to be upfront about limitations. Cons of in-person experiments include the difficulty of recruiting a diverse or representative sample; cons of online recruitment include sampling bias against those without ready internet access. And you can’t sell a product that doesn’t exist, so new products or attributes that don’t yet exist in real life are poor candidates for real exchanges and must be evaluated hypothetically.

The sample is important. Does the sample studied mirror the population the researchers are drawing conclusions about? Did they include people from only a narrow income bracket but make conclusions about residents with widely varying incomes? Is the sample representative in geography, education and gender? There is a role for state-specific assessments, or convenience sampling, but it is important that researchers are up-front about the potential shortcomings.

Scanner data is an increasingly popular way to glean insight into the purchases of a large, diverse sample of shoppers, because actual purchases are recorded. You are unable to evaluate preferences for hypothetical products, since they don’t exist in the store; and while you know that someone purchased an item, you don’t know why. For example, I like Annie’s brand cheddar bunny crackers and often purchase them. So, the researcher knows my demographic information (assuming I reported my own information when signing up for the shopper card instead of my husband’s, and that I’m not using someone else’s card) and that I purchase cheddar bunnies. Do I choose Annie’s brand because they are organic? USDA certified? Have no artificial flavors, no synthetic colors and are made with real cheese? You get the point … the researcher doesn’t know. But for the record, I purchase them because they taste like real cheddar and are adorable.

Surveys solve some problems because they are very controlled. A good researcher will carefully construct questions or hypothetical experiments in an attempt to minimize biases and other survey-related issues. But surveys have their own challenges, like survey fatigue and non-response bias, that can negatively affect the quality of the results.

Sample size! How many people participated? You may not know how to calculate statistical power, but you know that if there are only five respondents, there aren’t enough people to draw conclusions.
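To make that rule of thumb concrete, here is a minimal sketch (not from the letter, and a simplification of a full power analysis): the standard normal-approximation formula for comparing two group means shows why five respondents is far too few. The effect size, significance level and power values below are conventional illustrative choices, not numbers from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sample comparison of means,
    via the normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" standardized effect (Cohen's d = 0.5) at the usual 5% significance
# level and 80% power requires roughly 63 people per group -- far more than five.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393 (small effects need even larger samples)
```

Smaller effects or stricter error rates push the required sample size up quickly, which is why a handful of respondents can rarely support a general conclusion.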

My take-home message for reading and interpreting consumer research is to 1) read carefully, 2) think critically about what can and cannot be said within the scope of the project and the sample size, and 3) consider the limitations (every study has them!), whether they arise from framing, the inherent beliefs of those conducting the research, or attempts (or lack thereof) to mitigate biases and other data quality problems. It takes many different approaches to answer even one narrow question, and consumers are always changing and evolving.

ConsumerCorner.2020.Letter.20