
Response Bias in Surveys: A Behavioral Science Perspective

  • Writer: Elena Talavera
  • Apr 10
  • 5 min read

Updated: Apr 10

When people answer surveys, they do not always respond honestly or accurately. Their answers can be skewed by subconscious biases, by flaws in survey design, or by the desire to appear favorable. Researchers call this "response bias": a systematic tendency to answer in ways that distort the true data (Tourangeau et al., 2000). Unlike random errors, these biases follow consistent patterns, skewing data everywhere from polls to product reviews to academic studies.


Main Types of Response Bias


The most common types of survey bias are as follows.


Social Desirability Bias

Rooted in self-presentation theory (Goffman, 1959), social desirability bias leads respondents to answer in ways they believe will be viewed favorably. Paulhus (1984) identified two components: self-deceptive enhancement (unconscious exaggeration of positive traits) and impression management (conscious tailoring of responses). For instance, respondents may underreport alcohol consumption by 20-30% in health surveys (Tourangeau & Yan, 2007).


Acquiescence Bias

According to Krosnick's (1991) satisficing theory, respondents adopt mental shortcuts when surveys are cognitively demanding. One such shortcut is acquiescence bias: the tendency to agree with statements regardless of their content, largely because agreeing is cognitively easier than critically evaluating each item (Krosnick & Berent, 1993).


Midpoint Bias

Respondents may also select neutral or midpoint options to reduce cognitive effort, especially in lengthy or complex surveys. This behavior is a way to avoid the mental strain of forming a definitive opinion and becomes more common as fatigue sets in (Krosnick, 1991).


Primacy & Recency Bias

The order in which response options are presented can also skew results. Primacy effects lead respondents to choose early-listed options, while recency effects cause a preference for later options. These patterns stem from limitations in memory and attention (Schwarz & Hippler, 1991). Respondents disproportionately remember and report recent events while underreporting distant ones (Tversky & Kahneman, 1973). In customer satisfaction surveys, experiences from the past week may be overrepresented while important older experiences are neglected (Menon, 1993).


Non-Response Bias

Groves and Couper (1998) demonstrated that non-respondents often differ systematically from participants across demographic and attitudinal variables. In organizational surveys, this can inflate employee satisfaction estimates by 10-15 percentage points when disengaged employees opt out (Rogelberg & Stanton, 2007).


Cultural Response Bias

Response styles differ systematically across cultural groups. For example, East Asian respondents show 30% higher rates of midpoint selection than North American respondents (Chen et al., 1995), while Mediterranean respondents exhibit 25% more extreme responding (Harzing, 2006).


Sponsorship Bias

Respondents may alter their answers based on the perceived survey sponsor. For example, one study found that when a survey was identified as coming from a pharmaceutical company, reports of medication side effects decreased by 40% (Singer et al., 1995).


Overclaiming Bias

Driven by overconfidence, approximately 30% of respondents claim familiarity with made-up concepts in order to appear knowledgeable (Paulhus et al., 2003), distorting the resulting data.


Mode Effect Bias

The survey medium itself affects response patterns. For instance, sensitive behaviors are reported 30% more often in web surveys than in phone interviews (Kreuter et al., 2008).


Approaches to Bias Mitigation


There are three main ways to mitigate survey bias.


Questionnaire Design Strategies


Research supports several evidence-based design solutions:


  • Indirect questioning reduces social desirability effects by 40% for sensitive topics (Tourangeau & Yan, 2007).

  • Unipolar scales (0-10) demonstrate 25% lower extreme responding than bipolar scales (Krosnick & Berent, 1993).

  • Item randomization eliminates order effects that can bias responses by up to 15% (Schwarz & Hippler, 1991); see the sketch after this list.

  • Finally, pilot testing a survey's content validity can uncover 85% of problematic items before full deployment (Willis, 2005).
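
To illustrate the item randomization point above, here is a minimal Python sketch of per-respondent shuffling. The item texts, function name, and respondent loop are illustrative assumptions, not part of any cited study.

```python
import random

# Hypothetical item bank; the question texts are illustrative only.
ITEMS = [
    "I am satisfied with my current role.",
    "My workload is manageable.",
    "I feel recognized for my contributions.",
    "I would recommend this organization to a friend.",
]

def randomized_questionnaire(items, seed=None):
    """Return one respondent's random ordering of survey items.

    Shuffling independently per respondent spreads primacy and
    recency effects evenly rather than letting them favor whichever
    options happen to sit first or last.
    """
    rng = random.Random(seed)
    order = items[:]  # copy so the master item bank stays intact
    rng.shuffle(order)
    return order

# Each respondent receives an independent ordering.
for respondent_id in range(3):
    print(respondent_id, randomized_questionnaire(ITEMS))
```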


Administration Methods


Web-based surveys with progress indicators have been shown to reduce break-off rates by 30% compared to paper surveys (Couper, 2008). However, telephone surveys yield 18% higher completion rates for sensitive topics when interviewers establish rapport (Groves et al., 2009). It is also important to monitor response patterns (e.g., straight-lining or speeding) in real time so that low-quality responses can be flagged before they contaminate the data, as in the sketch below.
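
As a concrete example of such monitoring, this sketch flags straight-liners (identical answers on every Likert item) and speeders (implausibly fast completion). The column names, sample data, and the 30%-of-median speed cut-off are assumptions made for illustration, not published standards.

```python
import pandas as pd

# Hypothetical response data: one row per respondent, Likert items
# q1..q5 plus completion time in seconds. All values are illustrative.
responses = pd.DataFrame({
    "q1": [4, 3, 5, 3], "q2": [4, 2, 5, 4], "q3": [4, 4, 5, 2],
    "q4": [4, 3, 5, 5], "q5": [4, 2, 5, 3],
    "duration_sec": [310, 295, 48, 280],
})
likert_cols = ["q1", "q2", "q3", "q4", "q5"]

# Straight-lining: the same answer on every item (one unique value per row).
responses["straight_lined"] = responses[likert_cols].nunique(axis=1) == 1

# Speeding: completion time far below typical; the 30%-of-median
# threshold is an assumed cut-off for this example.
speed_cutoff = 0.3 * responses["duration_sec"].median()
responses["speeder"] = responses["duration_sec"] < speed_cutoff

print(responses[["straight_lined", "speeder"]])
```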


Statistical Corrections


Post-hoc weighting techniques can reduce non-response bias by up to 50% when adequate auxiliary data exists (Little & Rubin, 2002). For acquiescence bias, confirmatory factor analysis with method factors accounts for 60-70% of variance in response styles (Billiet & McClendon, 2000). Finally, it is important to verify that instruments meet minimum statistical cut-offs, such as Cronbach's alpha above 0.7 for reliability or a comparative fit index (CFI) above 0.9 in confirmatory factor analysis.
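
As a rough illustration of these corrections, the sketch below computes post-stratification weights from known population shares and Cronbach's alpha for a small set of Likert items. The age groups, population shares, and sample data are assumed for the example; a full method-factor CFA would require a dedicated SEM package and is not shown here.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
    divided by the variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Reliability check against the alpha > 0.7 cut-off (toy data).
likert = pd.DataFrame({
    "q1": [4, 3, 5, 2, 4], "q2": [4, 2, 5, 2, 3], "q3": [5, 3, 4, 1, 4],
})
print(f"alpha = {cronbach_alpha(likert):.2f}")

# Post-stratification: reweight respondents so sample shares match
# known population shares. Groups and shares below are assumptions.
sample = pd.DataFrame({"age_group": ["18-34", "18-34", "35-54", "55+"]})
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)
print(sample)
```

Underrepresented groups receive weights above 1 and overrepresented groups weights below 1, so weighted estimates reflect the population rather than whoever happened to respond.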


Conclusion


Understanding response bias requires more than theory—it demands the right tools and insights. At the Center for Behavioral Decisions, we integrate psychometric rigor with behavioral science expertise to create validated instruments that minimize measurement error and deliver actionable insights. Let’s talk about how we can support your research goals. Reach out at hello@becisions.com to schedule a consultation.


References


Billiet, J. B., & McClendon, M. J. (2000). Modeling acquiescence in measurement models for two balanced sets of items. Structural Equation Modeling, 7(4), 608-628. https://doi.org/10.1207/S15328007SEM0704_5

Chen, C., Lee, S.-Y., & Stevenson, H. W. (1995). Response style and cross-cultural comparisons of rating scales among East Asian and North American students. Psychological Science, 6(3), 170-175. https://doi.org/10.1111/j.1467-9280.1995.tb00327.x

Couper, M. P. (2008). Designing effective web surveys. Cambridge University Press.

Goffman, E. (1959). The presentation of self in everyday life. Doubleday.

Groves, R. M., & Couper, M. P. (1998). Nonresponse in household interview surveys. Wiley.

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Wiley.

Harzing, A.-W. (2006). Response styles in cross-national survey research. International Journal of Cross Cultural Management, 6(2), 243-266. https://doi.org/10.1177/1470595806066332

Kreuter, F., Presser, S., & Tourangeau, R. (2008). Social desirability bias in CATI, IVR, and web surveys. Public Opinion Quarterly, 72(5), 847-865. https://doi.org/10.1093/poq/nfn063

Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213-236. https://doi.org/10.1002/acp.2350050305

Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 37(3), 941-964. https://doi.org/10.2307/2111577

Little, R. J., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). Wiley.

Menon, G. (1993). The effects of accessibility of information in memory on judgments of behavioral frequencies. Journal of Consumer Research, 20(3), 431-440. https://doi.org/10.1086/209361

Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46(3), 598-609. https://doi.org/10.1037/0022-3514.46.3.598

Paulhus, D. L., Harms, P. D., Bruce, M. N., & Lysy, D. C. (2003). The over-claiming technique: Measuring self-enhancement independent of ability. Journal of Personality and Social Psychology, 84(4), 890-904. https://doi.org/10.1037/0022-3514.84.4.890

Rogelberg, S. G., & Stanton, J. M. (2007). Understanding and dealing with organizational survey nonresponse. Organizational Research Methods, 10(2), 195-209. https://doi.org/10.1177/1094428106294693

Schwarz, N., & Hippler, H. J. (1991). Response alternatives: The impact of their choice and presentation order. In P. Biemer, R. Groves, L. Lyberg, N. Mathiowetz, & S. Sudman (Eds.), Measurement errors in surveys (pp. 41-56). Wiley.

Singer, E., Von Thurn, D. R., & Miller, E. R. (1995). Confidentiality assurances and response: A quantitative review of the experimental literature. Public Opinion Quarterly, 59(1), 66-77. https://doi.org/10.1086/269465

Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge University Press.

Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859-883. https://doi.org/10.1037/0033-2909.133.5.859

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232. https://doi.org/10.1016/0010-0285(73)90033-9

Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Sage.


