One medical anthropologist in Malawi encountered a grey area in the self-reported data gathered by a knowledge, attitude and practice (KAP) survey. The survey, which investigated issues of malaria during pregnancy, was administered to 248 respondents; it was complemented with qualitative research, including interviews, focus groups and participant observation.
A notable difference surfaced between the quantitative and qualitative results on one particular line of inquiry. The survey asked about the quality of service at a local antenatal clinic, and recorded responses were largely positive. During in-depth interviews, however, mothers voiced criticisms of the clinic’s services.
The researcher offered a hypothesis to explain the discrepancy: perhaps mothers assumed that the survey was being conducted on behalf of the health center itself, and that a negative response might affect the treatment they would receive in the future. More generally, the researcher posited, Malawians are simply “a polite people” and “dislik[e] the idea of conflict.” In the absence of further probing, it would be unsurprising for these mothers to choose the ‘kinder’ response. Respondents were demonstrating social desirability bias: the tendency to say what we think others want to hear.
If the researcher had taken the survey results at face value, a program might have focused on other issues on the assumption that mothers were happy with the services. What people say isn’t always what they think. Uncovering the determinants of attitudes and behaviors requires continuous digging: approaching questions from multiple angles and calling surface-level, initial responses into question to see what is buried beneath.