News Items from UNC Greensboro

A fall 2016 Research Magazine article

Fifteen years ago, Professor Sat Gupta brought up his favorite subject, RRT survey sampling, in his introductory statistics class. RRT, or Randomized Response Technique, is a practical approach to a common dilemma in survey sampling — the possibility that a respondent might lie.

“A face-to-face survey may lead to serious social desirability response bias,” explained Gupta. “It’s the tendency in respondents to give socially acceptable responses rather than true responses.”

RRT reduces that tendency in survey participants by allowing them to scramble their responses and maintain their privacy. This is particularly helpful, Gupta told the class, with embarrassing survey questions, like “Have you ever had an abortion?”

Suddenly, a student stood up and asked, “What makes you think that a woman would be ashamed of having an abortion?” Gupta was taken aback — and then inspired.

He realized that researchers had been limiting themselves with RRT by making assumptions about what participants would and would not find sensitive. What researchers needed was an optional RRT model.

A new model

In a commonly used RRT model, a researcher might have a participant draw a card from a deck. Some of the cards display the number 0, some display 1, some -1, and so on. The participant is instructed to add the number on the card to their answer to a question — for example, “How many sexual partners have you had?” The participant is able to respond without fear of judgment because the researcher doesn’t know what is on the card they have drawn and has no way to unscramble their individual answer.

However, the researcher does know what cards are in the deck — both the type of cards and how many. So the researcher knows the probability that a participant is adding 1 to their answer, or -1, etc. Using that probability information, the aggregate answers provided by the survey participants, and sophisticated statistical modeling, the researcher can estimate the surveyed group’s average answer to the question of interest.
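The recovery step described above can be sketched in a short simulation. The deck composition and answer range below are illustrative assumptions, not from Gupta’s papers: with a deck whose cards average to a known value, subtracting that expected card value from the mean of the scrambled responses recovers the group mean.

```python
import random

random.seed(42)

# Illustrative deck: equal numbers of -1, 0, and +1 cards, so the
# expected value of a drawn card is 0.
deck = [-1, 0, 1]
expected_card = sum(deck) / len(deck)

# Hypothetical true answers for 10,000 respondents.
true_answers = [random.randint(0, 10) for _ in range(10_000)]

# Each respondent draws a card and reports (true answer + card value);
# the researcher never sees which card any individual drew.
reported = [a + random.choice(deck) for a in true_answers]

# Knowing the deck's composition, the researcher subtracts the
# expected card value from the mean of the scrambled responses.
estimated_mean = sum(reported) / len(reported) - expected_card
true_mean = sum(true_answers) / len(true_answers)
```

No individual answer can be unscrambled, yet the estimated mean lands very close to the true mean, because the card noise averages out over many respondents.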

In Gupta’s Optional RRT model, the participant has an additional choice if they don’t find the research question embarrassing. They can draw the card, ignore its contents, and provide a straightforward answer to the researcher’s question. The researcher will not know whether a particular participant provided an unscrambled response. However, the pool of survey answers now contains unscrambled responses as well as scrambled ones, which, with the correct modeling, allows the researcher to estimate the average response to the research question with greater accuracy.
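The efficiency gain can be illustrated with a simplified sketch. The scrambling probability (40%), deck values, and answer range below are assumptions chosen to make the effect visible; Gupta’s actual optional models are more sophisticated (for instance, they can also estimate how sensitive respondents find the question). With a zero-mean deck, the sample mean stays unbiased either way, but surveys where fewer people scramble produce a less noisy estimate:

```python
import random
import statistics

random.seed(0)
deck = [-3, 0, 3]  # zero-mean deck; the wide spread makes the effect visible

def survey_mean(n, w):
    """One survey of n respondents, each scrambling with probability w.
    With a zero-mean deck, the plain sample mean remains unbiased."""
    true_answers = [random.randint(0, 10) for _ in range(n)]
    reported = [a + random.choice(deck) if random.random() < w else a
                for a in true_answers]
    return sum(reported) / n

# Repeat many surveys to compare the spread of the two estimators.
full = [survey_mean(200, 1.0) for _ in range(2000)]      # everyone scrambles
optional = [survey_mean(200, 0.4) for _ in range(2000)]  # only 40% scramble

var_full = statistics.variance(full)
var_optional = statistics.variance(optional)
```

In this toy setup `var_optional` comes out smaller than `var_full`, mirroring the paper’s finding that optional models are more efficient than their non-optional counterparts.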

Seminal work

Gupta’s 2002 publication on Optional RRT became a landmark paper in the field.

“We proved that optional models are more efficient than their non-optional counterparts,” explains Gupta. “This idea has become very popular and a lot of papers have been written based on this idea.” In fact, the paper has been cited more than 100 times.

With more than 25 papers on this topic, Gupta has continued to refine the Optional RRT model. His recent work centers on unifying Optional RRT with the use of auxiliary variables. In the latest model, researchers collect sensitive information from participants using Optional RRT, but they also gather secondary, non-sensitive information. The trick? The secondary information — for example, responses to “How many relationships have you had?” — is statistically correlated with answers to the primary, sensitive question.
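One standard way an auxiliary variable helps is ratio estimation, sketched below. This is a generic illustration, not Gupta’s specific model: all numbers (the auxiliary variable’s known population mean, the strength of the correlation, the deck) are invented. Because the non-sensitive variable is observed directly and tracks the sensitive one, adjusting the scrambled mean by the auxiliary variable’s sample-to-population ratio cancels much of the sampling noise:

```python
import random
import statistics

random.seed(1)
deck = [-1, 0, 1]   # zero-mean scrambling deck
MU_X = 5.5          # assume the auxiliary variable's population mean is known

def survey(n=200):
    """One survey: return (plain, ratio) estimates of the sensitive mean."""
    x = [random.randint(1, 10) for _ in range(n)]       # non-sensitive answers
    y = [2 * xi + random.gauss(0, 1) for xi in x]       # correlated sensitive values
    reported = [yi + random.choice(deck) for yi in y]   # researcher sees only these
    ybar, xbar = sum(reported) / n, sum(x) / n
    return ybar, ybar * MU_X / xbar                     # plain vs ratio estimate

plain, ratio = zip(*(survey() for _ in range(500)))
var_plain = statistics.variance(plain)
var_ratio = statistics.variance(ratio)
```

When the two variables are strongly correlated, as here, the ratio-adjusted estimate is far less variable than the plain scrambled mean — the kind of accuracy gain each refinement of the model delivers.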

Each evolution of the model brings researchers greater accuracy. Gupta’s impact is felt not just in his field but in every field using survey sampling as a tool.

“Survey Says,” by Anna Warner and Sangeetha Shivaji, originally appeared in the fall 2016 Research Magazine
