TAKING THE STRESS OUT OF POLITICAL MESSAGE TESTING POLLS

The University of Virginia Center for Politics recently sponsored a study of message testing polls, as part of its commitment to ensuring that “Politics is a good thing!” As a service to our Crystal Ball readers, we now present the preliminary results of this research.

As we approach another election season, there is the possibility that you will receive a telephone call and be asked to participate in a political message testing survey. Campaigns will present both negative and positive views about the candidates to determine what messages might change your vote. If you find this annoying, you will not be alone. Political message testing polls are potentially problematic. They generate complaints from respondents, negative attention in the popular press, denunciations from political opponents and operatives, and criticism from academic and media pollsters.

Political message testing polls are frequently confused with “push polls” (high-volume political advocacy calls conducted under the guise of a survey), a practice that is condemned by industry organizations such as the American Association for Public Opinion Research (AAPOR), the National Council on Public Polls (NCPP), and the American Association of Political Consultants (AAPC). These organizations routinely receive complaints about “push polls” that turn out, on closer examination, to be message testing polls.

Some observers have suggested that message testing polls also present ethical issues, in that they may not always adequately secure informed consent from respondents. There is evidence that some such polls generate large numbers of break-offs from annoyed respondents, a pattern which could threaten the accuracy of the collected data, and suggests (along with the complaints and press criticisms) that significant numbers of respondents are unsatisfied with their experience. Any practice that leaves large numbers of respondents unhappy is obviously a concern to the survey industry generally.

Our research was predicated on the idea that, if properly designed, legitimate message testing polls could be carried out with fewer negative consequences for respondents, researchers, and the industry as a whole. It is probable that some of the best practices in this area are already in use by certain firms, while others use practices that have greater potential for negative consequences. In brief, it was our goal to start identifying design features that would mitigate problems with respondent reactions.

In December 2006, partners from a number of leading political polling firms participated in a conference call with the senior author (who was AAPOR Standards Chair at the time) to discuss these issues. These practitioners affirmed that message testing is a vital part of any serious campaign consultancy. They believe strongly in the legitimacy of the technique in general, and are at pains to distinguish legitimate message testing from “push polls.” Consensus emerged that any proposal to change practice in this area would have to be based on scientific evidence that shows some design features to be both effective and relatively free of negative outcomes.

With the support of a grant from the Center for Politics, we undertook a first foray into this kind of research. The vehicle was a message testing poll about the generic 2010 Congressional election. (A generic poll does not name specific candidates, asking instead if the voter favors ‘the Democratic candidate’ or ‘the Republican candidate.’) Three features of the questionnaire were manipulated in a large (n = 2,500) Internet survey fielded (as a service to AAPOR) by Polimetrix/YouGov in March 2009. The survey instrument was developed to be as realistic as possible, with genuine, contemporary partisan messages about Democrats and Republicans in Congress. The instrument also included a closing battery of questions in which respondents assessed the fairness and believability of the questions, whether they felt fully informed about the interview, their degree of concern with partisan use of the survey results, whether the interview was comfortable or stressful, and their willingness to be interviewed in the future. We are able to assess the respondent experience by looking at how these questions were answered under different questionnaire designs or ‘treatments.’

The experiment used a full factorial design with thirty-six treatments, crossing the four factors below (3 × 2 × 3 × 2; a brief sketch of the treatment grid follows the list):

  • Three transitional introductions: no introduction of the message-testing task compared with two more-informative transitions
  • Two levels of message balance: an unbalanced version (all messages favoring one party) compared to a partially balanced version (which included a sprinkling of positive messages about the opposing party or negative messages about the favored party)
  • Three types of ‘test’ questions—that is, the queries used to get respondent reactions to each of the tested political messages.
  • Two political party versions (Republican and Democratic)
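To make the factorial structure concrete, here is a minimal sketch in Python of how the four factors cross to produce the thirty-six treatment cells. The factor labels are hypothetical shorthand for the treatments described above, not the wording used in the actual instrument.

```python
# Illustrative sketch (not the authors' code): enumerating the 36 cells of the
# full factorial design described above. Labels are hypothetical shorthand.
from itertools import product

transitions = ["abrupt (control)", "brief forewarning", "detailed forewarning"]
balance = ["unbalanced", "partially balanced"]
test_questions = ["more/less likely (control)",
                  "convincing / serious doubt",
                  "believable / important to know"]
party_versions = ["Republican", "Democratic"]

# Cross all four factors to get every treatment cell.
cells = list(product(transitions, balance, test_questions, party_versions))
assert len(cells) == 36  # 3 x 2 x 3 x 2 treatments

# Each respondent would be randomly assigned to exactly one cell, for example:
# ('brief forewarning', 'partially balanced', 'believable / important to know', 'Democratic')
```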

Feature One:

Some message testing polls start the interview with ordinary polling questions and then shift, with little comment, to testing strongly worded statements about the candidates. In our experiment, an abrupt transition from ordinary preference questions to persuasive or “push” questions was tested against more informative transitions that forewarn the respondent. The abrupt transition was the ‘control’ treatment, asking the respondent “When you hear the following statements, does knowing about this make you more likely or less likely to vote for this candidate?”

The first transitional treatment prepared the respondent for the type of messages that would follow: “Here are some statements you might hear from a political candidate running for office.” The second transitional treatment was much more detailed, using phrases such as “…you might not agree with these statements…some are negative…these statements could cause some people to react strongly…”

Feature Two:

The sequencing and degree of balance between positive and negative messages about the dueling parties were varied. In the unbalanced design, respondents heard a series of positive statements about the favored candidate’s party and then two series of negative statements about the opposing candidate’s party that became increasingly intense. The partially balanced design was still weighted toward the favored party but presented a mix of positive and negative messages about each party, thus giving at least some impression of impartiality.

Feature Three:

The questions that were asked after each persuasive item were varied, ranging from questions about changed voting intention to less direct questions about the convincingness, believability, and importance of each item.

  • The control test question asked: “Does knowing this make you more likely or less likely to vote for the candidate…how strongly do you feel about that?” We reasoned that this question, repeated after each message, might make respondents feel “pushed” to change their opinions.
  • The first treatment questions asked instead, for positive statements: “How convincing is this statement as a reason to vote for this candidate?” For negative statements: “How serious a doubt does this statement create about your voting for this candidate?”
  • The second treatment questions asked: “How believable do you think this statement is?” and “For you as a voter, how important is it for you to know this information?”

Results

This initial Internet survey experiment produced strong results. Respondent experience, as measured by these subjective ratings, was strongly affected by some of the design features that we manipulated.

The results suggest that the experience was most stressful when the respondent’s party did not match the questionnaire version (a Republican receiving the Democratic version, for example) or when the respondent was unaffiliated. (Partisans who heard a version of the survey trashing the opposing party did not react unfavorably to the questions.)

The results suggest that a better designed survey can improve ‘mismatched’ respondents’ ratings of fairness and believability, better meet their expectations, reduce their concern that the results will be used to aid the opposing party, and increase their willingness to participate in this type of survey in the future. This more favorable respondent experience was created by providing some balance of positive and negative statements about the opposing party, implementing different types of “test” questions that sound less “pushy,” and including a transitional introduction that makes the partisan statements less unexpected for the respondent.

The following graph shows the effect of a more balanced design and alternate test questions on the level of stress reported by respondents. When these modifications were implemented, respondents found the interview to be less stressful and were more likely to say they were “very comfortable” or “somewhat comfortable” during the interview.



This second graph also shows the positive effect that modifications to balance and test questions can have on respondents’ expectations. Respondents were more likely to say that the interview was close to or exactly what they expected. If expectations are met, respondents are more likely to consider the experience a positive one and are more willing to participate in similar interviews in the future.



Further study of the observed effects in a real campaign, with more personal messages and an actual phone survey of cold-called voters, would help confirm the strength of these initial results. As our work continues, we will keep involving political polling firms in the design of the experiments. Campaign professionals have already proven receptive to methodological research results that can help them refine these vital tools and avoid potentially damaging fall-out from respondent complaints. So perhaps the next time you are called and asked how you feel about some campaign claims, the experience will be less unpleasant than it might have been without the benefit of this kind of research.

Thomas M. Guterbock is director of the Center for Survey Research at the University of Virginia. Deborah L. Rexrode is a staff research analyst at the Center for Survey Research at the University of Virginia.