US Election Polls - To Weight Or Not To Weight

Weighting Pre-Election Polls for Party Composition:
Should Pollsters Do It or Not?


An essay and compilation of web resources by Alan Reifman, Ph.D., Associate Professor, Human Development and Family Studies, Texas Tech University
Original URL: http://www.hs.ttu.edu/hdfs3390/weighting.htm

September 9, 2004 -- We have seen a lot of polls thus far in the Bush-Kerry race and we're going to see a lot more. Often polls by two different survey outfits taken at the same time will show results in pretty stark disagreement. Literally as I write this, Rasmussen Reports has the race a virtual dead-heat (Bush 47.5, Kerry 46.8), while CBS News has Bush up by 7.

All pollsters try to obtain a random, representative sample of voters to represent the full electorate. In addition to vote choice (i.e., Bush, Kerry, or other), pollsters always ask respondents which party they align themselves with. These two measures -- candidate preference and party ID -- often show great overlap, with Republicans (R's) heavily going for Bush and Democrats (D's) heavily going for Kerry. However, people sometimes vote for the other party's candidate, so candidate preference and party ID are not identical.


One factor (among many) that may contribute to discrepancies between different outfits' polls in their Bush-Kerry margins, I will argue, is polling firms' different philosophies as to whether it's advisable to mathematically adjust their samples -- after all the interviews have been completed -- to make the percentages of D's and R's in their survey sample match the partisan composition that is likely to be evident at the polls on Election Day. The latter can be estimated from exit polls from previous elections, party registration figures (in states where citizens declare a party ID when registering to vote), and surveys.
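The adjustment at issue is a simple form of post-stratification: each respondent is weighted by the ratio of the assumed Election Day share of his or her party to that party's share of the completed sample. A minimal sketch, using an invented 1,000-person sample and invented target shares (neither is from any actual poll):

```python
# Post-stratification weighting by party ID -- illustrative numbers only.
sample = {"D": 310, "R": 380, "I": 310}      # hypothetical respondents by party ID
target = {"D": 0.39, "R": 0.35, "I": 0.26}   # assumed electorate shares

n = sum(sample.values())

# Each respondent's weight = (target share) / (observed share) for their party.
weights = {p: target[p] / (sample[p] / n) for p in sample}

for party in ("D", "R", "I"):
    print(f"{party}: weight {weights[party]:.3f}")
```

With these numbers, each R respondent counts a bit less than one person and each D a bit more, so the weighted sample reproduces the assumed 39/35/26 split while the total effective sample size stays at n.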

(Another issue that often comes up in evaluating pre-election surveys, with which many of you may be familiar, is whether results are reported for "registered" or "likely" voters. That is a different issue from what is being discussed presently. Whether a pollster reports results for registered voters, likely voters, or both, weighting by party ID is a separate, independent decision.)

Exit polls from the three previous presidential elections yield the following percentages of self-identified D's, R's, and independents in the electorate (from Zogby):

         Democrats   Republicans   Independents
1992        34%          34%           33%
1996        39%          34%           27%
2000        39%          35%           26%

A series of ten national poll readings on party affiliation over roughly the last three years from the Los Angeles Times is available here (once the page comes up, you need to scroll down a bit). One additional source is Democratic pollster Stanley Greenberg, author of The Two Americas. He found, after conducting 15 national polls with an aggregate 15,045 voters from late 2001 until early 2003 and allocating "leaners" to the relevant party, that each major party had the allegiance of 46% of the voters.

The controversy occurs when a poll of, say, 1,000 voters shows a partisan composition vastly different from what we've come to expect. Should the pollster make statistical adjustments (described below) to make the party breakdown conform to more typical estimates, or should he/she just leave the numbers alone and report the findings? A good summary of the "back and forth" of this controversy is available in this article from earlier this summer. I will be referring back to this article.

The following two scenarios should illustrate the key issues. Before presenting the scenarios, I want to state that there are noted national authorities on either side of the "weight/no weight" debate (including Democrats on both sides and Republicans on both sides). Each reader should decide for him- or herself. How to actually implement a sample weighting is described in the Appendix.

SCENARIO 1

As noted above, most recent estimates of the partisan composition of the electorate suggest a rough balance between the number of voters leaning toward the D and R parties (i.e., "50/50 nation"), with the possibility that there might be slightly more D's than R's.

In his aforementioned book, Greenberg characterizes party identification as "... a form of social identity, not unlike ethnicity or race, with considerable durability over time" (p. 93). I would argue that individual-level stability should generally lead to population-level stability, although not perfectly so (e.g., from one presidential election to the next, some people pass away, others newly turn 18, immigrants become eligible to vote).

Suppose a pollster completes a survey and finds far more self-identified R's in the sample than D's. This happened in the Newsweek poll released in early September right after the R convention that gave Bush an 11-point lead over Kerry. Newsweek's sample contained 38% R, 31% D, and 31% I (discussed here and here). There would seem to be three plausible explanations for the higher-than-usual sampling of R's:


1. There was a sudden, massive shift in party ID after the R convention.

2. Given that Newsweek's polling was done on Sept. 2-3 (partially overlapping the convention), one could argue that more R's than D's would have made it a point to be home to watch the convention, thus making themselves more accessible to telephone interviewers; even after the final day of the convention, R's may have been more politically energized, making them more likely to agree to participate in a survey.

3. It could have just been plain, "old fashioned" sampling error -- just as a coin, with probabilities of 50% heads and 50% tails, can yield 60% heads in a sequence of flips, random sampling of households could have yielded excessive R's just by chance.

If Greenberg is correct about the stability of individuals' party ID, then the first of the three explanations (a sudden shift) seems unlikely. The fact that other recent polls besides Newsweek's have obtained samples with more R's than D's seems to go against the third explanation (chance). In any event, we would conclude that if the second or third explanation were the true "culprit," Newsweek's party breakdown would appear to be "out of whack" relative to the other aforementioned indicators.
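The coin-flip analogy in the third explanation can be put to a quick test by simulation. The true shares, sample size, and threshold below are assumptions chosen for illustration (roughly an even D/R electorate and a 1,000-person poll), not any pollster's actual figures:

```python
# How often does pure sampling error produce at least a 7-point R edge
# (as in the Newsweek poll) when the true electorate is evenly split?
# All parameters here are illustrative assumptions.
import random

random.seed(0)
PARTIES = ["D", "R", "I"]
TRUE_SHARES = [0.345, 0.345, 0.31]   # assumed "50/50 nation" electorate
N, TRIALS = 1000, 5000

extreme = 0
for _ in range(TRIALS):
    poll = random.choices(PARTIES, weights=TRUE_SHARES, k=N)
    d, r = poll.count("D"), poll.count("R")
    if r - d >= 70:                  # a 7-point R edge in a sample of 1,000
        extreme += 1

print(f"Share of samples with a 7-point R edge: {extreme / TRIALS:.4f}")
```

Under these assumptions such a lopsided draw is rare (well under one sample in twenty), which is consistent with the point above: one outlying poll could be chance, but several polls skewing the same way is harder to attribute to sampling error alone.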

(Again, to keep things bipartisan, the article I cited earlier as providing a good discussion of the "back and forth" of the controversy was itself prompted by a mid-summer L.A. Times poll in which it appeared there were way too many Democrats in the sample.)

It is at this point that pollsters face the choice of whether to adjust the numbers to match more typical estimates of the D-R distribution (i.e., count R's less and D's more), or just leave the sample alone.

In 2000, pollster Scott Rasmussen went with the "leave things alone" strategy, with the result that his firm forecast a 9-point Bush victory over Gore in its final pre-election poll. Rasmussen, to his credit, posted a candid summary on his website:

"Simply put, we had too many Republicans in our sample. For a variety of reasons, our firm has never weighted by party. However, if we had weighted the data before the election to include an equal number of Republicans and Democrats, we would have shown Bush leading by 2 points. Had we weighted our data to match the partisan mix reported by the Voter News Service on Election Night, we would have shown Gore leading by a point."

(Note: This document appears to no longer be available online.)
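Rasmussen's counterfactual shows mechanically how much a margin can move under reweighting. The sketch below reconstructs the arithmetic with invented numbers -- the within-party support rates and both party compositions are hypothetical, chosen only so the results land near the figures he describes:

```python
# Hypothetical illustration of reweighting by party composition.
# Support rates and compositions are invented, not Rasmussen's actual data.
support = {
    "D": {"Bush": 0.10, "Gore": 0.85},
    "R": {"Bush": 0.90, "Gore": 0.06},
    "I": {"Bush": 0.47, "Gore": 0.43},
}

def margin(shares):
    """Bush-minus-Gore margin (percentage points) for a party composition."""
    bush = sum(shares[p] * support[p]["Bush"] for p in shares)
    gore = sum(shares[p] * support[p]["Gore"] for p in shares)
    return (bush - gore) * 100

unweighted = {"D": 0.33, "R": 0.40, "I": 0.27}   # hypothetical R-heavy sample
reweighted = {"D": 0.39, "R": 0.35, "I": 0.26}   # assumed exit-poll-style target

print(f"Unweighted margin: Bush +{margin(unweighted):.1f}")
print(f"Reweighted margin: Bush +{margin(reweighted):.1f}")
```

Nothing about the individual responses changes; shifting six points of composition from R to D alone moves the hypothetical margin from roughly +10 to roughly +1, which is why the weighting decision matters so much.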

One would surmise that Rasmussen is probably weighting this year.

Pollster John Zogby has pioneered the art of sample weighting on party ID. Taking 1996 and 2000 together, he was the most accurate pollster in forecasting the two presidential elections. As he notes in this essay on his website, "My polls use a party weight of 39% Democrat, 35% Republican and 26% Independent."

Another apparent sign of a poll that weights by party ID is that it should exhibit less day-to-day or week-to-week volatility than one that does not.

SCENARIO 2

As previewed above, a number of prominent polling authorities would presumably argue that the Newsweek poll, with its larger-than-typical R composition, or the L.A. Times poll, with the unusually wide D edge, should be left alone and not "retrofitted" into some preconceived template of what the Election Day partisan composition will look like.

According to the "back and forth" article cited above:

"Andrew Kohut of the Pew Research Center for the People and the Press said that he once conducted a survey asking voters their party twice, four days apart, and that he found substantial differences in the responses."

Further, even though Democratic candidates sometimes come out with more favorable readings on party-weighted compared to unweighted polls, Ruy Teixeira, co-author of The Emerging Democratic Majority and operator of a "blog" related to the book, opposes weighting for party ID. In his September 5 entry, he writes:

"Does that mean I favor polls like this weighting their samples by party ID? No, I don't, because the distribution of party ID does shift some over time and polls should be able to capture this. What I do favor is release and prominent display of sample compostions [sic] by party ID, as well as basic demographics, whenever a poll comes out. Consumers of poll data should not have to ferret out this information from obscure places--it should be given out-front by the polling organizations or sponsors themselves. Then people can use this information to make judgements [sic] about whether and to what extent they find the results of the poll plausible."

If I had to argue for not weighting on party ID, I would make two points:

  • As embodied in Teixeira's quote, a "free market of ideas" should prevail. With as much data as possible being released with polls, consumers can reach their own conclusions. And, of course, pollsters acquire reputations over the years as to their surveys' accuracy in forecasting elections. Over the long run, this should serve as a check and balance on polling/statistical procedures that have led them astray.
  • Some pollsters who do not weight on party ID may weight on other demographic characteristics (e.g., sex, race/ethnicity), on the grounds that whether one is male or female is far more stable than whether one is a D or an R. This may remove some of the skew when certain groups are disproportionately represented in the sample.
Hopefully my essay has given you something to think about as the pre-election polls roll in. There are no definitive answers on how to handle these issues. For all we know, Election Day 2004 may have more R's voting than D's, a departure from the last three presidential elections. In that event, polls not weighted on party ID may end up being more accurate than those with weightings based on recent past presidential elections.

My personal advice -- whether, to use a phrase from Senator John McCain, you're a "Republican, Democrat, Libertarian or vegetarian" -- is to interpret polls favorable to your chosen candidate in cautious terms. If the favorable poll turns out to be accurate on Election Day, you can exult, but if it was flawed, at least you won't be blindsided.

Additional polling resources:

Polling Report (compilation of poll results)
http://www.pollingreport.com/

Performance of sample surveys in forecasting the true presidential vote in 2000
http://www.ncpp.org/poll_perform.htm

(At the top of the list of media polls [e.g., Zogby, CBS, Harris], where it says "Election Results," those were the true values for the entire population of voters.)

Letter by Dr. Reifman in the ISR Sampler (published by the University of Michigan's Institute for Social Research), arguing that the older concept of "quota sampling" and sample weighting are conceptually similar (when the document opens, go to page 12 to see Dr. Reifman's letter and responses by two University of Michigan survey experts).

*************

This report is republished here with the permission of the Author.

© Scoop Media
