Following up on our Booker – Lonegan Numbers from October

Did the Rutgers-Eagleton Poll have a “Bradley Effect” in Our Final U.S. Senate Results?

Bear with us, this is a LONG post…

In our final pre-Senate special election poll, we had Newark Mayor (and now U.S. Senator) Cory Booker up 22 points over his opponent, former Bogota Mayor Steve Lonegan. The real-world results were different – Booker’s margin was “only” 12 points or so. At the time we speculated about a number of reasons our head-to-head numbers could have been off, especially given that we did not differ significantly from other polls on questions such as Booker and Lonegan favorability ratings. We speculated some more the day after the election, looking at turnout but also wondering whether the fact that Booker is African American may have played a role. We have since done some fairly complex statistical analysis to examine this question. The upshot is that we see a very clear “race/ethnicity of interviewer” effect in our data; that is, our Black and Hispanic interviewers got more “Booker” votes from the white respondents they talked to than did our white and Asian callers. And our white callers got fewer “Booker” votes among Black and Hispanic respondents than did our non-white callers.

This is a complex phenomenon that has previously been documented by researchers, particularly in the aftermath of the 1989 Virginia governor’s race, when polls badly overstated support for Doug Wilder, the African American candidate who won, but by a much smaller margin than expected. This has commonly been called the “Bradley Effect“. The argument is that respondents “guess” the race of callers and some will then adjust their responses to conform to what they believe is the caller’s expectation. Whether or not that is exactly the mechanism, the data in our case seem to show just that pattern.

Now, is the effect enough to account for being off by 10 points? That’s harder to calculate. However, our call center is very diverse – among the 113 student callers working on that poll, 25% were white, 19% Black, 47% Asian, and 11% Hispanic. Across the board our callers averaged about 7 completes per caller, with some variation by race/ethnicity. Overall, 22% of the 695 respondents for whom we have caller data were collected by white callers, 22% by Black callers, 46% by Asian callers, and 10% by Hispanic callers.

So here’s what we have – this is using all our respondents, NOT adjusting for Likely Voters. (Making that adjustment does not make any difference in our basic results.) First, the unweighted responses to the question:  “Let’s talk about the Senate election in October. If the special election for the Senate seat were being held today and the candidates were [ROTATE ORDER: Democrat Cory Booker and Republican Steve Lonegan], for whom would you vote?”  (Note that we did a follow-up with the “don’t knows,” asking which way they “lean.” We will ignore this for now and focus only on the initial question.)

[Table: Unweighted responses to the Booker–Lonegan vote question]

Note that we have a 22-point margin between Booker and Lonegan in the raw unweighted data, about the same as we had in the final weighted sample. “Refused” represents people who would not answer the question at all, and “System” represents people who were not asked because they said in an initial screening question that they would not be voting.

So what happens if we look at these responses by race of interviewer?

[Table: Vote choice by race/ethnicity of interviewer]

Now we are only dealing with the 721 people who gave us a response to the question. Note that White interviewers got 50.3% support for Booker. But Black interviewers got 59.5%. Hispanic interviewers found even more Booker support: 62%. Finally, Asian interviewers (the largest group in our call center) found 49.9% support for Booker, pretty much the same as white interviewers.
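For readers who want to reproduce this kind of breakdown from interview-level data, a minimal sketch in Python with pandas follows. The file and column names (“vote” and “caller_race”) are hypothetical stand-ins for our internal dataset, not the actual variable names we use.

    import pandas as pd

    # Hypothetical file and columns: 'vote' holds the Booker/Lonegan/Don't Know
    # response, 'caller_race' the interviewer's self-reported race/ethnicity.
    df = pd.read_csv("booker_poll_respondents.csv")

    # Percent choosing each response within each interviewer group
    support_by_caller = (
        pd.crosstab(df["vote"], df["caller_race"], normalize="columns") * 100
    ).round(1)
    print(support_by_caller)

Reading across the “Booker” row of a table like this is what produces the interviewer-level percentages quoted above.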

Next we look at the percentage support for BOOKER by a combination of the respondent’s race/ethnicity and the caller’s race/ethnicity. This now uses the 697 respondents who told us their race (a significant number always refuse to answer that question).

[Table: Percent Booker support by respondent race/ethnicity and interviewer race/ethnicity]

The raw numbers (Total column) show that 49.6% of the white respondents supported Booker, while Booker support was 91.1% among Black respondents, 80.0% among Hispanics, and 51.2% among others. “Other” in this case includes Asian, multiracial, and any other response to the question. These are essentially “normal” results in that we expect Black and Hispanic voters to be more supportive of Booker.

Looking at the Total ROW at the bottom, we see that for White callers, 50.7% of all their respondents supported Booker, with a similar result (50.5%) for Asian callers. But for Black callers, 60.1% of respondents supported Booker, while for Hispanic callers it was 65.2%; both are well above the total 56.3% Booker support among this set of respondents.

More importantly, note that WHITE respondents talking to WHITE callers gave Booker 49.2% support. But when talking to Black or Hispanic callers, white respondents were more likely to report a Booker vote, at 54.9% and 58.0% respectively.  This effect has been documented in the past, including in the Wilder race for VA governor in 1989.

We see another interesting effect with non-white respondents, though we have to be very careful here since we have relatively few of them, so any one group could be highly skewed. But in general, non-white respondents who talked to white callers were less likely to report Booker votes than those who talked to non-white callers.
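The two-way breakdown in the table above can be produced the same way. Here is a rough sketch continuing from the data frame in the earlier snippet, again with a hypothetical column name (“resp_race”) for the respondent’s self-reported race/ethnicity.

    # Share of respondents naming Booker, by respondent race/ethnicity (rows)
    # and interviewer race/ethnicity (columns); margins=True adds the Total
    # row and column discussed above.
    df["booker"] = (df["vote"] == "Booker").astype(int)

    booker_by_pair = pd.pivot_table(
        df.dropna(subset=["resp_race"]),  # drop those who refused the race question
        values="booker",
        index="resp_race",
        columns="caller_race",
        aggfunc="mean",
        margins=True,
    ) * 100
    print(booker_by_pair.round(1))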

All of this is interesting but it doesn’t account for the possibility that callers of different races/ethnicities may have talked to different kinds of respondents. As a simple example, if white callers were more likely to talk to Republicans (regardless of respondent race), while non-white callers talked more to Democrats, we would see the same pattern but it would not be because of the race/ethnicity of the caller. To deal with this we must do a more complex multivariate analysis to control for these kinds of differences.

We won’t go into the details of the statistical analysis here, but it was designed to control for key factors that affect vote choice – partisanship, ideology, voter race/ethnicity, and voter gender. That means we make sure the differences we see in the vote by caller race/ethnicity are NOT because of these factors. We added one more control, for what political scientists term “Racial Resentment” (see also here), a measure of “subtle anti-Black feeling.” We included this because Booker is African American and research has shown that this measure helps predict the likelihood of voting for a Black candidate.
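To give a concrete sense of what this kind of specification looks like, here is a rough sketch in Python using statsmodels. The variable names (party_id, ideology, resp_race, gender, racial_resentment) are hypothetical stand-ins for our survey items, and this illustrates the general approach rather than our exact model.

    import statsmodels.formula.api as smf

    # Logistic regression: does interviewer race/ethnicity still predict a
    # Booker response once partisanship, ideology, respondent race/ethnicity,
    # gender, and racial resentment are held constant?
    model = smf.logit(
        "booker ~ C(caller_race, Treatment(reference='White'))"
        " + C(party_id) + C(ideology) + C(resp_race) + C(gender)"
        " + racial_resentment",
        data=df,
    ).fit()
    print(model.summary())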

By using multivariate statistics (specifically logistic regression) to predict the likelihood of a vote for Booker based on the controls above AND the race/ethnicity of our callers, we can examine the extent to which caller race/ethnicity conditions poll responses. Here is what we find:

[Table: Predicted likelihood of a Booker vote by respondent race/ethnicity and interviewer race/ethnicity, from the logistic regression model]

The first row of data shows all respondents by the race of the interviewer. Results are very similar to the initial table before we control for other factors. Across everyone, voters who talked to Black and Hispanic callers were more likely to say they would vote for Booker than those who talked to white and Asian callers.

As the table shows, there are differences across the race/ethnicity of respondents. Looking only at white voters, they remain more likely to tell Black and Hispanic callers they support Booker. For Black and Hispanic voters, talking to a white caller seems to lower the likelihood of reporting support for Booker, compared to talking to non-white callers. And because the model used for this prediction controls for partisanship and other factors, we are pretty confident that the results are in fact related to the race and ethnicity of callers and the race/ethnicity of voters.
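One way to translate a fitted model like this into the kinds of percentages shown in the table is to compute average predicted probabilities: for each interviewer group, set every respondent’s caller race/ethnicity to that value, keep their own controls as they are, and average the model’s predictions. A sketch, continuing from the model above:

    # Average predicted probability of a Booker response under each hypothetical
    # interviewer race/ethnicity, holding every respondent's own partisanship,
    # ideology, race/ethnicity, gender, and racial resentment at their actual values.
    for race in ["White", "Black", "Hispanic", "Asian"]:
        counterfactual = df.copy()
        counterfactual["caller_race"] = race
        p = model.predict(counterfactual).mean()
        print(f"{race} callers: {p:.1%} predicted Booker support")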

To check this, we also ran similar models on the Buono-Christie responses from the same poll (where our mid-October results were in line with everyone else’s), which showed no effects for race/ethnicity of interviewer. Even more interesting, we also tested this model with the evaluation we asked voters to give Booker on a 0-100 scale (called a “feeling thermometer” rating), and we found no significant effects for race/ethnicity of callers. The issue seems limited to the vote question itself, not to other questions.
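Those robustness checks amount to re-running the same right-hand side with different outcomes. A sketch of that step, with “christie” as a hypothetical binary Christie-vote indicator and “booker_therm” as the hypothetical 0-100 thermometer score; we use ordinary least squares for the 0-100 outcome here, which may differ from what we actually ran.

    # Same controls and interviewer-race term, different outcomes: the
    # Christie-Buono vote (logit) and the 0-100 Booker thermometer (OLS).
    rhs = ("C(caller_race) + C(party_id) + C(ideology) + C(resp_race)"
           " + C(gender) + racial_resentment")

    christie_check = smf.logit("christie ~ " + rhs, data=df).fit()
    thermometer_check = smf.ols("booker_therm ~ " + rhs, data=df).fit()
    print(christie_check.summary())
    print(thermometer_check.summary())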

So what does this all mean?

For the Rutgers-Eagleton Poll, it means that our pre-election numbers overstated Booker support, at least in part, because we have a very diverse call center, probably much more diverse than any other call center that polled on this election. It also means we will have to look more carefully at how we handle election polling when there is a non-white candidate in the mix.

And it also means that in an election like this, with an African-American candidate, polling that does not use interviewers – like computerized polls where respondents listen to a computer asking the questions and respond on their phone keypads, known as “interactive voice response” (IVR) – may produce more accurate results, at least for those who can be reached this way. However, IVR cannot be used to call cell phones, so at a minimum it would be necessary to combine IVR with live calling of cell phones in order to get a reasonable sample of the population. This is what Monmouth did in its pre-election polls, apparently to good effect. IVR has other issues, though, and has to be looked at very carefully.

If you’ve made it this far in this very long post, congratulations! Bottom line for us: our final pre-election Booker-Lonegan poll was off by 10 points, overstating Booker’s numbers. We now think at least some significant part of that error is due to this race/ethnicity-of-interviewer effect, as the evidence shows.

Of course, this does NOT explain our problem in the final Christie-Buono poll, where we were off by 14 points (showing Christie up 36 points while he won by 22). Given the evidence from the October poll, where our numbers for the governor’s race fit with other polling centers’ results, something else must have happened in our final gubernatorial poll. Apparently we suffered from one problem in the Senate race but something else in the race for governor. We’re currently trying to understand what that might have been, and we’ll report more on that effort in the (we hope) not-too-distant future.


Filed under 2013 NJ Election, Cory Booker, NJ Voters

2 responses to “Following up on our Booker – Lonegan Numbers from October”

  1. Fran H

    I think you may be missing the Occam’s razor answer to both these issues. Since the polling was so lopsided in both races, there didn’t seem to be any chance that either Booker or Christie would lose. I think in both cases the actual vote may have been influenced by “I don’t want this guy to get TOO much of a landslide.”

  2. Fran – that could well be a factor also, but it would not explain why our poll was further off than others. Presumably we’d all see an equivalent effect if your suggestion is right. But whether it is or not, the fact remains that we DO see differences by the race/ethnicity of our callers, in a direction that would increase support for Booker in our poll.
