Results are mostly in, and Gov. Chris Christie appears to have won by about 22 points. That's a huge win, of course, especially for a Republican in a "blue" state. But it is well below Quinnipiac's final 28-point lead and our whopping 36-point lead. Monmouth was again the closest.
While we're of course not happy about being so far off, we're also sitting here scratching our heads a bit. When we look at our likely voter screen as well as our raw registered-voter numbers, it's hard to see what went wrong. It is true that we did not have enough Democrats: if the CNN exit polls as of right now (10:45 p.m. Tuesday) are correct, about 40% of voters were Democrats, while our sample had 36%. But the more direct "cause" is that 38% of the Democrats we talked to told us they were voting for Christie, as did 70% of independents; CNN says 32% of Democrats and 66% of independents went for Christie. Combined, these two groups make up 72% of the electorate, and we overstated Christie's support in both.
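The arithmetic behind that crossover overstatement can be sketched quickly. Here is a rough back-of-envelope in Python that holds the group shares fixed at the exit-poll values (Democrats 40%; independents 32%, inferred from the 72% combined figure) and asks how much the crossover-rate gap alone moves the topline. The fixed shares and the simple two-way margin conversion are illustrative assumptions, not part of our actual weighting.

```python
# Back-of-envelope: how much of Christie's overstatement comes from
# crossover rates among Democrats and independents alone.
# Assumption: group shares held at exit-poll values, with
# independents = 72% combined - 40% Democrats = 32%.

dem_share, ind_share = 0.40, 0.32

# Christie support within each group: our poll vs. the CNN exit poll.
poll_rates = {"dem": 0.38, "ind": 0.70}
exit_rates = {"dem": 0.32, "ind": 0.66}

def christie_share(rates):
    """Christie vote share contributed by Democrats and independents."""
    return dem_share * rates["dem"] + ind_share * rates["ind"]

gap = christie_share(poll_rates) - christie_share(exit_rates)
print(f"Vote-share gap from crossover rates alone: {gap:.1%}")
print(f"Implied margin gap (two-way race):         {2 * gap:.1%}")
```

Under these assumptions the crossover differences alone are worth roughly 3.7 points of vote share, or about 7.4 points of margin in a two-way race, with the party-mix difference and other groups presumably accounting for the rest of the miss.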
The fact is, however, this is what our data show. And at a basic level our sample is well representative of the state, both in demographics and in geographic distribution, and we talk to people on both landlines and cell phones. So the skew is not coming from the sample in any serious way, and it is not an obvious partisan skew, since we overstated support for both a Democrat (Booker) and a Republican (Christie). More interesting, perhaps, is that many of our other numbers fit with what we and other polls have consistently found on Christie's approval ratings and other measures that can be compared. And in October, using the same methodology, we came up with numbers (Christie +26) that were basically where Monmouth was (+24) and well below Quinnipiac (+33).
So we have to look at the possibility that something about our live caller operation is creating a "winner" effect at the very end, overstating results for the leading candidate through some interaction between our callers and the respondents that is unique to our operation. We have begun to analyze our overstatement of Booker's win, and that may be something different: we are seeing a serious possibility of a "race of interviewer" effect, in which our white interviewers were far less likely than our non-white interviewers to be told the respondent was voting for Booker. (We will have more on this when we complete the analysis, and we will report the details here.) But that isn't the issue here at all.
We also have to look at the nature of our questionnaires. Because we can only poll a limited number of times in any given year, we pack a lot of questions into each poll. Adding questions beyond the basic voting items to a pre-election poll may itself be a problem. In this case, we stuck with our usual practice of asking our battery of favorability and approval ratings before going on to the voter turnout questions and the actual vote. It is certainly possible that we skewed things toward Christie by first asking a fairly detailed battery of favorability and approval questions, most of which were about Christie; only one (favorability) was about Buono.
There are no doubt other things we will look at as well as we try to improve our operations.
In any case, we suppose there is some comfort in getting the winners right, but it is very small. At least it isn't 1993, when the then Star-Ledger Eagleton Poll not only consistently gave Jim Florio clear leads, but in its final poll that year put him up 9 points. Of course, Christie Whitman eked out that win, leaving the poll 10-plus points off and picking the wrong winner to boot. Our error is in the same ballpark 20 years later, but thanks to Christie's overwhelming win, we at least didn't get the winner wrong!