This post was prepared by Ashley Koning, Manager, Rutgers-Eagleton Poll.
Yesterday, we released some new numbers on the upcoming elections for the state Legislature, as well as on the amendment to raise the minimum wage that will also be on the ballot in a little over a month. It just so happens our friends at Monmouth released some numbers on the minimum wage increase, too. Both polls were in the field during similar dates in early September, and both polls show a large majority in support of the minimum wage amendment. Our Rutgers-Eagleton Poll shows support at 76 percent if the election were held today, compared to only 22 percent who would oppose it, and just 2 percent who are unsure. Monmouth, on the other hand, shows a slightly different picture: 65 percent in their poll say they will vote for the minimum wage increase, versus 12 percent who say they are against it, with another 24 percent either unsure or not voting on it at all.
So why is there a difference – specifically, an 11-point difference in support and a 10-point difference in opposition? Is one set of numbers right and the other wrong? To understand what each of these results means, we need to understand the context – specifically, who we are talking about in the analysis and how these questions were asked in the first place. Upon closer inspection, these slightly different numbers are telling slightly different stories, which is important to acknowledge when interpreting the results.
A big and immediate difference between the two polls is who each of them is talking about. Monmouth’s release focuses on registered voters, whereas ours features likely voters. While the two subsamples stem from the same New Jersey voting population, registered voters are simply that – people who say they are registered to vote. That does not mean they will actually vote. Likely voters, in turn, are a smaller subset of registered voters: those deemed most likely to vote in an election based on a variety of characteristics, which may include past voting behavior, interest in and attention to the race, and self-reported likelihood of voting. Polls usually switch to likely voter subsets as the election nears, filtering out registered voters who are less likely to meet these criteria. Likely voter subsamples are meant to give us a – hopefully – clearer picture of what the people who will actually be casting votes may do come Election Day.
Because not all registered voters are likely voters, numbers between the two groups can certainly vary, and support can grow stronger or weaker depending on who fits the likely voter profile for that particular election – hence the fluctuation we see between Monmouth’s numbers and our own. And while we have usually analyzed these questions using registered voters in the past, this time we analyze a likely voter subsample – just as we did in our U.S. Senate and gubernatorial election releases the other week – since we are getting closer to Election Day. Those most likely to vote in the governor’s race will also be most likely to vote for state Assembly, state Senate, and the minimum wage amendment, since they are all on the same ballot, so analyzing just likely voters should give us the best approximation of what will happen come Election Day.
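The likely-voter screening described above can be sketched in a few lines of code. This is purely a hypothetical illustration – the respondents, the screening criteria (past votes and self-reported interest), and the resulting percentages are all invented for demonstration and do not reflect any actual poll’s data or methodology.

```python
# Hypothetical illustration of a likely-voter screen: filtering a
# registered-voter sample down to "likely voters" can shift topline
# support numbers. All data below are made up for demonstration.

respondents = [
    # (supports_amendment, past_votes, interest_0_to_10)
    (True,  3, 9), (True,  2, 8), (False, 3, 7), (True,  0, 2),
    (False, 0, 3), (True,  1, 6), (True,  3, 10), (False, 2, 5),
    (True,  0, 1), (True,  2, 9),
]

def pct_support(sample):
    """Percentage of a sample that supports the amendment."""
    return 100 * sum(1 for s, *_ in sample if s) / len(sample)

# Registered voters: everyone in the sample.
registered = respondents

# A toy likely-voter screen: voted in at least 2 recent elections and
# reports high interest. Real screens combine many more such items.
likely = [r for r in registered if r[1] >= 2 and r[2] >= 7]

print(f"Support among registered voters: {pct_support(registered):.0f}%")
print(f"Support among likely voters:     {pct_support(likely):.0f}%")
```

In this toy sample, support is 70 percent among all registered voters but 80 percent among those passing the screen – the same mechanism by which two polls of the same population can produce different toplines.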
But even in our registered voter subsamples in April and June, our numbers on minimum wage have hovered around 76 and 77 percent. So perhaps the real culprit behind these differing numbers here is question wording. How a question is worded has a significant impact on subsequent opinion towards it, whether that means a difference in the actual words used, a difference in the thoughts that are invoked, or a difference in the choices that are presented.
Here, we have a difference in how respondents are asked to think about the question and how they are told they can answer. First, while our question and Monmouth’s question about the minimum wage amendment are very similar, there is a slight difference in how we ask about the respondent’s choice: we frame it as “if the election were being held today,” while Monmouth asks what respondents will do come the actual Election Day. The difference is seemingly small but can influence how people respond. Polls are a snapshot of what is happening at that moment in time, so asking what people will do in the future invites answers that may change between now and then and can carry greater uncertainty, given the variability of future plans. Asking the question instead as if the respondent has to cast a vote right at that moment gives the decision more immediacy and captures what the respondent is feeling right then and there. In fact, most election questions are worded the way ours is; the “if the election were held today …” prompt has been a tried and true staple of polling. But there is not necessarily a right or wrong way of asking this – just a need to acknowledge that a slight difference in how the voter is asked to think about it can affect their subsequently expressed attitude.
Lastly, the two questions differ in the answer options they give respondents, which in turn can affect the distribution of opinions and how we interpret them. Monmouth asks, “[…] Will you vote for or against this, or are you not sure?” We, on the other hand, ask, “[…] Would you support or oppose this constitutional amendment?” Monmouth explicitly offers respondents the option of expressing uncertainty, giving them an alternative beyond siding for or against the amendment. We instead present respondents with a “forced choice” – which is what occurs in the voting booth – asking them to choose one of the two ways in which they can vote on the amendment. While respondents can certainly volunteer that they “don’t know” or will not vote – and we note this in our results as well – we do not emphasize the opportunity to express uncertainty, so this answer may not be at the forefront of respondents’ minds when answering.
In addition – and this relates back to the “who” component mentioned above – likely voters, the subset of registered voters we analyze in our release, tend to be more certain in their expressed views than registered voters. This could be why only 2 percent of them express uncertainty about their stance on the amendment. Likely voters are, by definition, more likely to vote and more engaged in the political process, and thus their opinions – especially about what they will eventually be voting on – are presumably more solidified. While our “don’t know” responses were still in the single digits among registered voters in past months, the share saying they are unsure is cut in half in these latest results among likely voters.
So what’s the moral of the story? Poll results can vary for a whole variety of reasons, and differing results do not automatically mean that one poll is good and another is bad. Differing results should instead prompt us to explore why they differ and what each poll is trying to tell us. In this case, while each of these polls tells a slightly different story, reading either set of results leads to the same conclusion: there is a very good chance voters will pass the minimum wage increase come Election Day. But only then will we find out how many truly support or oppose the amendment.
As we were preparing to post this today, we saw the new report from Quinnipiac showing Booker only 12 points ahead of Lonegan in the U.S. Senate race. Of course, we reported two weeks ago that Booker held a 35-point lead. More interesting, Stockton State University, which does occasional statewide polling, reported a 26-point Booker lead yesterday. All three polls are of “likely voters,” so that’s not the difference (except that we may all be defining likely voters differently). Is it plausible Booker lost two-thirds of his lead in two weeks? Or dropped 14 points “overnight”? No, of course not. While we have not yet had time to examine this in detail, we suspect there are a number of potential reasons for the differences.
We might be defining likely voters differently. Rutgers-Eagleton may have caught Booker at a particularly positive moment, since we polled just as negative news was focusing on Lonegan. By most standards, Lonegan’s PR over the last two weeks has been much more positive, which could improve his numbers. Quinnipiac could be off just as much as we might be – that is, they could be too low, and we could be too high. It could also be that we have different samples, with different numbers of Democrats and Republicans, which can move numbers a few points one way or another. But the dramatic difference between Quinnipiac and Rutgers-Eagleton is hard to explain. Perhaps Stockton, sitting somewhere in the middle, is closer to “reality.” But this is also hard to know, because so far we cannot find any methodology statement or details from Stockton, as we can for Quinnipiac and Monmouth, and as we provide here at Rutgers-Eagleton. We’ll look further into this, and of course, I’m pretty sure we will all poll again before the October 16 Election Day.
In any case, it’s interesting, and perhaps instructive. No one should ever get all excited about any ONE poll, even if it is ours. A poll is a snapshot in time subject to a range of errors that we do our best to avoid. But frankly, if you want to know what’s going on, follow the averages (like at realclearpolitics.com) rather than any one poll.