


Nov 8, 2011



One of the questions that often gets asked is whether a given pollster generally delivers a higher vote estimate for a party than other pollsters – basically, whether a polling firm such as, say, Newspoll (to choose a random polling organisation), leans towards one party or the other.

We can never really tell if any pollster delivers results that are actually higher or lower for a party than other pollsters, because we just don’t have elections every week to reveal the true state of public opinion against which to judge them. However, we can look at relative leans – how pollsters lean for or against a party on their vote estimates compared to what other pollsters are doing at the same time.

It doesn’t tell us who is more accurate – and that’s an important factoid to keep in your thought orbit here – but rather, it tells us how pollsters behave relative to each other.

To get us into the groove – and something you may not have seen in a while – this is what the primary vote estimates and the two party preferred vote estimates have looked like since September 2010 for the four public pollsters we regularly track. Click to expand each chart.




These charts are interesting enough – you can sort of see the way some polling firms seem to produce results more favourable to one party than the other. However, to really examine any relative lean, we need to go a little deeper.

The first thing we need to do is have a yardstick from which to compare the pollsters against. Thankfully, we already have a perfect tool for this – our Pollytrend estimates. Just to refresh, our Pollytrend estimates are based on an aggregation of the most recent poll from all pollsters we track, weighted by both sample size and time. So the older a poll is, the less weight it has in our trend and similarly, the smaller the sample size, the less weight it has in our trend. As a new poll gets released by a pollster, that new poll replaces the previous poll of that pollster in the algorithm. As far as I can see, there isn’t a more theoretically accurate estimate of the true state of political public opinion in Australia than our Pollytrend series – which makes it kind of handy for what we want to do.
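As a rough illustration of that weighting idea – and only an illustration, since the actual Pollytrend algorithm also involves regression work not shown here, and the half-life, weighting scheme and poll figures below are all invented for the sketch – the "most recent poll per pollster, weighted by sample size and age" logic might look something like this:

```python
from datetime import date

def trend_estimate(polls, as_of, half_life_days=14):
    """Weighted average of each pollster's most recent poll.

    polls: {pollster: (last_field_date, sample_size, alp_tpp)}
    Older polls and smaller samples get less weight; the exponential
    decay and 14-day half-life are assumptions for this sketch.
    """
    weighted_sum = 0.0
    weight_total = 0.0
    for pollster, (field_date, n, tpp) in polls.items():
        age = (as_of - field_date).days
        time_weight = 0.5 ** (age / half_life_days)  # older polls count less
        weight = n * time_weight                     # bigger samples count more
        weighted_sum += weight * tpp
        weight_total += weight
    return weighted_sum / weight_total

# Invented figures, roughly in the range the charts show
polls = {
    "Newspoll":  (date(2011, 11, 6), 1100, 46.0),
    "Nielsen":   (date(2011, 10, 16), 1400, 45.0),
    "Essential": (date(2011, 11, 7), 1900, 47.0),
    "Morgan":    (date(2011, 10, 30),  650, 45.5),
}
print(round(trend_estimate(polls, date(2011, 11, 8)), 2))
```

Note how a month-old Nielsen contributes much less than a fresh Essential despite its respectable sample size – that is the "weighted by both sample size and time" behaviour in action.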

The other thing we need to be mindful of here is to only compare temporal like-with-like in the polling results. Not all pollsters produce the same number of polls, so we have to take that into consideration. Essential Report comes out every week, Newspoll once a fortnight, Nielsen once a month and Morgan’s Phone Poll (we don’t use their face to face results here) gets produced whenever it gets produced.

To control for the different quantities of polls for each of the pollsters, we’ll compare their poll results to the Pollytrend result that occurred on the last date that a given poll was in the field. So each pollster gets each of their polls compared to the Pollytrend result that existed at the time each poll was undertaken. Rather than do it for the primary votes and the two party preferred, we’ll just use the two party preferred results – and we’ll use ALP two party preferred results as our reference.
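A minimal sketch of that comparison step – match each poll to the trend value on its last field date, difference the ALP two party preferred figures, then average per pollster. All figures here are made up for illustration, and a positive lean means more favourable to Labor than the pooled trend:

```python
def relative_leans(polls, trend_by_date):
    """Average (poll TPP - trend TPP) for each pollster.

    polls: list of (pollster, last_field_date, alp_tpp)
    trend_by_date: {last_field_date: trend ALP TPP on that date}
    """
    diffs = {}
    for pollster, field_date, tpp in polls:
        # each poll is compared to the trend as it stood when the poll was in the field
        diffs.setdefault(pollster, []).append(tpp - trend_by_date[field_date])
    return {p: sum(d) / len(d) for p, d in diffs.items()}

# Invented example polls and trend values
polls = [
    ("Essential", "2011-10-24", 46.0),
    ("Essential", "2011-10-31", 47.0),
    ("Newspoll",  "2011-10-30", 46.0),
    ("Nielsen",   "2011-10-16", 44.5),
]
trend = {"2011-10-24": 45.5, "2011-10-31": 45.8,
         "2011-10-30": 45.7, "2011-10-16": 45.3}

leans = relative_leans(polls, trend)
```

With these invented numbers Essential comes out leaning to Labor and Nielsen away from Labor, which is the shape of the result described below.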

Once we separate the pollsters and look at how their results compared with the Pollytrend results occurring at the same time, this is what it all looks like. Just click on each chart to expand.





You can start to get a feel for the way each pollster leans relative to what the pollsters were saying collectively. To make it more interesting, we can take the difference between each pollster’s ALP two party preferred result and the equivalent Pollytrend – again, click to expand the charts.




Taking the chart for Essential Media – the producer of Essential Report – as an example, what we see is that after March this year, Essential Report consistently produced ALP two party preferred results that were a point or two higher than our Pollytrend. At the other end, Nielsen up until July this year produced ALP two party preferred results that were consistently a few points lower for Labor than what our trend measures were showing at the time.

If we average these differences out, we find that two pollsters lean towards Labor (in that they produce results usually more favourable for Labor compared to Pollytrend) and two lean away from Labor (producing results generally more favourable for the Coalition)… which makes sense, considering Pollytrend is built from these same polls, so the differences have to roughly balance out.

All our pollsters here have relative leans under 1%, so it’s hardly earthshaking stuff –  and it certainly isn’t “bias” in any respectable sense of the word. Rather, Nielsen and Morgan tend to be more favourable to the Coalition by a small margin while EMC and Newspoll tend to be more favourable to Labor by a small margin – at least compared to what the aggregated results of all the pollsters together were saying at any given time.

So yes – our pollsters do lean relative to each other, but not by much, and at varying levels of consistency.

Possum Comitatus — Editor of Pollytics


14 thoughts on “How Australian Pollsters lean”

  1. Possum, perhaps there is something in the “evening-out” hypothesis. Back all those years ago when Rudd was enjoying astronomical leads on voting intentions, it seemed nothing could bring him down. Then 2010 happened. The sky-high ratings couldn’t last forever. Like an elastic band, the public mood swung the other way. In fact, since then, it’s somewhat mirrored the previous 3 years. I would predict it can’t last that long (particularly since there’s no real change going on in the country, nor is the government 15 years old like a certain unit on the east coast was last year). The voting public can’t go on loving or hating an elected official for this long without any credible reason. The starry eyes toward Rudd wore off, so will the disgust at everything Gillard says and does.

    Oh, and Interest Rates…

  2. dedalus,

    The problem with a hypothesis like that is that just because something happened that way in the past doesn’t make it deterministic of the future. In this case, the hypothesis would run that the general public, through some unknown phenomenon, is forced to put a party behind on the TPP over any 3 year period – which then begs the question of “that’s a pretty powerful and mysterious force – what is it?”

    The answer of which probably starts getting into the territory of religion and the metaphysical pretty quickly 😛

    With things like this, sometimes patterns just happen because, given enough time, some patterns are bound to. So it’s just descriptive rather than predictive or deterministic.

    Peace Piece,

    I actually plan to answer those questions and more in another post later this month. How polling works, how it’s weighted, the mechanics involved – as well as things like your probability of ever being polled.


    You’re right – there’s no deliberate bias. That’s a sure way of destroying a company.

    What we have here is just structural stuff that goes on with all pollsters. Are mobile-only phone households playing a role in the figures? Is there some sort of non-response bias (the people who get a phone call but refuse to participate in the poll, or the households that have phones but don’t answer) having an impact? Or, in Essential’s case, since it uses an online panel, is there some sort of underlying cultural dynamic associated with people who would participate in online panels that delivers them different results from other pollsters?


    The pollster making up the largest proportion of any Pollytrend observation is Nielsen, which can get as high as about 32%, but has also gotten as low as 20%. It depends on what other polls have been done in the window at the time.

    That does, as you suggest, make the differences between each poll and the trend a more conservative number. But not enormously, and certainly not enough for me to lose any sleep over.

    There aren’t actually enough regular sources of quality polling available in Australia to go splitting the combined data set and still end up with something approaching useful. At best, we could build 4 separate trend measures – one for each pollster, where that pollster is excluded and only the other 3 are included – but considering the time it takes to manually build the trends (especially the complex regression work underneath), life is just too short for that sort of thing. Especially when any result would only be a few tenths of a percent different to this anyway.

  3. Hardly surprising that the variation within your data set is so low, since these polls are what make up your trends. It’s like correlating a trait with an element that makes up the trait – it has a massive r-squared, but what else would you expect?
    Far more informative would be a comparison of these four individually with sources not in your combined data set.

  4. Cooee Possum, excellent work and very interesting.
    It’s safe to say an intentional bias would defeat the purpose of polling (unless you were working directly for one side or the other). Any trending bias must be either random or somehow connected to the way the polls are conducted?
    Also, I recall reading here that margins of error on this type of polling could be as much as 2 percent. That would put these little tinges of red and blue into some perspective, eh?

  5. I think the differences for Labor’s primary vote between Essential and Newspoll are due to the medium of polling: web for the former, phone for the latter.

    Can anyone conjecture whether that contributes?

  6. This is a nice analysis, but what about the background information of each poll? There are many variables in the way the pollsters operate which may influence the poll results to lean toward one party over another, such as the nature of the questions used and the demographic of those questioned.
    This is a genuine question: can anyone offer advice on where (or if) these details are available for every poll undertaken by the above 4 firms?

  7. On a related matter, I’d love you to comment on this hypothesis:

    Based on data from your long-term historical polling graphs, it seems that there has been no case of a party leading in EVERY poll taken over a full 3 year period. Therefore, it’s pretty certain that, AT SOME TIME between now and the due election date in late 2013, the ALP will lead the Coalition, AT LEAST ONCE, in the 2-party preferred vote (whether temporarily or not being irrelevant to the point I’m making). This is probably due to the way polls swing up and down because of factors local to the specific polling period, though I admit I’m no expert in these matters.

    So, is it fair to say that the pessimism/optimism that partisan supporters show when reading current polls is misplaced, particularly in the case of polls taken very early in a three year cycle? This hypothesis is presented as solace or reality check to readers of this blog, depending on which way they lean.

  8. Many thanks Poss. From this I would take away that Essential and Newspoll are pretty good running indicators, mainly due to frequency of polling, with there being a bit less noise in the Essential as they use a two week rolling sample?

    Sigh.. two years to go. Will be more ups and downs to come.

  9. Worth noting is that comparisons between polling organisations (rather than between one organisation and Pollytrend) should be done by summing the (absolute) “Relative TPP Lean” figures.

    I.e. If Nielsen and/or Morgan print ALP TPPs one point less than Newspoll and/or Essential (maybe even two points, if you’re looking at Nielsen vs. Essential), then they’re reflecting pretty much the same underlying results (according to Pollytrend).

    A similar analysis for primary votes would be interesting, Poss, to expand on Bowe’s rumination 🙂

  10. One of the things that sticks out here is that Essential has Labor at a constantly higher point on the primary vote than the phone pollsters – but since that translates into a similarly consistent lower vote for the Greens, it ends up being nearer the rest of the pack on two-party preferred. It seems to me that it’s Essential that’s landing on the mark – in its eight pre-election state and federal polls since 2007, Newspoll has on average had Labor 1.2 per cent too low and the Greens 1.3 per cent too high. So I tend to think those 27 per cent primary votes we were getting for Labor a few months ago were a bit artificial.

  11. [The one thing I’d add to that is merely because a pollster leans one way doesn’t mean they are wrong, or weighted too heavily – it may be that the other pollsters are the ones that are wrong.]

    Exactly right! We don’t know who is actually accurate, because we don’t have elections every week to judge.

  12. The one thing I’d add to that is merely because a pollster leans one way doesn’t mean they are wrong, or weighted too heavily – it may be that the other pollsters are the ones that are wrong.

    Also – I’m no expert on polling, but I assume that the raw results are very different to the actual published results, i.e. that the numbers are then weighted this way and that to try and mimic the actual voting demographic. This would of course mean that they are real numbers, but based on an extraordinary amount of guesswork.

    Really we should be surprised when they actually get within cooee of right.

    Interesting stuff. Nice work Poss.