
Nov 8, 2011


One of the questions that often get asked is whether a given pollster generally delivers a higher vote estimate for a party than other pollsters – basically, whether a polling firm such as, say, Newspoll (to choose a random polling organisation), leans towards one party or the other.

We can never really tell if any pollster delivers results that are actually higher or lower for a party than other pollsters, because we just don’t have elections every week to determine the true state of public opinion with which to judge them against. However, we can look at relative leans – how pollsters lean for or against a party on the vote estimates compared to what other pollsters are doing at the same time.

It doesn’t tell us who is more accurate – and that’s an important point to keep in your thought orbit here – but rather, it tells us how pollsters behave relative to each other.

To get us into the groove – and something you may not have seen in a while – this is what the primary vote estimates and the two party preferred vote estimates have looked like since September 2010 for the four public pollsters we regularly track.

[Charts omitted: primary vote and two party preferred estimates for the four pollsters since September 2010]

These charts are interesting enough – you can sort of see that some polling firms appear to produce results more favourable to one party than the other. However, to really examine any relative lean, we need to go a little deeper.

The first thing we need is a yardstick against which to compare the pollsters. Thankfully, we already have a perfect tool for this – our Pollytrend estimates. Just to refresh, our Pollytrend estimates are based on an aggregation of the most recent poll from all pollsters we track, weighted by both sample size and time. So the older a poll is, the less weight it has in our trend; similarly, the smaller the sample size, the less weight it has in our trend. As a new poll gets released by a pollster, that new poll replaces the previous poll of that pollster in the algorithm. As far as I can see, there isn’t a more theoretically accurate estimate of the true state of political public opinion in Australia than our Pollytrend series – which makes it kind of handy for what we want to do.
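As a rough illustration of that weighting idea, here is a toy sketch in Python. The poll figures, the exponential half-life, and the exact weighting scheme are all invented for illustration – the real Pollytrend uses more sophisticated regression-based smoothing underneath:

```python
from datetime import date

# Hypothetical poll records: (pollster, last field date, sample size, ALP TPP %).
# All figures are invented for illustration, not real poll results.
polls = [
    ("Newspoll",  date(2011, 10, 30), 1150, 46.0),
    ("Nielsen",   date(2011, 10, 16), 1400, 45.0),
    ("Essential", date(2011, 11, 1),  1890, 47.0),
    ("Morgan",    date(2011, 10, 23),  660, 45.5),
]

def pollytrend_estimate(polls, as_of, half_life_days=14.0):
    """Weighted average of the most recent poll per pollster.

    Weight decays with age (an assumed exponential half-life) and scales
    with sample size. This is only a toy version of the aggregation idea,
    not the actual Pollytrend algorithm.
    """
    # Keep only the most recent poll from each pollster, as described above.
    latest = {}
    for pollster, field_date, n, tpp in polls:
        if field_date <= as_of and (
            pollster not in latest or field_date > latest[pollster][0]
        ):
            latest[pollster] = (field_date, n, tpp)

    # Weighted average: older and smaller polls count for less.
    num = den = 0.0
    for field_date, n, tpp in latest.values():
        age_days = (as_of - field_date).days
        weight = n * 0.5 ** (age_days / half_life_days)
        num += weight * tpp
        den += weight
    return num / den

est = pollytrend_estimate(polls, date(2011, 11, 8))
print(f"Toy trend estimate of ALP TPP: {est:.2f}")
```

The large, recent Essential sample pulls the estimate up while the older Nielsen poll is heavily discounted – which is the whole point of the time and sample-size weighting.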

The other thing we need to be mindful of here is to only compare temporal like-with-like in the polling results. Not all pollsters produce the same number of polls, so we have to take that into consideration. Essential Report comes out every week, Newspoll once a fortnight, Nielsen once a month and Morgan’s Phone Poll (we don’t use their face to face results here) gets produced whenever it gets produced.

To control for the different quantities of polls for each of the pollsters, we’ll compare their poll results to the Pollytrend result that occurred on the last date that a given poll was in the field. So each pollster gets each of their polls compared to the Pollytrend result that existed at the time each poll was undertaken. Rather than do it for the primary votes and the two party preferred, we’ll just use the two party preferred results – and we’ll use ALP two party preferred results as our reference.
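The comparison step can be sketched like this – the trend values and poll figures below are invented purely to show the mechanics of matching each poll to the trend on its last field date:

```python
from datetime import date

# Hypothetical ALP two party preferred Pollytrend values, keyed by date
# (all figures invented for illustration).
trend = {
    date(2011, 10, 16): 45.6,
    date(2011, 10, 30): 46.1,
    date(2011, 11, 1):  46.2,
}

# Each poll: (pollster, last date in the field, ALP TPP %).
polls = [
    ("Nielsen",   date(2011, 10, 16), 45.0),
    ("Newspoll",  date(2011, 10, 30), 46.0),
    ("Essential", date(2011, 11, 1),  47.0),
]

# Compare each poll with the trend value on its last field date.
leans = {
    pollster: round(alp_tpp - trend[last_field], 1)
    for pollster, last_field, alp_tpp in polls
}

for pollster, diff in leans.items():
    print(f"{pollster}: {diff:+.1f} points vs Pollytrend")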

Once we separate the pollsters and look at how their results compared with the Pollytrend results occurring at the same time, this is what it all looks like.

[Charts omitted: each pollster’s ALP two party preferred results plotted against the contemporaneous Pollytrend]

You can start to get a feel for the way each pollster leans relative to what the pollsters were saying collectively. To make it more interesting, we can take the difference between each pollster’s ALP two party preferred result and the equivalent Pollytrend.

[Charts omitted: difference between each pollster’s ALP two party preferred and Pollytrend]

Taking the Essential Media chart – Essential Media being the producer of Essential Report – as an example, what we see is that after March this year, Essential Report consistently produced ALP two party preferred results that were a point or two higher than our Pollytrend. At the other end, Nielsen up until July this year produced ALP two party preferred results that were consistently a few points lower for Labor than what our trend measures were showing at the time.

If we average these differences out, we find that two pollsters lean towards Labor (in that they produce results usually more favourable for Labor compared to Pollytrend) and two lean away from Labor (producing results generally more favourable for the Coalition) – which makes sense, considering Pollytrend is built from all four pollsters, so their leans have to roughly offset each other.

All our pollsters here have relative leans under 1%, so it’s hardly earthshaking stuff –  and it certainly isn’t “bias” in any respectable sense of the word. Rather, Nielsen and Morgan tend to be more favourable to the Coalition by a small margin while EMC and Newspoll tend to be more favourable to Labor by a small margin – at least compared to what the aggregated results of all the pollsters together were saying at any given time.
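The averaging step itself is simple arithmetic. With invented per-poll differences (poll ALP TPP minus Pollytrend, in points – not the real series), it looks something like this:

```python
from statistics import mean

# Invented per-poll differences for illustration only.
diffs = {
    "Essential": [1.2, 0.8, 1.5, 0.4, 0.9],
    "Newspoll":  [0.5, -0.2, 0.8, 0.3, 0.1],
    "Nielsen":   [-1.1, -0.6, -0.9, 0.2, -0.4],
    "Morgan":    [-0.3, -0.8, 0.1, -0.5, -0.2],
}

for pollster, series in diffs.items():
    lean = mean(series)
    direction = "towards Labor" if lean > 0 else "towards the Coalition"
    print(f"{pollster}: {lean:+.2f} points ({direction})")
```

A positive average means the pollster typically sat above the aggregate on Labor’s TPP, a negative average below it – and in these made-up numbers, as in the real series, every average lean stays under a point.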

So yes – our pollsters do lean relative to each other, but not by much, and at varying levels of consistency.

Possum Comitatus — Editor of Pollytics


14 thoughts on “How Australian Pollsters lean”

  1. Possum Comitatus

    dedalus,

    The problem with a hypothesis like that is that just because something happened that way in the past doesn’t make it deterministic of the future. In this case, the hypothesis would run that the general public, through some unknown phenomenon, is forced to put a party behind on the TPP over any 3 year period – which then begs the question of “that’s a pretty powerful and mysterious force – what is it?”

    The answer of which probably starts getting into the territory of religion and the metaphysical pretty quickly 😛

    With things like this, sometimes patterns just happen because, given enough time, some patterns are bound to. So it’s just descriptive rather than predictive or deterministic.

    Peace Piece,

    I actually plan to answer those questions and more in another post later this month. How polling works, how it’s weighted, the mechanics involved – as well as things like your probability of ever being polled.

    mattsui,

    You’re right – there’s no deliberate bias. That’s a sure way of destroying a company.

    What we have here is just structural stuff that goes on with all pollsters. Are mobile-only phone households playing a role in the figures? Is there some sort of non-response bias (the people that get a phone call but refuse to participate in the poll, or the households that have phones but don’t answer) having an impact? Or, in Essential’s case, since it uses an online panel, is there some sort of underlying cultural dynamic associated with people that would participate in online panels that delivers them different results from other pollsters?

    Chris,

    The pollster making up the maximum proportion of any Pollytrend observation is Nielsen, which can get as high as about 32% but has also gotten as low as 20%. It depends on what other polls have been done in the window at the time.

    That does, as you suggest, make the differences between each poll and the trend a more conservative number. But not enormously, and certainly not enough for me to lose any sleep over.

    There aren’t actually enough regular sources of quality polling available in Australia to split the combined data set and still end up with something approaching useful. At best, we could build 4 separate trend measures – one for each pollster where they are excluded and only containing the other 3 – but considering the time it takes to manually build the trends (especially the complex regression work underneath), life is just too short for that sort of thing. Especially when any result would only be a few tenths of a percent different to this anyway.
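The “more conservative” effect discussed in that reply can be sketched with one line of arithmetic: if a pollster contributes fraction w of the trend it is compared against, then trend = w × pollster + (1 − w) × others, so its measured lean (pollster − trend) shrinks to (1 − w) times its leave-one-out lean. An invented example:

```python
# Figures are invented: suppose a pollster's leave-one-out lean is -0.8
# points and its weight in the trend ranges from 20% to 32% (the range
# quoted for Nielsen above).
loo_lean = -0.8
for w in (0.20, 0.32):
    measured = (1 - w) * loo_lean  # shrinkage from self-inclusion
    print(f"weight {w:.0%}: measured lean {measured:+.2f} points")
```

Even at the top of that weight range the measured lean only shrinks by about a quarter of a point, which matches the comment’s conclusion that the understatement isn’t worth losing sleep over.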
