
Feb 9, 2012


As we all get reacquainted with the madness that is the first week of the new political season, the time is ripe for a comprehensive rundown of the actual state of play of our political polling. We’ll start off looking at the trends and finish with an election simulation for the December quarter.

First up, the two party preferred trend might surprise a few folks who take their media polling commentary too seriously – it reminds me of a line from Chicken Run: “the polling flashed before my eyes, and it was really boring”.

Over the 3 months from mid-November, nothing has changed at all in the two party preferred status – zip, zilch, nada. Federal politics has been glued to a 54/46 split for nearly 90 days straight.

The primary votes, however, are a little more interesting, with some compositional change occurring underneath that rather dull looking straight line.


While the Labor primary continues to recover from its July tanking – albeit at a pace not dissimilar to continental drift over the last few months – Coalition primary support fell to 46% at the end of last year, before bouncing back slightly post Christmas. It’s interesting to ponder whether that is an effective Coalition vote floor under the prevailing dynamics.

Meanwhile, the Greens continued their year-long voyage of exploring political life between 11 and 12% public support, and the broad “Others” – apart from pollsters having considerable variation in their vote estimates for this rag-tag group – appeared to continue the slow fade that started after their June 2011 highs.

As of last weekend, the actual point estimates of the trends and the changes from the last election look like this:

The government has a 5.4% swing away from them on the primary vote, washing out to a slightly smaller 4.3% swing away from them on the two party preferred. The Coalition has picked up 3.1 points on their primary while the Greens have lost 0.4 points. The broad “Others” have picked up 2.7 points since the 2010 election.

Moving on now to the December quarter’s election simulation. For those not familiar with it, we grab three months’ worth of polling from the major pollsters and a few bits of unpublished material (usually giving us a pooled sample of around the 13 to 15 thousand mark) and break it down by geography – by state first and foremost, but also by region where possible. We then turn the derived swings from those polling results into probability distributions for each seat (taking account of their sub-state geography), test those swings against the current seat margins about a million times with a Monte Carlo simulation and aggregate the results. We end up with a simulated election that shows how many seats would have changed hands had an election been held during that period and had its results closely resembled the polling.
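For those who like to see the machinery, below is a minimal sketch of that kind of Monte Carlo seat simulation. The seat margins, state swings and uncertainty figures in it are invented purely for illustration – the actual pooled estimates, sub-state adjustments and pollster weightings used for these simulations aren’t reproduced here.

```python
import numpy as np

# Illustrative inputs only: each seat has an ALP two party preferred margin
# (percentage points above 50), and each state has a mean swing to the
# Coalition plus an uncertainty (standard deviation) from the pooled polling.
seats = [
    {"name": "Seat A", "state": "NSW", "margin": 1.2},
    {"name": "Seat B", "state": "NSW", "margin": 4.5},
    {"name": "Seat C", "state": "QLD", "margin": 0.8},
    {"name": "Seat D", "state": "QLD", "margin": 6.0},
    {"name": "Seat E", "state": "VIC", "margin": 2.3},
]
state_swing = {"NSW": 4.0, "QLD": 6.5, "VIC": 1.5}   # mean swing against ALP (pts)
state_sigma = {"NSW": 2.0, "QLD": 2.5, "VIC": 2.0}   # polling uncertainty (pts)

n_sims = 1_000_000
rng = np.random.default_rng(2012)

alp_seats = np.zeros(n_sims, dtype=int)
for seat in seats:
    # Draw a swing for this seat in every simulated election, centred on the
    # state trend; a fuller model adds regional and seat-level noise as well.
    swings = rng.normal(state_swing[seat["state"]],
                        state_sigma[seat["state"]], n_sims)
    alp_seats += (swings < seat["margin"]).astype(int)   # ALP holds the seat

# Aggregate the results: seat-count distribution and "at least N seats" odds.
counts = np.bincount(alp_seats, minlength=len(seats) + 1)
for n, c in enumerate(counts):
    print(f"P(ALP wins exactly {n} of {len(seats)} seats) = {c / n_sims:.3f}")
print(f"P(ALP wins at least 3 seats) = {(alp_seats >= 3).mean():.3f}")
```

Scaled up to all 150 seats and fed with the pooled polling rather than made-up numbers, that count-and-aggregate step is the kind of exercise described above.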

First up, the state based swings the government found themselves facing in the December quarter:

The Coalition was experiencing a two party preferred swing towards them in all states and territories, and in both capital cities and regional areas. Seeing how that plays out in seat terms with our simulation, we get:

During the December quarter, the polling had the government facing an election outcome of 53 seats in the 150 seat House of Representatives. Zooming in to the tasty bit:

The ALP had a 65% implied probability of winning at least 52 seats, dropping to 53.1% for at least 53 seats (53 being the most likely single outcome) and to 42% for at least 54 seats.

While that is pretty dismal for the government by just about any yardstick, it was actually a significant improvement over the September quarter results. It’s worth comparing the two (you’ll have to click to expand this one):

In the September quarter simulation, the government was looking at only 43 seats, while the December quarter showed a 10-seat improvement across the probability spectrum for Labor. We can see where the improvement came from by looking at how the state breakdowns changed over the period.

While the two party preferred only increased by 2.1% nationally for the ALP, they made 3.1% gains in Qld, 3.7% gains in regional Australia and 5.2% gains in South Australia. On the other hand, Victoria was flat and the capital cities only moved by 1.2% over the September quarter to December quarter period.

Worth mentioning is that the ALP is currently sitting 1.1% higher on the two party preferred than they experienced in the December quarter (currently 45.8% as opposed to the 44.7% achieved over the October to December period) – mostly because of the relatively poor October results. So the current trend polling would have them somewhere around 5 or 6 seats better off at the moment than they were during the final 3 months of last year.

But the story at the moment is not so much the relatively poor state of Labor’s electoral prospects based on current polling, but the fact that the two party preferred trend line has been unmoved for around 90 days. I can’t find another example of 90 days of nothing in federal polling going back to the mid-1980s, even over a Christmas break. Usually you get some movement – a bit here, a bit there. This one – flat as a tack.

 

Possum Comitatus — Editor of Pollytics



49 thoughts on “The 2012 State of Play”

  1. Ray Polglaze

    Hi Possum,

    There seems to be a significant difference in the trend analysis between your Pollytrend Two Party Preferred graph and Andrew Catsaras’ Poll of the Polls Two Party Preferred % graph on ABC Insiders on Sunday.

    See the Catsaras graph at 1:06 on the video at this link:

    http://www.abc.net.au/insiders/content/2012/s3439649.htm

    While you have the two party preferred trends flatlining over the summer months, Catsaras has continuing trends of increasing support for Labor and decreasing support for the Coalition.

    My understanding is that you are both involved in a process of averaging and weighting all the available opinion polls. I would have expected this similar analysis of the available polls to generate similar trends.

    Do you have any thoughts on why there is this significant difference in the apparent trends on the graphs?

    Thanks,

    Ray Polglaze

  2. dany le roux

    John64

    I have only once been polled and this was by phone just before the last election. The questions I was asked were all about my voting intentions at that next federal election. Is it meaningless, from the point of view of discovering sentiment, to ask about one’s voting intentions for the next election when it is two years out? I understand that all polling questions in these circumstances would be prefaced by “if a federal election were held next weekend” or similar.
    I wonder what difference it would make to a mid-term polling result if you asked the punters their voting intentions “at the next federal election” as opposed to “if an election were held next weekend”.
    I imagine voters would expect themselves to have reached a considered judgment in two years’ time, rather different from their contemporary view, which may have them holding baseball bats or, on the other hand, totally oblivious to the current issues, whatever they may be.
    Perhaps Morgan could experiment with this (“what are your voting intentions at the next federal election?”) since his polling is all over the place anyway.

    Peter
    I grew up with the DLP handing “Ming” (Menzies) government for 23 years using DLP preferences, where the DLP vote was inspired by ex cathedra pronouncements and where the Coalition often relied on DLP senators for a conservative Senate majority. Preferences certainly dictated outcomes in those days, and defined the DLP’s raison d’etre, because they could never get enough votes to have even one HoR member. You have a point.

  3. John64

    @Peter: Is it really so hard to grasp that time changes things? Possum’s analysis is simple: If an election /were/ held last quarter, we can be pretty sure the Liberals would have romped home and Labor would have been thrown out the door. No ifs, no buts.

    Polls are simply smaller samplings of an electorate at any given time. Think of them as mini-elections.

    Only of course, we don’t hold elections every fortnight, do we? We only have them once every three years or so. Well, we’ve got a whole 18 months before the next one. In that time: Labor could change leaders, Tony Abbott could implode, aliens could invade. Anything can happen. Those events that do happen will affect how people judge the Government and Opposition at the time and then affect how they vote /at that time/.

    Votes change as a result.

    People change their minds as circumstances change. If they didn’t, we would’ve had one election in 1901 and nothing would have changed since. But people have died, countries have gone to war and the world has changed since that election.

    The issue you seem to be struggling with is what this tells us. Given an election /wasn’t/ held last quarter, the information is – as you correctly surmise – completely meaningless on that point. No election was held, therefore a poll about a theoretical election is meaningless for that purpose. But that’s /not/ the purpose of polls or of what Possum is doing here.

    What it tells us is what the people think of the Government at this point in time. And they think it stinks. The question is: *What is the Government going to do about that?* If they do /nothing/ (IE: nothing changes) until the next election, then we can surmise what the outcome will be. But let’s assume they see the polling and as thinking human beings, they respond to it. They make /changes/. Well, then depending on the changes they make, the polls will either improve for them… or given Gillard’s track record of late, get even worse.

    So, what this polling tells us is that we have a shitty Government that the majority of Australians don’t like – and unless someone does something about that, we’ll be changing Governments at the next election.

  4. Peter Ormonde

    Possum,

    Thanks for the response. I’m still pretty confused myself.

    So when these pollsters … the data on which these analyses are based … come out and state – “if an election were held yesterday the government would have been”…
    Jings that seems a close shave off predictive to me. It is certainly how they are interpreted by the media and the public. And by really dumb politicians.

    So if they are not predictive, they are descriptive, yes?

    I sent you a link to a BIG POLL… a poll of 22,000 across a well-selected sample four days out from the 2010 ballot. I see what you mean about being unpredictive. Barely even descriptive actually. I wonder what happened between Wednesday and Saturday? Something huge.

    Yet this is your data. And those wildly varying results between polls – from slightly varying methodologies … how do you massage those together? I can’t see the Essential methodology for example sitting easily with any of the major newspaper polls.

    Questions questions questions… Send me a link to something that’ll explain exactly what you’re doing. Something I can understand. Strip the jargon out of it for me.

    Caf:

    What’s the data set we’re using here … trends in time… since the 1970s? Since 1996? What timeline do you use to base the assertion that “at almost all Federal elections (the 2010 election was a different kettle of fish due to being so close), preferencing decisions don’t affect the outcome a jot”? Makes one wonder why we bother with it, doesn’t it???

    See I’m a bit old-school. I take my cock and bull stories from the likes of Antony Green. Here he is here, talking about preference allocations: http://www.abc.net.au/elections/federal/2004/guide/minorprefs.htm … bit old – 2004 – but heck we’re talking trends and that means we’re talking timeframes. Gimme one. I hate generalities. Let’s walk through a few ballots together.

    Bob Katter always matters. It’s the hat.

  5. Possum Comitatus

    Peter – there’s one thing that really needs to be stated here.

    Polling is not predictive – it’s descriptive of opinion at a moment in time.

    So of course, back in January 2010, 72 seats never popped up – because we weren’t answering the question “what will happen at the election?”.

    The question we were answering was “what is the state of public opinion at this moment in time” – no more, no less.

    The reason I chose monte carlo style simulation for this type of analysis (rather than, say, regression work) is that (a) polls are actually probability distributions (something most folks don’t really appreciate), and (b) monte carlo analysis as a methodology is designed specifically to deal with uncertainty.

    The two are a perfect marriage.

    So yes, there are uncertainties involved – preference allocations, differing regional swings that may not be picked up with state and national level polling etc.

    That is actually part of the very reason I use this, rather than a dozen other available methodologies, to estimate the state of play from polling – because it’s designed for dealing with uncertainty better than any other method.

    But the key thing to remember is that polling isn’t predictive, it’s descriptive, of public opinion at a moment in time.
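    To make the “polls are probability distributions” point concrete, here is a rough sketch – assuming a simple random sample, so the sampling error follows the textbook binomial formula; real pollster designs and weightings will differ.

```python
import math

def poll_as_distribution(share_pct, sample_size):
    """Treat a published vote share (say an ALP 2PP of 46%) as a normal
    distribution: mean = the reported share, sd = the binomial standard
    error. Assumes simple random sampling, which real polls only approximate."""
    p = share_pct / 100.0
    se_pct = 100.0 * math.sqrt(p * (1.0 - p) / sample_size)
    return share_pct, se_pct

mean, sd = poll_as_distribution(46.0, 1400)   # a typical single-poll sample size
print(f"ALP 2PP ~ Normal(mean={mean}, sd={sd:.2f})")
print(f"Roughly 95% of the mass lies between {mean - 1.96*sd:.1f} and {mean + 1.96*sd:.1f}")
```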

    As for the final 2010 simulation done on election day: the final polls for the campaign were (ALP two party preferred) Nielsen 52, Galaxy 52, Morgan phone poll 51.5, Essential Report 51, Newspoll 50.2.

    Combined, those polls suggested a relatively easy Labor victory.

    However, aggregating the dynamics underneath the headline figures for those polls and running them through our election simulation engine allowed us to say that it was going to be very close with the very very real possibility of a hung parliament.

    Which is what happened.

    Don’t know how anyone can call that wrong – let alone hopelessly.

    That’s an example of how this works.

  6. Peter Ormonde

    Poss…

    Thanks for the link … clever ideas … interesting methodology… love the graphs.

    But mate… for all that mathematical delicacy, it was woefully wrong!

    In January 2010 you montecarloed a set of more or less random election results based on polling numbers and a few key variables. “In those 20,000 simulated elections, the worst result for the government was 92 seats while the best was 121 seats. The median and mode results were 107 seats while the mean result was 107.2 seats.”
    72 didn’t even get a look in…. not even possible…. not even on the graph let alone under the curve. Who would have guessed?

    My only exposure to this sort of statistical analysis is from biology and ecology where it is used to explore possible outcomes from changing a complex range of variables… like how a forest might respond to a hot dry summer, or a change in the bushfire regime. Montecarlo analysis pumps out a squillion possible answers (based on parameters you set) and then bunches ’em up into probabilities. It tells you that you’re more likely to get something that looks like this set of outcomes and it plucks out a statistical probability curve. Bit like using a shotgun to kill a mosquito.

    Now I’m not sure how that tool (as I understand it) is actually much use where the outcomes are rather limited ( say, a finite number of seats or a winner/loser), where the data is at best patchy (all these pollsters use different “methodologies” – I’m being kind here), where the actual complexity of the outcome (ie preference intentions) is ignored or inferred or generalised, where the variables change in ways we cannot anticipate (like having Julia instead of Kevin… who would have guessed?) Unguessable variables. Unguessable outcomes.

    Now if everyone was doing something different … if there were a squillion possible outcomes …. then it would make more sense to me.

    I can see what you are trying to get at. But I suspect that the quality of the original data doesn’t match up with the precision of your technique. I also suspect that in selecting variables we are constrained by our imaginations and perhaps influenced by our inherent enthusiasms.

    But once again the final result – the actual outcome of the election – comes back to the question of the preference distribution and where this preference distribution happens. This is not a random process. But it is wildly individual ( who’s running) and anything but uniform – not even across states let alone nationally.

    And the outcomes from that set of decisions are wildly different. You get a Bob Katter or a Tony Windsor … chalk and cheese, and potentially a different outcome in terms of forming government. Variables everywhere … a political outcome.

    I reckon montecarlo systems are applicable to answering – or more precisely suggesting likely outcomes for – very simple questions. Will there be more or less growth in your forest?… A or B stuff. This is not what elections are. Thank heavens, eh? Otherwise it would look like the Pyongyang Politbureau having a vote. Talk about holding up showcards.

    Increasingly, elections are assemblages of smaller complex results. It would be more useful to look at ways of tunneling into any decent data to provide very specific rather than generalised answers — a pendulum that only swings left to right.

    To be honest I find that the assumption built into this method – with its inherent two party structure – is perhaps more suited to the USA than here or say Britain, let alone the 74 horse races you get in Europe. Would probably be OK in Beijing.

    I reckon this needs lots more thought Possum. Or more explanation.

    I should declare myself as a heretic when it comes to market research and polling. I reckon it’s trivial. I think it trivialises our politics and turns our governments into smile-hungry populists. Politics is not a constant popularity contest. Nor is it a constant campaign. But this is for a discussion elsewhere perhaps, if you’d like.

    But there is one lingering question hanging over the whole exercise: the end result of your 2010 simulation – despite all its sophistication – was wrong. Hopelessly wrong. Why do you think? What went wrong?

  7. Possum Comitatus

    Peter – this explains it a bit more http://blogs.crikey.com.au/pollytics/2010/01/04/new-year-election-simulation/

    We basically modify a standard electoral pendulum, turn it into state-based pendulums, adjust for quasi event-dependency – then add other information we know (like capital city vs regional breakdowns – sometimes by state, or clusters of seats moving together if the polling shows that, such as Western Sydney) to the probability distributions that are used to call each seat.
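    To illustrate just the last step – turning a seat’s pendulum margin and a state or regional swing distribution into a hold probability – here is a minimal sketch; the swing figures are made up, and the event-dependency and seat-cluster adjustments mentioned above aren’t modelled.

```python
from statistics import NormalDist

def hold_probability(margin_pct, mean_swing_pct, swing_sd_pct):
    """Probability the sitting party holds a seat on the pendulum, where the
    swing against it is modelled as Normal(mean_swing, sd). The seat falls
    when the realised swing exceeds the margin."""
    return NormalDist(mean_swing_pct, swing_sd_pct).cdf(margin_pct)

# Illustrative only: a 3.5% ALP margin facing a 4.3% mean swing with a 2.5% sd.
print(f"Hold probability: {hold_probability(3.5, 4.3, 2.5):.2f}")   # about 0.37
```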

  8. Peter Ormonde

    Now Poss… last time I looked Government in this wide flat land was determined by preferences in a good fistful of marginals, and by independents and by a deal. This influence of preference distributions in key seats is an increasingly obvious trend over the last three decades.

    So how does this sort of market research translate into serious prediction? To what extent is it feasible to draw generalised conclusions about an election outcome without considering preferences… particularly in those seats where it is critical?

    I know there are people out there who like political statistics. I am, sadly, not one of them. It’s not that I was bullied by a standard deviation as a child or anything … and I understand them OK – I just don’t think they’re much use.

    See Poss, you run an election simulation – a montecarlo no less – without preferences (?). Do you just ignore this, or do you apply some sort of historical or assumed distribution? Either way it makes these deduced outcomes and generalities a bit, well, dodgy in my book.

    Same for the “geographical” massaging you mention … not down to the level of an electorate then?

    One thing I did find interesting is the spreads in the range of polls suggested by your three graphs of the primary votes. All over the place aren’t they? How strong are those trend lines you reckon? Averages of averages of averages….

    Now I’d reckon a character with your smarts would be curious to do a bit of analysis of the deviations between polls and the overall “predictability” arising from them over time. Who gets it wrong most of the time and why? Who gets it right and why? A methodological review. Might put a lot of people out of work I’d reckon.

  9. John64

    @shepherdmarilyn “Really and truly I fail to understand this obsession with the meaningless opinions of a few hundred people every now and then.”

    Which political party do you intend to vote for? If you think your answer to that question is meaningless, enjoy Syria. Those opinions mean something because they’re voters, expressing their opinion about the Government / the state of Democracy in Australia at the time.

    Polls have a use, provided you know how to use them. Unfortunately too many politicians these days have the IQ of a fish stick and don’t understand what polls mean, nor how to read them. At a base level, if you’re polling poorly – particularly if those polls jump after a certain issue takes the headlines – then it means you’ve failed to communicate your message.

    Presumably Governments do things “for good reasons” and that any sane, sensible, thinking Human being – when presented with the same information – would make a similar decision. If the polling doesn’t agree with you, then two things are possible:

    1) You’re inept and shouldn’t be in power – because you /actually/ made the wrong decision. In future, try consulting more with the relevant industry / representatives / people that issue concerns and do your homework more thoroughly before going off with a knee-jerk reaction.

    2) You’ve failed to make your case – and need to think about how you’re communicating with the people you’re supposed to be representing. IE: Get out there and work harder.

    Far too often people take option 3) Try and find out what the people think they want and then give that to them. Even though if you have access to all the necessary information, it looks like a bad decision (ref: Malaysian non-solution).

  10. fmark

    Firstly, don’t forget the narrowing!

    And secondly, on the accuracy of polling, don’t forget that:

    1. even though voting is compulsory, not everyone is enrolled. At 30 June 2011 only 90.9% of eligible voters were enrolled (Source: AEC Annual Report, archived at http://www.webcitation.org/65JvjBXNs).
    2. not every enrolled voter votes. In the 2010 Federal Election, only 93.22% of enrolled voters actually cast a vote in the lower house (Source: AEC website, archived at http://www.webcitation.org/65JwBX6bG).
    3. not all votes are formal. In 2010, only 94.45% of lower house votes were formal (Source: AEC 2011, “Analysis of informal voting”, archived at http://www.webcitation.org/65JwQReCb).
    4. even when all these factors are taken into account, people make mistakes when counting the votes. This depends on the voting system and counting system. In the USA, hand-counting methods are apparently estimated to have an error of 0.5% – 2% depending on the method used (Source: 3rd party reportage of Byrne et al. (in press), “Post-Election Auditing: Effects of Election Procedure and Ballot Type on Manual Counting Accuracy, Efficiency and Auditor Satisfaction and Confidence”, Election Law Review. Reportage archived at http://www.webcitation.org/65JwoQod3). I’m sure this is different in Australia, so I’ll use the lower estimate of 0.5%.

    So if we multiply these factors together we get an estimate that perhaps only 80% of eligible voters cast a federal lower house vote that is counted in Australia (0.909 * 0.9322 * 0.9445 * (1 – 0.005) ≈ 0.796, i.e. about 79.6%). These figures are probably full of problems to some extent, but are probably close to the mark. How this affects the accuracy of polling is beyond my ken.
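    Chaining those factors together, as a quick check of the arithmetic (the inputs are the figures cited above; the 0.5% counting error is the assumed lower US estimate):

```python
enrolled   = 0.909    # share of eligible voters enrolled (AEC, 30 June 2011)
turned_out = 0.9322   # share of enrolled voters who voted (2010 lower house)
formal     = 0.9445   # share of lower house votes that were formal (2010)
count_err  = 0.005    # assumed counting error rate (lower US estimate)

effective = enrolled * turned_out * formal * (1 - count_err)
print(f"Eligible voters whose vote is cast and counted: {effective:.1%}")   # ~79.6%
```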
