Comparing Australian pollsters and their relative partisan leans
With June at an end, it's time to crank up the stats and run our quarterly election simulations, based on polling aggregates of the last 3 months from all the pollsters that provide state level breakdowns. Before we do, though, it's worth updating our Pollytrend measures, as June turned out to be quite the mover and shaker – delivering the ALP its largest bout of voter alienation in the history of the Rudd/Gillard government.
First up, the primary vote trends for the majors (the Greens are pretty flat – you can see them in the sidebar on the right).
While the ALP primary didn’t quite fall off a cliff, it certainly stumbled without grace or poise down a rather steep hill. The June period saw the Gillard government drop 4 points of primary vote – from 33% down to 29% (give or take a couple of tenths of a percent) in around 30 days. This flowed through into the two party preferred estimates as a slightly smaller loss of 3%. The two party preferred chart is starting to look a little horrific for the Labor side.
Again we see the same pattern emerge where the ALP vote flattens off at some level for a while before taking a hit and flattening off at a new, lower level of support – rinse and repeat.
The simulations don’t look much better – especially considering that a full two thirds of the sample period we’re using here – April through June – had the government in a much better position than they face right now. First up, the broad results where we look at the probability of the ALP winning at least X number of seats.
The most likely result, were an election held over the last three months with outcomes matching the polling, would have been the ALP winning 53 seats. To give an idea of how far the government has slipped over the last 3 months, it's worth comparing this election simulation to the last set we ran, for the first quarter of 2011. If we look at the cumulative probability distribution above alongside last quarter's, we get:
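Mechanically, an "at least X seats" curve like the one above is just the complementary cumulative distribution of the simulated seat counts, and the "most likely result" is their mode. A minimal sketch of that arithmetic, using invented stand-in simulation output rather than the actual Pollytics numbers:

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical stand-in for the simulator's output: 20,000 simulated
# ALP seat counts (the real distribution comes from the seat-level model).
alp_seats = [round(random.gauss(53, 6)) for _ in range(20_000)]

def prob_at_least(seat_counts, x):
    """Estimated P(ALP wins at least x seats), taken from the simulations."""
    return sum(1 for s in seat_counts if s >= x) / len(seat_counts)

# The single most likely result is the mode of the simulated counts.
most_likely = Counter(alp_seats).most_common(1)[0][0]

print(most_likely, prob_at_least(alp_seats, 76))  # 76 seats = a House majority
```

Reading the curve is then just evaluating `prob_at_least` at each X from 0 up to 150.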
Plugging all the new polling data of late into our trend system, we find that the medium term deterioration in the Labor government's two party preferred vote has stabilised around the 46% mark over the last 6 weeks. A lot of day to day stuff came and went, from budgets to boat people, from [...]
The CSIRO last week released what was effectively a statistical analysis of the reality surrounding large parts of the infamous Home Insulation Program – or for those of you not familiar with this particular policy, you may have heard about it via its common alternative name in the mainstream media, the “OMG, PETER GARRETT IS BURNING DOWN OUR FUCKING HOUSES!” policy.
As we here have long known and talked about, the reality of the Home Insulation Program was always vastly different to its hysterical media portrayal – driven as it was by naive and innumerate journalists looking for easy sensational headlines, and partisan hacks prostituting their cheap wares before a gullible public. Having a cowardly government lacking the plums to tell them all where to stick it was another unfortunate sub-plot in this tale of public deception about the reality of a substantial piece of public policy.
The CSIRO report covers three large areas – analysis of fire related incidents, broader safety risk issues relating to the insulation program, and the development of a risk profiling tool. Today, we’ll just focus on the fire related incidents component, as we’ve long been following this particular issue in depth and it’s nice to be able to bring it to a close, flip the bird at our detractors and exit the battlefield under a big banner saying “We told you so”.
The first thing that needs to be done is explain what the CSIRO *didn’t* do. They *didn’t* answer the elephant in the room question: “In the 12 month period after having insulation installed, was there a difference between pre-program and in-program probability of having an insulation related fire incident?”. They provided all the data we need to estimate it, and they made a sort of assumption about it, but they didn’t actually attempt to tackle that important question head on.
This question is important because it tells us whether the Home Insulation Program was safer or more dangerous in terms of fires than what existed before it over the short term – over the first 12 months of having insulation installed.
The second thing the CSIRO didn’t do was provide long term background fire rates (the number of fires we should expect to see every year from all houses that have had insulation installed for longer than 12 months) that would allow us to answer the questions *we’ve* been asking. They’ve provided background rates that answer a lot of different questions, that answer a lot of questions other people may have been asking, but not the ones we have. This is a simple methodology issue which we can easily deal with, since the CSIRO kindly provided in the report all the data we need.
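Once the right background rates are in hand, the comparison itself reduces to two incident rates over a matched 12-month exposure window and their ratio. A minimal sketch of that arithmetic – every number below is a placeholder, not a figure from the CSIRO report:

```python
def fire_rate_per_100k(fires, installations):
    """Insulation-fire incidents per 100,000 installations,
    counted over the first 12 months after installation."""
    return fires / installations * 100_000

# Placeholder inputs -- substitute the report's actual counts.
pre_program_rate = fire_rate_per_100k(fires=80, installations=1_000_000)
in_program_rate = fire_rate_per_100k(fires=120, installations=1_100_000)

# A ratio above 1 would mean the program period was more fire-prone
# per installation than the pre-program baseline; below 1, safer.
relative_risk = in_program_rate / pre_program_rate
print(round(pre_program_rate, 1), round(in_program_rate, 1), round(relative_risk, 2))
```

The whole argument about the program's fire safety turns on whether that ratio, computed on like-for-like 12-month windows, sits above or below 1.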
To get a feel for what actually happened, it’s worth looking at the strong relationship between when the fire incidents under the program occurred (defined as houses which had the fire services called out to them over what turned out to be an insulation related fire) and when insulations were actually installed. This data comes from Table 5.1 on page 30 of the report.
As the number of installations increased during the program, the number of fire incidents increased with it, before those fire incidents trailed off on an 8 to 12 month tail after the installations had ceased. This is important because it shows us straight away that most fires happened relatively quickly after insulation was installed. To highlight this relationship further, if we change the data from a chronological month by month representation into one that measures, for every fire incident under the program, how many days the insulation had been installed, this is what we get:
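That transformation is just re-keying each incident from its calendar month to the gap between its install date and its fire date, then binning. A sketch with invented example pairs (the real pairs come from matching fire callouts to installation records):

```python
from datetime import date

# Hypothetical (install_date, fire_date) pairs for incident houses.
incidents = [
    (date(2009, 9, 3), date(2009, 9, 20)),
    (date(2009, 11, 12), date(2010, 1, 5)),
    (date(2010, 2, 1), date(2010, 10, 14)),
]

# Re-key each incident by days since installation rather than calendar month.
days_installed = [(fire - install).days for install, fire in incidents]

# Bin into 30-day buckets to build the "days installed" distribution.
buckets = {}
for d in days_installed:
    bucket = d // 30 * 30
    buckets[bucket] = buckets.get(bucket, 0) + 1

print(days_installed, buckets)
```

If most incidents land in the first few buckets, that's the "fires happen soon after installation" pattern the chronological chart was hinting at.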
With the Newspoll quarterly release this week, we now have enough data to aggregate the pollsters together, break the aggregated sample down into state based components and run our first election simulation for 2011.
Before we start though, it’s worth running through an updated Pollytrend to show how the polls have moved over the January to March period.
As we can see, things were pretty calm over the first 6 to 7 weeks of the year in terms of the two party preferred national results. Over the last 5 weeks however, a fairly large chunk of us went a bit berko over the carbon tax, boat people and other topical favourites of the hyperventilating classes.
What this means in practical terms is that the Labor party are today sitting a little above their aggregate support level of the last 3 months that we use for the simulations – by about half a percentage point of two party preferred, or thereabouts. Not a massive difference, but worth noting since someone would have raised it in comments anyway.
At the state level breakdown, this is what the last 3 months of aggregated polling suggested in terms of the swings operating since the last election.
The government has boosted its stocks in WA, but has lost ground to the Coalition in NSW, Vic and SA – where most of that lost ground is occurring in the capital cities.
As always, we’re running our quasi-dependency Monte Carlo based simulation method, which treats individual seat results as neither dependent nor independent events, imitating the real world effects we see operating in elections where seats “move together” at the state level of aggregation. After 20,000 iterations, the simulation stabilised, giving us the following results in terms of the number of seats the ALP would have been likely to win.
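The "neither dependent nor independent" idea can be sketched as a swing shared by every seat in a state, plus noise specific to each seat. A toy version of that structure – the margins, states and variances below are invented for illustration, not the model's actual parameters:

```python
import random

random.seed(1)

# Hypothetical ALP two-party-preferred margins (percentage points) by state.
seats = {
    "NSW": [2.5, -1.0, 6.0, 0.8],
    "VIC": [4.0, 1.5, -0.5],
    "QLD": [-2.0, 3.0, 0.5],
}

def simulate_once(state_sd=2.0, seat_sd=3.0):
    """One simulated election: seats in a state share a common swing
    (the dependent part) plus their own local noise (the independent part)."""
    won = 0
    for margins in seats.values():
        state_swing = random.gauss(0, state_sd)  # moves the whole state together
        for m in margins:
            if m + state_swing + random.gauss(0, seat_sd) > 0:
                won += 1
    return won

results = [simulate_once() for _ in range(20_000)]
expected_seats = sum(results) / len(results)
print(round(expected_seats, 2))
```

Because the state swing is drawn once per state per iteration, marginal seats in the same state tend to fall together – which is exactly the real-world clustering the full model is imitating.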
Time for a polling trends update.
First up, the broad sweep. Plugging all the latest polling data fit to print into our system, this is how the two party preferred Pollytrend has changed.
We see a relatively sharp downturn for the government starting in late February, bottoming out around the second week of March, before recovering even more sharply over the last few weeks to end up within half a point of where they were before this rollercoaster began. Beware noisy commentators declaring that Event X was responsible – this upturn has been in the making for two weeks now.
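Pollytrend itself is a proprietary smoother, but the basic idea – extracting an underlying trend from noisy poll readings so that no single poll can register as a turning point – can be illustrated with something as plain as a centred moving average. The poll series below is invented for the example:

```python
def moving_average(series, window=5):
    """Centred moving average; trims the ends where the window doesn't fit."""
    half = window // 2
    return [
        sum(series[i - half : i + half + 1]) / window
        for i in range(half, len(series) - half)
    ]

# Invented ALP two-party-preferred readings from successive polls.
polls = [48, 47, 49, 46, 45, 44, 46, 45, 47, 48]
trend = moving_average(polls)
print([round(t, 1) for t in trend])
```

With any smoother of this kind, a genuine upturn only appears in the trend after several consecutive polls confirm it – which is why the recovery shows as "in the making for two weeks" rather than dating from any single event or poll.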
In fact, here’s an image I posted a while back on Twitter that traced the polling up until mid March – you can see what looks to be the turning point happening at the time. We’ll run it next to the current chart to highlight the change:
BTW – you can follow me on Twitter here if you’re silly enough. I publish a fair bit of politics stuff through Twitter that isn’t large enough to turn into articles.
No one particular poll was responsible for the trends, as all the pollsters we use (Newspoll, Nielsen, Essential Report, Morgan Phone Poll and Galaxy when they publish) have generally moved together over the last 6 weeks or so.
If we take a deeper squiz at just the recovery period, the Coalition vote under Abbott peaked between the [...]
Over the next 12 months, we’ll have more polls on pricing carbon than we can poke a stick at – some more valuable than others – so it’s probably worth taking a squiz at where public views of carbon pricing sit at the starting gates of what will probably be a bit of a rollercoaster that most of the country will get sick to death of before it ever gets implemented.
The last Nielsen poll, taken back on the 12th of February, is as good a place as any to start. They asked the basic question:
Do you support or oppose the introduction of a price on Carbon?
Breaking the answers down by age, region, gender and voting intentions, we get: