Today we look at the latest polling trends, the timing of the ALP's carbon tax recovery and a big analysis of how most of Australia’s polling and political commentary is based on random numbers.
The new polling trends are in, with the ALP recovery breaking the 48 point barrier on the headline two party preferred for the first time since February 2011.
While the carbon tax is often described as the single driver for the government’s recovery in public support, if we zoom in to the period since March this year, we find a slightly more complicated story.
The ALP actually bottomed out around the end of May/beginning of June, before recovering slowly through the rest of June and July. It wasn’t until August – a full month after the carbon tax came in and Whyalla remained on the map – that a sharp change in the acceleration of the recovery appeared, and one that has continued to this day. This is actually the norm for the way major public policy events impact on polling. It takes time for information to be heard, then absorbed and finally politically processed by people’s noggins. Most people slow cook information that is relevant to their political opinion.
On the primary votes, we’ve seen an even stronger recovery for the government than with the headline two party preferred – the ALP now breaking the 36 point barrier for the first time since November 2010.
Some of this is coming off the Coalition, with their primary vote now the lowest it’s been since March 2011.
However, also boosting the government’s primary support is the fading of the Green vote – slipping through the double digit barrier for the first time this term – and reaching trend lows not seen since the first quarter of 2010, before the last election.
Currently, the point estimates on the trend lines come in like this:
Let’s talk about Horse Race commentary – that breathless hyperbole about changes in public opinion that’s generated every time a new poll comes out. We have armies of allegedly intelligent people – the top political writers in Australian journalism – writing column miles of allegedly serious analysis about how the latest poll did this or that and what it means for Abbott, Gillard, the nation etc. This analysis then dominates the news cycle in such a way that ordinarily intelligent politicians jump on the bandwagon and react to it, which then generates a brand new news cycle of the reaction, then further breathless reporting and analysis of the reaction to the reaction. And so on it goes until everyone disappears up their own meta-sphincter, by which time a new poll is released and the whole batshit crazy process starts again.
Here’s something to chew over – the actual underlying content of this whole circus is little more than random numbers behaving randomly.
Let me explain.
To start with, here’s what the ALP two party preferred looks like with our Pollytrend against all the poll results.
The poll to poll movements (which here include Essential Report, Newspoll, Nielsen, Galaxy and Morgan’s phone polling) are extremely noisy – but that’s what we expect. Polls are noisy because they’re actually probability distributions trying to capture underlying reality, with a mean at the headline result and a standard deviation related to their sample size. It’s also worth looking at the poll to poll change vs the change in Pollytrend, to highlight the noise involved.
It’s not unusual to get poll to poll movements of up to 6 points between polls, when the underlying trend hasn’t moved at all. It’s also a good lesson for why you should only ever compare poll movements ‘like with like’ across time – Newspoll with Newspoll, Essential with Essential etc.
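To see how much apparent movement pure sampling noise can generate on its own, here’s a minimal simulation – the sample size of 1100 is an assumption typical of a phone poll, not any particular pollster’s figure. It draws pairs of polls from an underlying vote that never moves, and looks at the “movement” between them:

```python
import random
import statistics

def simulate_poll(true_share, n=1100):
    """Draw one poll: sample n voters, return the measured share in points."""
    hits = sum(random.random() < true_share for _ in range(n))
    return 100 * hits / n

random.seed(1)
TRUE_SHARE = 0.48  # the underlying trend is perfectly flat in this simulation

# Apparent poll-to-poll "movements" when public opinion hasn't budged at all
moves = [simulate_poll(TRUE_SHARE) - simulate_poll(TRUE_SHARE)
         for _ in range(2000)]

print(round(statistics.stdev(moves), 1))     # ~2 points of pure sampling noise
print(round(max(abs(m) for m in moves), 1))  # multi-point swings from nothing
```

Even with zero change in the underlying vote, swings of four or more points turn up regularly – which is exactly the noise the trend measure has to cut through.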
What our trend measure does is cut through that noise. Not only the noise created by sampling error – what you see described as the margin of error of a given poll, but also the relative leans of each pollster. Some polls lean slightly towards the ALP and some slightly towards the Coalition relative to each other, in a fairly consistent manner.
If we take the difference between each pollster’s results and the equivalent Pollytrend estimate for the time the poll was in the field – and if our trend measure is accurate and the polls behave as polls theoretically should – what we would expect to see is a nice neat series of normal distributions (think bell curves) that show not only the relative lean of each pollster, but the spread of their polling results relative to the trend over time. In fact, that is actually what we do see:
This is a histogram where the bars show how often each pollster produces a result X number of points away from the Pollytrend. The lines are the hypothetical normal distributions associated with the spread of each pollster.
Because there aren’t enough Nielsen, Galaxy and Morgan phone polls to robustly measure their individual distributions, I pooled them together. Consequently, it’s not really worth saying much about them except that collectively they lean 0.4% on average to the Coalition compared to Pollytrend.
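For what it’s worth, the lean-and-spread calculation behind that histogram is dead simple – a pollster’s lean is just the average of its poll-minus-trend differences, and its spread is their standard deviation. A sketch with made-up records (pollster, published result, Pollytrend at fieldwork time), not the real series:

```python
from collections import defaultdict
import statistics

# Hypothetical (pollster, poll result, Pollytrend at fieldwork) records
records = [
    ("Newspoll", 47, 46.5), ("Newspoll", 49, 48.1), ("Newspoll", 46, 47.2),
    ("Essential", 48, 47.3), ("Essential", 47, 46.6), ("Essential", 48, 47.2),
]

diffs = defaultdict(list)
for pollster, poll, trend in records:
    diffs[pollster].append(poll - trend)

for pollster, d in diffs.items():
    lean = statistics.mean(d)     # positive = leans to the ALP vs the trend
    spread = statistics.stdev(d)  # how noisy the pollster is vs the trend
    print(pollster, round(lean, 2), round(spread, 2))
```

Run over a full term of real polls, that’s all a “house effect” estimate is – the mean of those differences, pollster by pollster.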
Essential Report, being a pollster that uses a rolling two week average for its polls, produces a much tighter looking bell curve as a result of that particular methodology. Averaging over two weeks knocks a fair bit of noise out of the system. It leans 0.6 points towards the ALP relative to Pollytrend.
Finally, Newspoll has a relative lean towards the ALP of 0.6 points. Also worth noting that I have a handful of unpublished polls that go into the trend line – they’re all phone polls from academic and commercial research that use political cross tabs. They have a relative lean of 0.3 points towards the Coalition.
Importantly, Essential Report and Newspoll produce results that are statistically indistinguishable from the type of normal distribution we would expect them to have if they were behaving in a perfectly functioning theoretical manner.
So this tells us that both our trend and the pollsters are operating as they theoretically should be – producing results with the error sizes we would expect, as often as we would expect, as a consequence of the type of random sampling that polling is based on.
Now let’s look at how Essential Report and Newspoll track Pollytrend.
Now let’s turn those charts into the poll result to poll result change, measuring it against the Pollytrend to Pollytrend change for the same periods.
First Essential Report:
As a result of the rolling average Essential uses, a lot of noise gets knocked out of the system, so we usually only see 1 point poll to poll movements, if any movement at all. The occasional 2 point movement appears (4 times this term) and only once have we seen a 3 point movement.
Now let’s look at Newspoll.
Here we see Newspoll regularly having two, three and four point movements poll to poll, and on 3 occasions having movements over 4 points. Again, this is exactly how we expect a poll with a sample size a bit over 1000 to behave.
Now let’s go a step further and highlight the size of the Horserace commentary polling problem. Let’s measure the difference between Newspoll’s poll to poll movement and the equivalent trend to trend movement. E.g. if Newspoll goes from 46 to 49 (3 points) and the trend goes from 46 to 47 (1 point), then the difference between them is 2 points – i.e. Newspoll over the period moved 2 points more than the underlying change in public opinion actually moved according to the trend.
Similarly, if Newspoll moved from 48 to 46 (a change of -2 points) and the trend moved over the same period from 47 to 48 (a change of +1 point), the difference between them is 3 points – i.e. Newspoll over the period moved 3 points more than the underlying change in public opinion actually moved according to the trend.
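The arithmetic is trivial, but worth pinning down – a two-line helper reproducing the worked examples above:

```python
def excess_movement(poll_prev, poll_now, trend_prev, trend_now):
    """How far the poll-to-poll movement overshoots the trend-to-trend movement."""
    return (poll_now - poll_prev) - (trend_now - trend_prev)

print(excess_movement(46, 49, 46, 47))  # 3 - 1 = 2 points of excess movement
print(excess_movement(48, 46, 47, 48))  # -2 - 1 = -3: three points in size
```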
This chart shows the size of the movement in Newspoll *over and above* the way public opinion actually moved during the polling periods, with three, four and five point random movements being regular occurrences.
Now think of all those breathless stories – “ALP CRASHES 4 POINTS BECAUSE OF SOMETHING I’VE JUST MADE UP”, “NEWSPOLL SURGE RENEWS LEADERSHIP SPECULATION”, “COALITION VOTE PLUMMETS BECAUSE OF CHANGES TO THE ACTS INTERPRETATION ACT” etc etc.
None of it is true – all of it is based on people reporting random numbers, or people reacting to people reporting random numbers.
They’re not just random numbers because I’m saying it either – they’re random numbers because that’s what the maths tells us. Theoretically, we would expect these results to follow a normal distribution (think bell curve) with a mean of zero, spreading about 2 Newspoll “margins of error” each side of that mean. Actually it would be just under 2 margins of error wide, but a little bit extra gets added because of rounding issues – Newspoll publishes its results to the nearest whole percentage point. Since Newspoll has an MoE of 3 points, we’d expect the distribution to be about 6 points wide each side of the mean.
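The arithmetic behind that expectation runs like this – assuming a Newspoll-style sample of about 1150 (the exact sample size is an assumption; the text only tells us it’s a bit over 1000):

```python
import math

n = 1150   # assumed sample size: "a bit over 1000"
p = 0.5    # worst-case share for the margin-of-error calculation

se = 100 * math.sqrt(p * (1 - p) / n)   # standard error, in points
moe = 1.96 * se                         # the familiar ~3 point margin of error

# A poll movement minus a (noise-free) trend movement is the difference of two
# independent polls, so its standard deviation is sqrt(2) * se.
sigma_diff = math.sqrt(2) * se

print(round(moe, 1))                 # about 3 points
print(round(1.96 * sigma_diff, 1))   # 95% of excess movements within ~4 points
print(round(2.58 * sigma_diff, 1))   # 99% within ~5.4 points, before rounding
```

So even before rounding noise is added, nearly the entire distribution of Newspoll-minus-trend movements should sit inside a band several points wide either side of zero – with nothing at all happening underneath.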
This is what we actually see.
Well, look at that – we have a normal distribution (using a Jarque–Bera test), with a mean of zero and a spread 2 margins of error wide.
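For the curious, the Jarque–Bera statistic is easy to compute by hand – it simply checks whether a sample’s skewness and kurtosis look like a normal distribution’s. A stdlib-only sketch, run here on simulated data rather than the actual Newspoll series:

```python
import math
import random

def jarque_bera(xs):
    """JB = n/6 * (skew^2 + (excess kurtosis)^2 / 4). Under normality it is
    roughly chi-squared with 2 df, so values above 5.99 reject at the 5% level."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

random.seed(3)
# Stand-in for the excess-movement series: Gaussian noise, sd ~2 points
sims = [random.gauss(0, 2.1) for _ in range(60)]
jb = jarque_bera(sims)  # compare against the 5% critical value of 5.99
```

When the statistic comes in under the critical value, the data is statistically indistinguishable from a normal distribution – which is precisely the result reported above for the Newspoll series.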
This is the mechanics of Horserace commentary.
Breathless reporting of random numbers as fundamentally important news is what drives our media and political system down the road to utter absurdity.
Imagine if half our political coverage was based on the Lotto numbers. If this week’s Lotto results summed to 224, about 40 above the expected mean – would everyone piss half their week up the wall opining over what this random event means for Tony Abbott and Julia Gillard? Would politicians fall over themselves to react to the reporting of these random numbers? Would every second political story be framed through the prism of a higher than expected sum of the Lotto results?
Of course it fucking wouldn’t!
They wouldn’t do it with random numbers like Lotto, but they sure as shit do it with random numbers in polling. Every week, week in week out.
Looking deep into one’s navel about what a set of random numbers means for politics doesn’t make you look clever – it actually makes you look like an idiot.
Finally, a lot of folks have asked if we can do the equivalent here of what Nate Silver does in the US. The answer is both Yes and No – not only because of the fundamentally different political systems between our two countries (like the US having separate executives and legislatures), but also because of the massive difference in the polling and types of polling in each country.
It’s perhaps best explained if rather than ask whether we can do that here, if we look instead at what Nate would have to do if he had our data to work with, rather than the flood of state and national polls that the US enjoys.
If Nate had to deal with the equivalent of our polling and system, he’d have to predict which party would achieve a majority in the House of Representatives (and perhaps the size of the majority) using three regular and two irregular national tracking polls, where the only state polls undertaken were in fact groupings of 10 or so states combined, of varying sizes (with no breakdowns between the states making up any particular group), and where a poll came out for each of those groups only once every 2 or 3 months.
It’s a bit different down here.