Thinking back to last month, when the ABS released unemployment figures showing a fall from 5.7% to 5.4%, the howls of incredulity from the economic firms that guessed wrong were deafening. The ABS unemployment figures are actually derived from a poll, albeit an enormous survey of around 41100 people, and the result was pretty much lambasted across the economic sector as a “rogue” number and a consequence of the ABS reducing their sample size. The reporting of the issue was just mind-numbingly dodgy.
So the poor old ABS got slapped around for weeks over how their small sample size of 41100 was creating all this enormous volatility and uncertainty in their figures, and as a result of this hysterical (and, let it be said, mostly ridiculous) gnashing of teeth and stomping of tootsies by private sector economic firms – the ABS has now restored their Labour Force Survey to run on a sample of around 54400.
To point out just how silly all this bleating has been over the last few weeks, we can use our new toy, The Poll Cruncher, to highlight how most of the commentary on this hasn’t been so much clueless as a case of simply misappropriating the plot.
If we plug in 41100 as the sample size and 5.7 as the result for Poll 1, the margin of error on this survey comes out as 0.22%.
If we plug in the larger sample size that the ABS used to run, and will now run again – a sample of 54400 with the same result of 5.7 as Poll 2 – the margin of error is 0.19%.
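Those Poll Cruncher figures can be reproduced with the standard formula for the 95% margin of error on a sample proportion, z√(p(1−p)/n) with z ≈ 1.96. Here’s a minimal sketch in Python – it assumes a simple random sample, whereas the actual ABS survey uses a more complex weighted design, so treat it as an approximation:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.057  # the 5.7% unemployment result
for n in (41100, 54400):
    print(f"n = {n}: margin of error = {margin_of_error(p, n) * 100:.2f}%")
```

Running this gives 0.22% for the 41100 sample and 0.19% for the 54400 sample, matching the Poll Cruncher.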
All this outrage has been over a structural improvement of 0.03% in the accuracy of the raw results!
Let’s get something straight here – this new sample size will make exactly three fifths of five eighths of sweet fark all difference to the unemployment figures. The trend is what matters, and this chicken feed increase in accuracy won’t make an ounce of difference to the trend estimates.
There is a law of diminishing returns when it comes to the sample size/accuracy nexus for surveys, where the marginal increase in accuracy (say, the increase in accuracy for every 1000 increase in the sample size) continually decreases as the sample size increases. When we get up into the tens of thousands as a sample size, the juice becomes barely worth the squeeze.
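That diminishing-returns curve is easy to see directly: because the margin of error shrinks with the square root of the sample size, the gain from each extra 1000 respondents keeps falling. A quick sketch (same simple-random-sample assumption as before, sample sizes chosen for illustration):

```python
import math

def moe(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.057  # 5.7% unemployment
sizes = [1000, 5000, 10000, 41100, 54400]
# marginal accuracy gain from adding another 1000 respondents, in percentage points
gains = [(moe(p, n) - moe(p, n + 1000)) * 100 for n in sizes]
for n, g in zip(sizes, gains):
    print(f"n = {n:6d}: MoE = {moe(p, n) * 100:.3f}%, gain from +1000 = {g:.4f} pts")
```

At a sample of 1000 the extra thousand respondents buys you several tenths of a percentage point of accuracy; by the time you reach 41100 it buys you a few thousandths – barely worth the squeeze.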
People have to realise that sometimes survey results hang out on the fringes of reality through no fault of their own – it is just how samples work in practice. The best we can do is increase the sample size so that the distance between where the definition of “the fringe” sits and the true results is small – but a 0.03 percentage point reduction in that distance is not going to change anything. If 5.4 was considered “rogue”, would 5.43 be considered any less of an outlier?
Of course not, but that’s the size of the distance we’re talking when we look realistically at the new change in sample size.
One of two things was at play here – either the poll was an outlier or the poll wasn’t. There was around a 5% chance that it was an outlier and if that was indeed the case, then we can’t really say anything about what the true figures were except that it’s highly likely to be somewhere around the fringes of either the 5.4 or 6% mark. If we had a larger sample size, it wouldn’t have made any material difference to the results.
If the poll wasn’t an outlier – there is still a chance that the true unemployment result was somewhere between zero movement and a small increase. If we now go back to the Poll Cruncher, change both sample sizes to 41100 (reflecting what actually occurred), plug 5.4% in as the result for Poll 2, and enter 0 and 5 as our Min and Max values for Poll 2, we get a probability of around 3% that unemployment actually stayed flat or increased over the period.
Worth mentioning is that we can use any large positive number instead of 5 here in The Poll Cruncher, as we are trying to find the probability of the increase being greater than or equal to zero rather than between any two given values.
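That 3% figure can be sketched with a normal approximation to the difference between two sample proportions, which appears to be what The Poll Cruncher is doing under the hood. The function name here is illustrative, and the calculation treats the two months’ samples as independent – in reality the ABS survey has overlapping rotation groups between months, so this is only a rough reconstruction:

```python
import math
from statistics import NormalDist

def prob_true_change_at_least(p1: float, p2: float, n1: int, n2: int,
                              threshold: float = 0.0) -> float:
    """P(true change p2 - p1 >= threshold), using a normal approximation
    to the difference of two independent sample proportions."""
    observed_change = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 1 - NormalDist(observed_change, se).cdf(threshold)

# 5.7% then 5.4%, both from samples of 41100
prob = prob_true_change_at_least(0.057, 0.054, 41100, 41100)
print(f"P(unemployment did not actually fall) = {prob:.1%}")
```

The observed fall of 0.3 points is nearly two standard errors of the difference, which is why the probability that unemployment actually held flat or rose comes out at roughly 3%.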
No doubt, come next Thursday when the Labour Force Survey results for May are released, there will be a large gaggle of talking heads in the serious media waxing lyrical about how their criticisms of the ABS were justified (if unemployment goes up) or how there’s something wrong with the figures again (if unemployment stays the same or goes down).
The one thing we can probably all be sure of is that nearly every piece of commentary on the methodology of the unemployment figures will again be spurious nonsense – at least if last month is any yardstick to go by. I often wonder whether the talking heads have ceased selling their firms’ nous as the product, and whether the product they are trying to flog has simply become the sound of their own voice.