To have a positive view of life, you need to preserve a high degree of trust both in the people around you and in your understanding of events. Uncertainty is the enemy of trust, and it undermines our faith in our understanding of the world. We need to be able to rely on our perception of the way things are in order to make effective decisions. This is true of all aspects of life; as such, it pertains to the economic and business spheres no less than to any others.
For companies to make effective investment plans, for the Bank of England’s Monetary Policy Committee to make appropriate interest rate decisions, for the government to make sustainable expenditure projections, and for households to make affordable consumption decisions, they all need a reliable understanding of what is happening in the economy. That is why the Office for National Statistics plays such a vital role in our lives. That is not to say that most people recognise it as an institution with a significant impact on their general wellbeing. But it has one. It provides the information that helps determine pay increases, it provides the background that prompts changes in the cost of debt, and in numerous other ways it generates the inputs to decisions that influence our day-to-day lives.
So it is vital that a high degree of trust is maintained between the ONS and virtually every other decision taker, large or small, nationally and internationally.
The task facing the ONS is onerous and by no means simple. The economy is complex and often unpredictable. And it is not always clear how the ONS should evaluate variables that we consider key to our interpretation of where we are, where we have come from and where we are headed. Nonetheless, it is important to investigate the occasions when initial estimates of economic variables turn out to be incorrect. In part this is because, as with any individual, company or institution, there are times when the ONS should be held to account. More importantly, however, by scrutinising errors we can begin to understand when and where data provided by the ONS may be less reliable.
It is reasonable to classify the situations in which economic statistics turn out to be ‘unreliable’ into four categories. The first relates to occasions where subsequent information changes the estimates made on the basis of initial, incomplete evidence. The second is when statistical and accounting methodologies change, giving rise to different outcomes. The third covers changes in how the ONS interprets information. Fourth, there are simple calculation errors. While these are distinct problems, they all have one consequence: our currently prevailing view of past developments in the economy can differ, almost beyond recognition, from what we understood at the time.
I can point to numerous occasions on which this has happened. Most often (and this implies a degree of bias in initially released data), circumstances have turned out to be better than we thought at the time. So, it has been normal for GDP data to be revised higher, leading to stronger growth estimates than initially published. The most recent instance of real importance relates to 2012. The initial growth estimate for the year was 0.0% (or -0.03% to two decimal places). Five years later, the ONS tells us that growth was, in fact, 1.5%. By the way, the ONS claims that, following improvements in its measurement and methodological techniques, its estimation errors are now random. I remain to be convinced.
But it is not only ‘headline’ GDP data that are revised. Incorporated within the headline changes are adjustments to numerous other elements relating to output, demand and incomes. The relevance of this in current circumstances is that the ONS has recently incorporated significant methodological ‘improvements’ into the UK’s national statistics. These have changed, sometimes quite dramatically, the trends in a number of our key economic variables. Take the household sector’s savings ratio. We had already seen adjustments to the data which significantly amended the position prior to the recession. Remember when, for 2008, the savings ratio sank to just 2.0%? Even before the latest data revisions, that had been revised up to 5.4%, but we are now told that it was actually 7.5%, only a little lower than the average for the previous ten years. The first estimate for the savings ratio in 2016 was 5.2%, but this is now put at 7.1%. This matters, not just because it changes the way in which we should interpret recent economic history, but also because it shakes our confidence in the reliability of currently published numbers.
The statistical news is not always good, I should add. While the current account deficit already looked bad before the latest updates, the position now appears markedly worse. For 2016, the deficit had been published at £84.5bn, or 4.4% of GDP; in the latest estimates, this has increased to £113.7bn, or 5.8% of GDP.
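As a rough consistency check of the figures above (a sketch of my own, not from the ONS), one can back out the nominal GDP implied by each vintage of the deficit and its published share of GDP; both come out at roughly £1.9–2.0 trillion, which is reassuring in itself even as the deficit estimate jumps by nearly £30bn:

```python
# Back out the nominal GDP implied by each published vintage of the
# 2016 current account deficit and its stated share of GDP.
initial_deficit_bn, initial_share = 84.5, 0.044    # first estimate: £84.5bn = 4.4% of GDP
revised_deficit_bn, revised_share = 113.7, 0.058   # latest estimate: £113.7bn = 5.8% of GDP

implied_gdp_initial = initial_deficit_bn / initial_share   # implied nominal GDP, £bn
implied_gdp_revised = revised_deficit_bn / revised_share   # implied nominal GDP, £bn

print(round(implied_gdp_initial), round(implied_gdp_revised))
# Both implied GDP figures sit close to £1.9–2.0 trillion, so the revision
# reflects a much larger measured deficit, not a rescaled economy.
```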
These are but two examples of data series that seem to sit on shifting sands. But they highlight a major problem, one with significant implications for economic performance. Our perception of how the economy is performing informs many of our actions. If we do not have accurate economic data, how can we expect companies, policy makers and individuals to take appropriate decisions in key areas such as capital investment, interest rates and borrowing? It also affects public services, inasmuch as spending plans are based on what the government believes to be sustainable given underlying economic performance.
The ONS does not have an easy task. Even allowing for this, however, it is hard to avoid the conclusion that the unreliability of some of our principal economic series most likely results in worse decisions being taken. And if that is true, unreliable data may be judged to be undermining the UK’s long-term growth potential.