Periodically I add an installment to my series on economic indicators. In these posts I combine important selection principles with examples of some of my conclusions. The methods I discuss today are all seriously flawed, despite the widespread attention they draw.
Citi Economic Surprise Index
The Citigroup Economic Surprise Index shows a composite of indicators versus expectations. This is popularly interpreted as a measure of how well the economy is doing. Here is an example of what we typically see.
This interpretation is completely wrong, as Bespoke makes clear in this post. A more accurate interpretation is that the index measures the accuracy of the economists making forecasts! If the weather turns out better (or worse) than the forecast, do we say that it “beat expectations”?
Citi itself has warned about misuse of this indicator, which was originally cooked up by their FX unit. From the FT (emphasis in original):
> The Citi Economic Surprise Index is a perfect example of unique proprietary design which has almost no bearing on those who discuss it. The models were built by quantitative analysts in Citi’s FX unit and were structured for currency trading. Thus, if the CESI wiggles one way or another, investors get signals to buy the yen or the euro or the loonie, etc. It was not meant to be used for stock prices or for Treasuries, but coincident rather than causal relationships are relied on even if they have no consistency whatsoever.
Since this is a proprietary indicator, we have no idea what is included, how the components are weighted, or whether elements change over time. It has no relevance for the analysis of stocks and bonds.
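Citi’s actual construction is proprietary, so the sketch below is purely illustrative: the generic recipe behind surprise indexes is a decay-weighted sum of standardized forecast misses. The function name, decay factor, and sample numbers are all assumptions, not Citi’s method. Note what the inputs are: forecast errors. The index moves when the economists are wrong, not necessarily when the economy changes.

```python
# Illustrative sketch of a generic "economic surprise" composite.
# Citi's actual components, weights, and decay are proprietary;
# everything here is an assumed stand-in.

def surprise_index(releases, decay=0.9):
    """releases: list of (actual, consensus, historical_stdev) tuples,
    oldest first. Returns a decay-weighted sum of standardized surprises."""
    index = 0.0
    for i, (actual, consensus, stdev) in enumerate(releases):
        z = (actual - consensus) / stdev           # standardized forecast miss
        weight = decay ** (len(releases) - 1 - i)  # newer releases count more
        index += weight * z
    return index

# One older miss followed by two recent small beats:
print(round(surprise_index([(1.0, 1.5, 0.5), (2.2, 2.0, 0.4), (3.1, 3.0, 0.2)]), 3))  # → 0.14
```

Even in this toy version, a positive reading says only that recent data beat the consensus — the same releases with gloomier forecasters would have produced a higher number.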
Markit PMI

The Markit PMI reports illustrate another important principle:
When the information would be of great value, the consumer of economics overlooks many flaws.
Global economic growth meets this test: It is important and most standard reports are not timely.
The concept of gathering information from each country, using informed sources and identical questions, is a good one. So what is wrong?
- The results are from surveys. There is nothing inherently wrong with this, but the usefulness of the data depends on how the survey is conducted.
  - Most importantly, how is the sample determined?
  - Are the participants knowledgeable?
  - How many respondents are included, and what is the overall population size?
  - What is the margin of error?
  - Does the question wording influence the response?
  - Is the wording really the same when translated into various languages?
- The results have not been tested. The Markit PMI versions will need at least several business cycles before we can make reliable inferences.
- Those surveyed may not provide honest or objective answers.
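For context on what a one-point “miss” means, it helps to recall that PMI-style surveys report a diffusion index: the percent of respondents reporting improvement plus half the percent reporting no change. The headline number is itself a survey proportion, with all the sampling issues listed above. A minimal sketch (the respondent counts are made up):

```python
def diffusion_index(improved, same, worse):
    """Diffusion-index formula used by PMI-style surveys: percent reporting
    improvement plus half the percent reporting no change. Readings above 50
    suggest expansion; below 50, contraction."""
    total = improved + same + worse
    return 100 * (improved + 0.5 * same) / total

# Hypothetical month with 800 respondents: 300 better, 350 same, 150 worse.
print(diffusion_index(300, 350, 150))  # → 59.375
```

Shift a handful of respondents from “same” to “worse” and the headline moves by a point — exactly the size of move that gets traded on.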
This approach is riding the coattails of the Institute for Supply Management’s manufacturing survey. While the ISM survey shares a few of the drawbacks listed above, it avoids most of them: we know the questions, we know something about the motivations of the participants, and there is a long history to analyze.
My estimate of the margin of error in these surveys is +/- 5%. Major trading conclusions are reached based upon a “miss” of a point or two. Put another way, do you think that 800 survey subjects in China is enough for a good sample?
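The back-of-the-envelope check is straightforward. Under the textbook assumption of simple random sampling — which flatters real surveys, since design effects typically widen the interval — the 95% margin of error for a proportion from 800 respondents is about 3.5 percentage points, so a one- or two-point move is well within the noise:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (in percentage points) for a proportion under
    simple random sampling. p=0.5 is the worst case; real-world survey
    design effects usually make the true interval wider."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(800), 2))  # → 3.46
```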
I have never even reported these results in my WTWA series. A starting point would be a public description of the methodology.
ECRI

The Economic Cycle Research Institute (ECRI) built an excellent forecasting record based on the work of co-founder Geoffrey H. Moore. While the exact history is murky, the current generation of administrators replaced several of Moore’s indicators with market-based alternatives. These were intended to provide faster information and avoid any revisions.
It is not so easy. Markets “revise” data with rapid moves when better information appears. The advantage of speed and accuracy is an illusion.
Putting aside the history, the company moved from a publicly stated method to a secret one. This leads to various problems:
- The consumer cannot evaluate the underlying logic or reasonableness of the approach.
- There is no way to know when the methodology changes.
- There is no check for accuracy. Academic work often contains errors that are caught by reviewers. One of the most important recent studies had major errors because some spreadsheet cells were left out.
When we cannot base our confidence on the method itself, including its development and testing, we can only go by the track record. The ECRI recession call of 2011 was a major error, compounded when they stubbornly adhered to the forecast for five years.
The ECRI is apparently well-connected, since they get plenty of publicity, nearly all of it favorable. They have the power to do much good, if they would make a few changes. I wish they would hire some real economists, reduce the number of variables in the model to cut back on multicollinearity, and make their method public. This would place them in the tradition of other leading experts on the business cycle, whose work is published, reviewed, and verified.
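The multicollinearity point can be made concrete. ECRI’s variables are not public, so the data below are simulated; the sketch only shows why piling overlapping indicators into one model is dangerous: with two nearly identical predictors, the individual regression coefficients are nearly unidentified (only their sum is pinned down), even though the fitted values look fine.

```python
import numpy as np

# Simulated illustration of multicollinearity -- ECRI's actual inputs
# are secret, so x1 and x2 are hypothetical stand-in indicators.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)  # near-duplicate indicator
y = x1 + rng.normal(scale=0.1, size=200)    # the "true" model uses only x1

X = np.column_stack([x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The two coefficients can swing wildly and offset each other; only their
# sum is stable, because the design matrix is badly ill-conditioned.
print(beta, beta.sum())
print(np.linalg.cond(X))
```

Dropping one of the redundant variables — or publishing the model so others can run this diagnostic — is exactly the kind of fix suggested above.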