Investors are consumers of economic data. If you are a wise consumer, you know what to emphasize and what to ignore. In this series on indicators I continue my effort to separate wheat from chaff.
Are Regional Fed Indices Useful?
For many years I have downplayed the regional Fed business surveys, with the occasional exceptions of Chicago and Philly. The reasons provide some criteria we can also apply to other indicators.
The Market Bias Against Surveys
Market participants and media alike distrust information from surveys. There are no survey experts among the regular commentators and little discussion of what is important. Any perceived failures get widespread attention. This is often reflected in discussions of “hard” versus “soft” data, particularly when the comparison favors the commentator's own viewpoint. I have a little quiz on this at the end of the post.
The Contributions of Surveys
For some topics, surveys provide better and faster information than any other approach. If we want to know how people feel about the economy, their job prospects, the most important issues, housing availability or many other questions, the survey approach is essential.
Often neglected is that perceptions, even if inaccurate, may drive both personal decision-making and public policy. The survey is the only way to ferret out perceptions.
Limitations of Surveys
A principal limitation is that we cannot know the honesty and decisiveness of the response. Intentions are a moving target. The wording of questions can make a big difference.
These are merely examples, of course. The problems are well-known, and survey experts have developed methods to reduce these errors. Here are some common techniques.
Some surveys use a rotating panel of respondents, so some individuals are re-interviewed over time. This enables the researcher to track changes in individual answers.
The phrasing of questions can be tested with alternative versions on similar groups.
Importantly for market participants, factual questions are more likely to get solid answers than those about intentions. Asking whether business is better this month than last, for example, can be trusted more than asking about likely business investment in the next year.
In summary, surveys vary widely in how much we can trust them, even if they meet standards of technical excellence. Lumping them all together is a lazy oversimplification.
The Most Important Criteria
There are some simple ways to evaluate a survey:
- The single most important aspect of a survey, and the least honored, is getting a representative sample. Without this, your results are meaningless. Or worse, deceptive.
- Non-sampling error occurs when the respondents are not representative for any of several reasons. Systematic nonresponse, omission from the target group, and difficulty in reaching certain respondents are common examples.
- The sample size is important since it helps determine the sampling error or confidence interval of the results.
- The response rate is important. It affects the confidence interval and also might introduce a non-sampling bias. Enthusiastic survey subjects might be more likely to respond, for example.
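As a rough illustration of the sample-size point, here is a minimal Python sketch. It assumes simple random sampling of a proportion (the classic textbook case), which real business surveys rarely achieve exactly, so treat the numbers as ballpark rather than definitive:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n.

    p=0.5 is the worst case; z=1.96 is the standard normal value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A diffusion-style survey with 100 respondents vs. 1,000:
print(f"n=100:   +/- {margin_of_error(100) * 100:.1f} points")   # about 9.8
print(f"n=1000:  +/- {margin_of_error(1000) * 100:.1f} points")  # about 3.1
```

The point: a survey of 100 respondents has a margin of error of nearly ten percentage points. Without knowing the sample size, you cannot judge whether a small monthly wiggle in an index is signal or noise.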
Sources of Good Surveys
Most people do not know the actual source of data or the expertise of those developing it. In the case of the U.S. government, the survey experts are in the Census Bureau. They often help other agencies like the BLS.
The University of Michigan has a long and storied tradition in the field, including the earliest voting studies. Researchers developed important techniques, fixed common problems, improved interview protocols, and trained generations of new experts.
Other universities also have excellent survey programs, of course, but are not as involved in regularly producing business data.
Private polling firms have established teams of survey experts. Their work usually represents a high standard, although some become identified with a political viewpoint. This can influence the work.
Polling analysts are, like us, consumers of polling data. They are useful because they are informed and expert consumers. FiveThirtyEight is a good example.
Applying the Criteria
For purposes of illustration, let’s use the Philly Fed Index. It is highly respected and has the potential to move markets. It is one of the best of the regional surveys. And yet….
- Participation is voluntary, with forms sent to all businesses.
- Nothing is reported about the size of the “sample” or the confidence interval.
- Nothing is known about the response rate.
The Proof is in the Results
My most important test of a survey is whether it demonstrates a long-term relationship with important “hard” data. The Philly Fed, according to a 2003 analysis, has encouragingly high correlations with industrial production, as do the ISM manufacturing report and the Chicago PMI. Those are the only such surveys I follow. One difficulty in meeting the statistical results test is that many surveys have not been around that long. Here is the 2003 indicator correlation matrix.
The media uncritically reports economic data, including explicit surveys. You do not see confidence intervals (sometimes because they are not there). The expectations reported for surveys are a mystery. How would one even begin to forecast these results? I suspect it is like the football polls a few years ago where the trainer was submitting the results instead of the coach!
And finally, here is a little quiz on hard versus soft data. In each of the following cases, please decide whether the report uses a survey. The answers are here. This exercise will definitely help you as a consumer of economic data.
- Payroll jobs
- ISM manufacturing index
- Wholesale inventories
- Retail sales
- Unemployment rate
- Labor participation rate
- ISM non-manufacturing index
- Consumer confidence or sentiment
- Business optimism
- Durable goods orders
- New home sales
- Building permits
- Personal income and spending
- The decennial census of U.S. population
- Existing home sales
- JOLTS report
- Business inventories
- Housing starts
- Regional Fed indexes – Chicago, Empire State, Philly Fed, Dallas, etc.