The trouble with ‘inflation expectations’
Writing in this newspaper in May 2015, Praveen Chakravarty (a founding trustee of IndiaSpend and a former investment banker) noted that Reserve Bank of India (RBI) governor Raghuram Rajan had unfailingly referred to "inflation expectations" in each of his monetary policy review statements. Fifteen months on, Rajan hasn't disappointed, taking care to mention the term in each subsequent monetary policy statement.
In his latest statement on Tuesday, Rajan made a reference to “household inflation expectations remain[ing] elevated” while announcing the decision to keep rates unchanged.
While inflation expectations remaining elevated may not be the sole reason behind the decision to hold rates, the continued references to the phrase show that it plays an important role in the determination of India’s monetary policy.
For a concept of such importance, however, there are several concerns regarding how the value is arrived at.
“Inflation expectation” is not a single number. It comes out of a quarterly survey conducted by the RBI, where about 5,000 people are polled on whether they expect prices to increase over three-month and one-year time-frames. Respondents are also asked how much they expect prices to go up by, and the average of these numbers is widely reported as the “inflation expectations” number (http://bit.ly/2aIJ4JA).
The trouble with the survey begins with sampling. While 5,000 is usually a large enough sample size to produce credible survey results, the problem is that respondents are not randomly chosen; in fact, we don't know how they are chosen.
There are quotas allocated to different categories of professions—30% of respondents, for example, are “housewives”, while 10% are financial sector employees (see chart 1).
Choosing the survey sample by category is a fairly standard practice (it is called "stratified sampling"), but the problem arises when data from respondents in different categories are combined to produce summary statistics. (The only exception is if the shares of these categories among the respondents exactly mirror the shares of these categories in the overall population. This is clearly not true here, since financial sector employees don't account for 10% of the adult urban population, for example.)
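The distortion from pooling quota categories without re-weighting can be sketched with entirely hypothetical numbers (the category means and population shares below are illustrative, not from the RBI survey):

```python
# Illustrative sketch (hypothetical numbers): why pooling a quota sample
# without re-weighting biases the summary statistic.

# Suppose two categories report these (hypothetical) mean inflation expectations:
means = {"financial": 6.0, "other": 12.0}

# Quota shares in the survey sample (e.g. 10% financial-sector respondents)...
sample_share = {"financial": 0.10, "other": 0.90}
# ...versus assumed shares in the adult urban population (far fewer than 10%).
population_share = {"financial": 0.01, "other": 0.99}

# Naive pooled mean, weighting every response equally:
naive = sum(means[c] * sample_share[c] for c in means)

# Post-stratified mean, re-weighting categories to population shares:
weighted = sum(means[c] * population_share[c] for c in means)

print(round(naive, 2))     # 11.4
print(round(weighted, 2))  # 11.94
```

The gap between the two figures grows with the mismatch between sample and population shares, which is why an unweighted pooled average from a quota sample is not a population estimate.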
Next, we don't know how the survey was conducted. The latest survey document (http://bit.ly/2bgtR2F), for example, makes no attempt to tell us how the respondents were identified, or what precise questions were asked; all we have are statistics based on the answers received.
The publication of survey questionnaires matters because the framing of a question can have a large bearing on how people answer it. You can get respondents to give you the answer you want based on the way you frame and order the questions. Thus, without the survey questionnaire, it is impossible to determine whether the reported results actually conform to what the respondents said.
Then, there is the complexity of the topic itself. While the concept of “price rise” or “inflation” might be well understood, it is not an easy task to frame questions in a way that can be answered unambiguously.
From the data presented in the latest survey, for example, it is clear that questions have been asked not only on the "velocity" of prices (the rate at which they move, or the "first derivative") but also on their "acceleration" (whether they are expected to move faster than before, or the "second derivative"). To answer the latter question, a respondent needs not only an idea of whether prices will increase, but also an idea of how much they will increase by, and how that compares to the current rate of increase ("current rate of increase" is itself an ill-defined term for a survey, but we will explore that on another day). This is a hard enough question for experts, so it is hard to see what information is gained by asking it of lay people.
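The velocity/acceleration distinction can be made concrete with a toy price series (the numbers are hypothetical):

```python
# Toy illustration (hypothetical price index): the "velocity" of prices is the
# period-on-period rate of increase; the "acceleration" is the change in that rate.
prices = [100.0, 105.0, 110.25, 118.0]

# First derivative: percentage rate of increase each period.
rates = [(b / a - 1) * 100 for a, b in zip(prices, prices[1:])]

# Second derivative: change in the rate of increase between periods.
accel = [b - a for a, b in zip(rates, rates[1:])]

print([round(r, 2) for r in rates])  # [5.0, 5.0, 7.03]
print([round(a, 2) for a in accel])  # [0.0, 2.03]
```

Note that in the first two periods prices rise at the same 5% rate: velocity is positive but acceleration is zero. A respondent asked "will prices rise faster than before?" must implicitly compute both series in their head.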
The survey goes on to ask people for their actual expectations of inflation over the next three-month and one-year time-frames (this gives the popular "inflation expectations" number). The problem again is one of rate measurement: most people would be hard-pressed to say correctly how much their expenses have gone up by in the preceding few months, so asking them to estimate the rate at which prices will increase in the future is unlikely to elicit a credible answer.
There is another problem—inflation is an annualized number. It is unclear how the survey gleans people’s expectations of three-month inflation numbers.
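To see why this matters, consider the compounding arithmetic (the 2% figure below is purely hypothetical): a seemingly modest three-month price rise implies a much larger annualized rate, and it is unclear which of the two a respondent has in mind.

```python
# Sketch of the annualization ambiguity: a respondent who says "prices will
# rise 2% over the next three months" implies a much higher annualized rate.
quarterly_change = 0.02  # hypothetical 2% rise over three months

# Compounded over four quarters, this corresponds to an annualized rate of:
annualized = ((1 + quarterly_change) ** 4 - 1) * 100

print(round(annualized, 2))  # 8.24
```

Whether respondents report the raw three-month change or its annualized equivalent changes the headline number by a factor of roughly four, so the questionnaire would need to pin this down explicitly.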
A survey of people’s expectations is also highly prone to bias, in that people are likely to focus too much on “spectacular” price changes (i.e. sharp increases and decreases) and ignore commodities where prices change little. People are also unlikely to have a good idea of the precise weights of different components in their consumption basket, adding to noise in the data.
A good “sense check” of the credibility of a survey is whether respondents give sensible answers to fact-based questions (for opinion-based questions, there are no “right answers”), and whether answers are internally consistent.
While it is hard to comment on the internal consistency of the inflation expectations survey based on the data available, the presence of fact-based questions such as the current inflation rate allows us to judge the survey's credibility (see chart 2).
As the chart shows, a large number of respondents believe that the current rate of inflation (however defined; the ambiguity stems from the unspecified time period over which inflation is measured) is greater than 16%, which is wrong by a long way. While we cannot expect most lay people to know the actual inflation rate, the fact that they report a number so far from reality undermines the survey's credibility.
Chakravarty mentions in his piece that “the true merit of the survey lies not in the accuracy of the common woman’s inflation expectations but in the trends of their expectations. Trends matter while levels don’t, argues Rajan”.
Then again, with people’s estimates of current inflation levels being so far off the mark from reality, it is doubtful if any real insight can be drawn out of the trend.
Given its ubiquity in Rajan's monetary policy statements, it is clear that the inflation expectations survey is here to stay. Yet, in light of the above issues (the list is by no means exhaustive), it needs a radical overhaul if it is to remain credible.
Firstly, the RBI should make public the method for selecting the 5,000 respondents, and the precise questionnaire that was administered to them.
Secondly, answers by people from different categories should not be combined, since the combined sample does not weight the categories in proportion to their shares in the population.
Thirdly (this is a leap of faith, making assumptions on what the current questionnaire is like), questions need to be made simpler and more fact-based rather than opinion-based. Asking questions on the second derivative of prices (“will prices increase at a higher rate than currently”), for example, is utterly pointless, as is expecting respondents to give their expectations of inflation to the accuracy of one percentage point.
With 43 rounds having been completed, we are likely to have got all the answers we could have from the current questionnaire, and this is as good a time as any to completely overhaul this survey and make it more meaningful.
It is not desirable to base important decisions such as monetary policy on a flawed survey.