The first chairman of the National Statistical Commission (NSC), which is now being made a permanent body, Suresh D. Tendulkar, is one of India’s most respected economists. A member of the Prime Minister’s Economic Advisory Council and a former member of the Fifth Pay Commission, he has taught at the Indian Statistical Institute and, since 1978, at the Delhi School of Economics. Entrusted with the job of overhauling the country’s statistical system, Tendulkar, 68, says that for good data, the government must invest resources. Edited excerpts:

What are the main tasks before the commission and what indicators are you looking at in the medium term?

Monitoring and review of the statistical system has been an ongoing process. One of the terms of reference for an earlier commission, which is still vital, is to ensure horizontal and vertical coordination of the statistical system—first, between Central ministries and second, between states. This system has not been functioning well since the 1990s. We’re trying to revive as well as strengthen it. This is the most important task as I see it.

Like our polity, our statistical system, too, is decentralized. I am trying to persuade states to set up bodies similar to the commission. I cannot order them because ours is a federal system, so we seek their cooperation. That goes for ministries too. Of course, there is a statistical adviser in every ministry, and now we have one in every state capital, in the NSSO (National Sample Survey Organization) division, to ensure coordination. He has been asked to keep in touch with the state statistical bureaus.


The committee of the Central and state statistical organizations is also being used as a major forum. Earlier, its meetings used to be every two years; we’ve made it yearly.

In this set-up, the first major task before us is to critically look at the indicators that are most valuable to the economic system and how to make them more efficient and up to date. The second is to give a statutory shape to the NSC, by looking at the experience of other countries.

How flexible and up to date is our data system? Can it adjust to our fast-changing economic scenario?

The fast-changing part is fairly recent, maybe five years or so. You’re right that data should be up to date, but that depends on the kind of parameters chosen and their stability, as well as the extent to which one can compile, process and release the data.

Overall, speed has improved. For instance, the wholesale price index (WPI) is now being released with three weeks’ lag and the final figure with nine weeks’ lag. The national accounts are on a fairly well-established timetable. As for the index of industrial production (IIP), when the industrial structure is changing very fast, we need to change the base of the index, which has become very outdated. But the big problem here is in getting the data directly from producers and other associations.

While producers are prompt in seeking government help if production is falling, when it comes to giving information on production, they are not so prompt. The Collection of Statistics Act was earlier applicable only to ASI (Annual Survey of Industries) data; now it is being amended to cover all industrial data. But it is just a facilitating measure; you cannot compel anybody to give data. If you use force, the producer may give some data on time, but that may not be reliable or accurate. It is not possible to have a full-time machinery just to pursue the data sources every month. The units are spread across the country, data keep trickling in and coverage improves from provisional to final.

So it has to be equally obligatory on the producers’ part to give the data on time. And that is something I want to emphasize. IIP is an externality as far as the producers are concerned. Through it, they get to know what others are doing, and it helps them plan their own output. But at the same time, they are reluctant to divulge their own production, as competition is becoming more and more fierce. If everybody thinks like this, IIP will become unreliable. So, in the case of WPI and IIP, we’re trying to improve and expand both the items and the reporting.

Why do we have four versions of the national accounts statistics (NAS)?

Same reason, updation. Data always trickles in. Sometimes they also undergo revisions and then these revisions are commented upon. The final estimates of the national accounts are always based on ASI data because that is much more reliable. IIP is used initially in NAS because it is available faster. Then, after ASI data become available, that is incorporated.

Sometimes, there have been big differences between the quick and revised estimates of GDP...

There is never a significant difference between these.

Isn’t the difference between 9% and 9.4% GDP growth significant?

Just as sample survey estimates are subject to sampling error, NAS is also subject to some errors. How wide that gap is, is an important issue. Still, the aggregate error is not easy to estimate. Clearly, whether one can treat the difference between 9% and 9.4% as a significant one is a good question.

A lot of data in India is generated in the unorganized sector. Could it not be that we’re not able to capture a large part of our GDP and, therefore, growth?

You’re right. Because of this policy of giving importance to small industry, we have a large number of such units; their birth and death rates are high and they don’t maintain accounts. Getting the data from them is not easy. The ASI data itself is available with a two-year lag (the latest one pertains to 2004-05).

So, for the small sector, we usually undertake benchmark surveys and match them with some economic indicators. These are the so-called indirect estimates. This is kind of inevitable. So we have the economic census, then we have the follow-up surveys, which go on to update NAS.

Could there be a huge gap in capturing the household sector?

Household consumption is estimated by the NSSO by using the common kitchen and blood relation method. In NAS, this includes unincorporated enterprises and non-profit institutions serving households. All that gets aggregated in the private final consumption expenditure, which is the backbone of NAS.

NAS is a synthetic estimate, compiled from different sources. We have had a pilot survey of the non-profit sector. The other difference between the two is the reporting method. Then there is a difference in the method of estimation. While NSSO gets direct reporting from households, NAS uses the commodity balance method or the residual method based on production data. The bulk of the data comes from this method, but that is in the nature of things. On GDP, we’re trying to improve the direct estimates as much as possible.

In agricultural statistics, do you think we have data collection problems here, leading to overestimation or underestimation?

You must remember that agriculture is a state subject. The Rangarajan Commission had noted this problem. The village land records and land utilization work is not undertaken with seriousness. Those functionaries are burdened with other work. So primary data recording is in a shambles, there is no doubt about that. I’ve been meeting state governments and trying to impress this problem upon them, but clearly, this is as far as it can go. It is not easy to assess under-reporting or over-reporting. In a drought year, the state has an inducement to under-report, while in other cases, it may over-report. If you want accurate data, you also need to devote resources and well-trained people with specific responsibilities.

In such a situation, are private data companies a solution? CMIE (Centre for Monitoring Indian Economy), for instance, now sources data on industrial production.

CMIE does that because the industry ministry was not able to handle it. Private companies are generating data. But official data is under the scanner all the time, subjected to criticism and subjected to verification. That is not the case with private data. In NSSO, we are now releasing unit-level data, which is being subjected to all kinds of analysis and criticism. There is complete transparency, which shows up the limitations too. Till such things happen with private data, I wouldn’t put them on par.

Outsourcing is a possibility. There is a case for cooperation, but we need to lay down the rules clearly for the agencies. The memorandum of understanding becomes quite important in that case.

Would you say the Indian data system is not as dependable or robust as it used to be?

The dependability does not rest with the compiler alone. In the case of sample surveys, we are finding resistance from the respondents. If people want good data, they must respond to the surveyors. It’s a mutual responsibility.

I have also been arguing that for good data, the government has to invest resources. Agriculture departments in the states, for instance, are not interested in vesting the ground level functionaries with primary responsibility of collecting data. They are burdening them with other work. The seriousness has to come at that level.

Since I took over, I have been emphasizing primary reporting. Unless that is good, computerization cannot improve reliability. It is the governments’ responsibility to provide trained, whole-time statisticians to do the job.