A month after statisticians from the National Sample Survey Office (NSSO) published a report exposing holes in one of the key databases used in India’s gross domestic product (GDP) calculations, the controversy around India’s new GDP series refuses to die down, with several economic commentators continuing to raise questions about India’s GDP numbers and the official statistical machinery.
The data storm erupted after Mint first reported the findings of the NSSO report on 8 May. The report, based on a field survey of services firms, showed that 16.4% of companies in the MCA-21 database were either non-traceable or closed, and another 21.4% were ‘out of coverage’ or misclassified. These results meant that two detailed reports based on the first-of-its-kind survey had to be junked.
A Mint analysis suggests that some of the concerns about the accuracy of India’s new GDP series stem from legacy problems with the national accounts system in the country, which were either left unaddressed or were aggravated during the base year change exercise in 2014-15, while others spring from methodological changes made during that exercise.
The revisions controversy
In January this year, the Ministry of Statistics and Programme Implementation (Mospi) published the revised growth estimates for fiscal 2017, raising growth for that year by 1.1 percentage points to 8.2%, the highest in a decade. Stunned economists were unable to fathom how such a large revision could take place in a year when demonetisation had sucked out 86% of the cash in circulation, and evidence from other sources indicated that the economy was hit, even if temporarily, by that move.
In a joint statement in March, 108 economists and social scientists highlighted this issue as one of the areas of concern, suggesting that the revision might have been influenced by ‘political considerations’.
There is no evidence as yet that there were indeed such considerations at play in this episode. However, it does show that an old problem with the GDP series, that of volatile revisions, may have become worse.
In a first-of-its-kind analysis of revisions to GDP data, Amey Sapre of the Delhi-based National Institute of Public Finance and Policy (NIPFP) (a think tank funded by the finance ministry) and Rajeshwari Sengupta of the Mumbai-based Indira Gandhi Institute of Development Research (IGIDR) (a research institute funded by the Reserve Bank of India) showed that revisions for some sectors have historically been large and unpredictable.
Their research published by NIPFP as a working paper in 2017 suggested that the revisions (between advance estimates and revised estimates) in overall GDP numbers are relatively smaller in magnitude (and volatility) but noted that overall revisions tend to have an upward bias. Given data limitations, there is no separate analysis for the old and new series.
Nonetheless, the data presented by the scholars suggest that the revision between the first and second revised estimate in fiscal 2017 (1.1 percentage points) was indeed exceptional. The average difference between the first and second revised estimates between 2004-05 and 2015-16 was 0.3 percentage points.
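The comparison above amounts to measuring the gap between successive estimate vintages for each year. The snippet below sketches that calculation with hypothetical figures for two of the years (only the 1.1-point fiscal 2017 revision is taken from the article; the other numbers are illustrative):

```python
# Hypothetical vintages of annual GDP growth estimates (in %), illustrating
# how revision magnitude (second revised minus first revised estimate) is
# computed. Only FY17's 1.1-point revision reflects the article; FY15 and
# FY16 figures are made up for illustration.
estimates = {
    # year: (first revised estimate, second revised estimate)
    "FY15": (7.2, 7.4),
    "FY16": (7.9, 8.0),
    "FY17": (7.1, 8.2),  # the exceptional 1.1-point upward revision
}

# Revision for each year, rounded to one decimal place.
revisions = {y: round(second - first, 1) for y, (first, second) in estimates.items()}
print(revisions)  # {'FY15': 0.2, 'FY16': 0.1, 'FY17': 1.1}
```

Against the 0.3-percentage-point average revision between 2004-05 and 2015-16 reported by the researchers, the fiscal 2017 number stands out as an outlier.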
Large revisions not only raise questions on credibility but also create challenges for policymakers. Since revised data is available much later, policymakers rely on initial (advance or provisional) estimates to take policy decisions. If advance estimates suggest weak growth, it can prompt policymakers to stimulate the economy. But revised estimates may suggest the opposite and by the time those estimates are available, the economy could be in the midst of a stimulus-driven inflationary spiral. Policymakers designing sectoral packages face even greater challenges.
The problem of revisions could have been managed better had there been transparency regarding revisions. Unfortunately, that is not the case in India.
“We...find a lack of information about the process of revisions as compared to international practices,” wrote Sapre and Sengupta. “An ideal revision process undertaken (by) the national statistical agency should also contain a discussion on the relevance, reliability, and accuracy of the GDP estimates so as to convey a transparent picture to the various stakeholders.”
The overestimation bias in informal sector growth
Ever since the new GDP series was introduced in 2015, critics have questioned the manner in which informal sector growth has been estimated. The assumption in the new series that the informal manufacturing sector grew at the same rate as the formal manufacturing sector (as measured by the Annual Survey of Industries or ASI) was also questioned.
Subsequent research by G.C. Manna, a senior adviser at the National Council of Applied Economic Research (NCAER) and a former director general at the Central Statistics Office (CSO), showed that there was indeed a wide divergence between growth rates reported by ASI for quasi-corporates (proprietary and partnership units) and those derived from informal sector surveys.
This suggests that the informal sector growth in the new series could be substantially inflated. This may also explain why the impact of demonetisation on the economy was not captured adequately in our official statistics.
The deflator controversy
The other source of controversy around the new GDP series has centred on the issue of deflators, which are used to separate the nominal growth in GDP from the real GDP growth (adjusting for inflation or deflation in prices). This is partly a legacy problem, but some critics argue that it may have worsened in the new series.
In a 2016 op-ed article for Mint (See ‘Real GDP is growing at 5%, not 7.1%’, 14 March 2016), Sengupta of IGIDR pointed out that the use of the wholesale price index (WPI) as deflator for several sectors of the economy (particularly services) is inappropriate, and that if alternative deflators were used, the growth rate could come down by a couple of percentage points.
Other scholars have also pointed out that the use of a single deflator is problematic since it assumes that input and output prices move in the same manner. If that assumption is dropped and a ‘double-deflation’ method is applied to compute real GDP growth, the estimates could change significantly.
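The mechanics of the two methods can be sketched with toy numbers (all figures below are hypothetical, chosen only so that input prices rise faster than output prices):

```python
# Toy illustration (hypothetical numbers) of why single deflation and
# double deflation can yield very different real growth estimates when
# input and output prices diverge.

# Nominal values for a sector in two years (arbitrary units).
output = {"y1": 1000.0, "y2": 1100.0}   # nominal gross output
inputs = {"y1": 600.0,  "y2": 690.0}    # nominal intermediate inputs

# Price indices (base year = 100). Input prices rise faster than output prices.
output_price = {"y1": 100.0, "y2": 104.0}
input_price  = {"y1": 100.0, "y2": 110.0}

def single_deflation(year):
    """Deflate nominal GVA (output minus inputs) by the output price alone."""
    nominal_gva = output[year] - inputs[year]
    return nominal_gva / (output_price[year] / 100.0)

def double_deflation(year):
    """Deflate output and inputs by their own price indices, then subtract."""
    real_output = output[year] / (output_price[year] / 100.0)
    real_inputs = inputs[year] / (input_price[year] / 100.0)
    return real_output - real_inputs

for method in (single_deflation, double_deflation):
    growth = (method("y2") / method("y1") - 1) * 100
    print(f"{method.__name__}: real GVA growth = {growth:.1f}%")
```

With these numbers, single deflation shows real GVA contracting while double deflation shows it growing briskly: the sign of measured growth flips depending on the method, which is exactly the sensitivity the critics point to.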
In a 2015 research paper, J Dennis Rajakumar and S.L. Shetty of the Economic and Political Weekly Research Foundation (EPWRF) showed that applying double deflation significantly lowered real gross value added (GVA) in the manufacturing sector for fiscal 2013 and fiscal 2014. In a critique of that paper, the Ahmedabad-based economist and member of the monetary policy committee, Ravindra H. Dholakia showed that using a more comprehensive set of deflators bumped up GVA growth in manufacturing in fiscal 2013 even while lowering it in fiscal 2014.
Using data on input and output prices from ASI, Manna arrived at a similar conclusion that growth was higher in one year and lower in another, but his estimates showed that official figures overstated manufacturing growth in fiscal 2013 and understated it in fiscal 2014.
Even if there is no systematic bias because of single deflation, underestimating growth in one year and overestimating it in another can complicate the task of policymakers, who rely on real-time estimates to frame policies. It is not surprising therefore that the chief economist of the International Monetary Fund (IMF), Gita Gopinath, raised red flags over the way deflators are being used in India’s growth calculations and the lack of transparency around them.
The MCA-21 controversy
The most contentious aspect of the new GDP series has been the use of an untested corporate database, MCA-21, and the manner in which it has been plugged into the national accounts.
There are three main issues regarding MCA-21. The first relates to CSO’s assumption that non-reporting companies contribute positively to GVA growth, and hence a multiplier (or blow-up factor based on paid-up capital) is justified to account for the missing firms in the database. Several experts including the former chief statistician, Pronab Sen, have questioned the ‘blow-up’ methodology.
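The logic of the blow-up can be sketched as follows. This is a simplified illustration with hypothetical figures, not the CSO’s exact procedure, which is not fully public:

```python
# Simplified sketch (hypothetical numbers, not the CSO's actual procedure)
# of a paid-up-capital "blow-up" factor: GVA reported by companies that
# filed returns is scaled up to account for companies that did not file.

# Hypothetical figures (arbitrary units).
paid_up_capital_all = 500.0      # paid-up capital of all active companies
paid_up_capital_filers = 400.0   # paid-up capital of companies that filed
gva_reported = 240.0             # GVA aggregated from filed returns

# Blow-up factor: ratio of total paid-up capital to that of filers.
blow_up_factor = paid_up_capital_all / paid_up_capital_filers  # 1.25

# The scaled estimate implicitly assumes non-filers generate the same GVA
# per unit of capital as filers -- the assumption critics question, since
# many non-filers may be closed, dormant or untraceable.
gva_estimated = gva_reported * blow_up_factor
print(gva_estimated)  # 300.0
```

If a large share of the non-reporting companies are in fact defunct, as the NSSO survey results suggest, a multiplier of this kind would attribute output to firms that produce nothing.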
The second issue relates to the quality of the returns filed by companies, which according to some experts should not be directly plugged into national accounts estimation without cross-checks and validation, as is being done now (See ‘The unanswered questions in India’s GDP estimation’, Mint, 21 May 2019). Even a National Statistical Commission (NSC) committee raised red flags on these issues in its 2018 report on real sector statistics.
The third issue relates to the mis-classification problem. The use of the MCA-21 database has led to a situation where we don’t know clearly ‘how much is being produced in which sector and in which state’, in the words of Rajakumar of EPWRF.
The mis-classification of firms has important implications for economic policy. The policy response to a manufacturing slowdown can be different from a policy response to a slowdown in services. But if there is no way to tell from the data, policymakers will continue to have to rely on rough proxies and their intuition for important policy decisions.
The MCA-21 database also lacks state-wise details, which has ‘distorted’ the state-level GDP numbers, according to Dholakia, who is now heading a Mospi-appointed committee to improve the gross state domestic product (GSDP) estimates. The new series has led to huge swings in the fortunes of states.
The change has boosted the size of several state economies, easing their financing constraints even while raising such constraints for poorer states. This has created complications for the 15th Finance Commission, which is now trying to ‘reconcile’ the data, according to Finance Commission officials (See ‘How new GDP series has swung fortunes of states’, Mint, 15 May 2019).
While some of these issues can be resolved only in the next base-change exercise, greater transparency on the methodology and better data dissemination standards can help improve the credibility of the official GDP numbers. The CSO, which has now been merged with the NSSO, can learn from the latter’s dissemination policies and start releasing unit-level data for all databases used in national accounts estimation (including MCA-21) in a machine-readable format so that independent researchers can assess the quality of the data being fed into national accounts.
Nikita Kwatra from Mumbai contributed to this piece.