The real problem with productivity
When it comes to productivity, only two things are beyond dispute: that the official rate of US productivity growth has stalled since 2007, having begun to slow even before then, and that there is no consensus about why, or about what to do about it. There is also broad agreement that without stronger productivity growth, standards of living will not improve appreciably, which is likely to fuel the current wave of populist discontent.
One explanation, however, is increasingly popular even as it faces considerable scepticism among economists and policymakers: that the problem is less about productivity than about our inability to measure the effect of the digital and now data revolution that has redefined the American economy. In short, there is a growing chasm between what our economic system is and what our numbers are capable of measuring.
Take Google. Its search engine handles billions of queries a day, every one of them free to the user. The same could be said of Google Maps or Waze. While some of what these services offer adds little to collective economic output (a group chat among a gaggle of teens has no immediate economic value), a considerable share does. A navigation app reduces time spent on the road or stuck in traffic, potentially cuts the amount of fuel used, and frees up that time and money for other, possibly more productive uses.
Several years ago, Erik Brynjolfsson, a Massachusetts Institute of Technology economist, tried to measure what these “free goods of the Internet” might be adding to gross domestic product (GDP). His methods were innovative: gauging the value people assign to their time, then multiplying that by the time spent using Google and similar services. He estimated that as of 2012, such “free goods” might add $300 billion to GDP, a figure growing at roughly $40 billion a year, which would mean close to $500 billion by 2017. These were only halting first steps in what is surely a complicated and as yet unresolved effort to factor the innovations of the past decade and more into calculations of economic output and activity. In early May, the US Bureau of Economic Analysis released a paper concluding that its own measures of inflation and GDP had been unable to keep up with changes in the economy, and hence had been off by as much as half a percentage point a year.
These issues are not new, but they remain unresolved. The experience of the Federal Reserve in the 1990s is instructive. The US economy was booming, new technologies were proliferating, and yet the productivity numbers were anaemic. Then-chairman Alan Greenspan tasked the Fed’s economists with investigating. Building on Robert Solow’s 1987 observation that you could see the computer age everywhere but in the productivity statistics, the Fed began to reassess how productivity was calculated and understood. That led to greater emphasis on broader measures such as multi-factor productivity, which looks beyond labour hours and capital investment, and to a longer view recognizing that new innovations can take years to show up in official statistics.
Measured productivity did indeed begin to accelerate in the mid-1990s, as policymakers paid closer attention to these broader measures. That said, the debate today hasn’t changed much. A few voices suggest that we fail to capture the “consumer surplus” and other gains of the digital revolution, while many others, such as Robert Gordon, contend that the productivity slowdown is the product of a mature economy that is not keeping pace with societal needs. For them, the statistics, even if slightly outmoded, reflect an unarguable reality whose economic and social consequences are evident. Even those who acknowledge that productivity is underestimated tend to argue that adding back some amount for the hard-to-quantify effects of the digital revolution still wouldn’t get you back to the growth rates of the 20th century.
Perhaps. Or perhaps the mismeasurement being debated is only a fraction of the true mismeasurement. Even more, perhaps the entire framework is now flawed. Today’s hard numbers fail to account for certain observable contradictions, such as high levels of employment combined with very little wage growth and extremely low inflation. If various free or inexpensive digital services are generating adequate output without requiring much labour or capital spending, that would help explain why wage growth is subdued and capital investment low. And if those services are also making goods and services less expensive, that would in part explain why measured productivity is weak.
All this suggests that much more attention should be paid to the real possibility that productivity isn’t slowing the way we think, or that slower measured productivity no longer has the consequences it did when the economy was primarily based on making physical goods. If we emphasized quantity and quality rather than market price, the picture might look different. Governments seem unwilling to allocate resources to building a statistical system that better accounts for free services and for the way the deflationary effects of technology can simultaneously improve standards of living and lower GDP. But if we are going to understand the causes of inequality and formulate solutions, we need to start with data we can count on. Bloomberg
Zachary Karabell is the head of global strategies at Envestnet Inc.