(Aniruddha Chowdhury/Mint)

How exit polls landed on a Modi return

Every pollster has predicted an NDA win. Are exit polls a fraudulent science, or can one attach some credibility to them?

On 23 May, India will have a new government and exit pollsters will have their judgment day. The exit polls forecast anywhere from 267 seats (ABP-Nielsen) to 365 seats (India Today-Axis Poll) for the National Democratic Alliance (NDA) in the 2019 Lok Sabha elections. Such a wide variation in the range of seats and contradictory findings in many key states have created a brouhaha, with enough doubts being cast on the art of election night forecasting. The inability of pollsters to make accurate forecasts in the recent past (particularly the 2004 Lok Sabha exit poll debacle) has added to the criticism that exit polls have become an election night tamasha, nothing more than “a circus in town".

But an ordinary person can use a few basic rules of thumb to reasonably judge the quality of any exit poll. While TV channels may boast about the “biggest sample", what really matters is the representativeness of that sample and whether it mirrors the Indian electorate accurately (instead of, for example, the sample containing too many upper castes or urban dwellers). The second indicator is the direction of the various polls. Third, the vote share estimates of the top two parties and the gap between them. Last comes the polling organization’s past reputation, especially in calling close elections.

The biggest problem with the chaos of exit poll night, and the days that follow until results are announced, may be that not all polling agencies release all of the above information; only a seat count is put out. Unfortunately, TV channels and their polling agencies are in a mad rush to trumpet the largest sample sizes and the number of constituencies covered. But the quality of the data, even from a smaller sample, will always outweigh the usefulness of a huge sample size. Only Lokniti-CSDS and Cicero Associates put out a detailed methodological note; IPSOS and CVoter provide some information.

Vote-seat conversion

How does one make sense of the exit poll results for the 2019 Lok Sabha elections amid such methodological opacity? To make the seat forecast, pollsters first process the data collected through election surveys and correct for sample representativeness in case there is a skew. The use of sampling weights to correct skewness in data has an established tradition in statistics; it is not data massaging or manipulation, as it is sometimes called. Statisticians then use mathematical models to arrive at vote estimates and seat predictions. This may sound odd, but by definition, an “estimate" and a “prediction" can only be ballpark-range figures. In a recent interview, Prannoy Roy, who pioneered opinion polling in India, put this point very succinctly: “When you get something spot on, it’s bound to be a bit of fluke. The methodology doesn’t allow you to get anywhere but within 20 seats of the final result."
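The idea behind sampling weights can be sketched in a few lines. The numbers below are invented purely for illustration (no real survey reports these figures): a poll that over-samples urban voters is re-weighted so that each group counts in proportion to its actual share of the electorate.

```python
# Illustrative post-stratification weighting. All shares and support
# figures below are hypothetical, chosen only to show the mechanics.

# Known composition of the electorate vs. composition of the sample
population_share = {"urban": 0.34, "rural": 0.66}
sample_share = {"urban": 0.50, "rural": 0.50}  # urban voters over-sampled

# Weight for each group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical support for a party within each group
party_support = {"urban": 0.45, "rural": 0.38}

# Unweighted estimate reflects the skewed sample
raw = sum(sample_share[g] * party_support[g] for g in sample_share)

# Weighted estimate restores the electorate's true urban-rural mix
weighted = sum(sample_share[g] * weights[g] * party_support[g] for g in sample_share)

print(f"unweighted: {raw:.4f}, weighted: {weighted:.4f}")
# unweighted: 0.4150, weighted: 0.4038
```

Because urban voters here favour the party more, the raw estimate overstates its support; the weights pull the estimate back towards the electorate's actual mix. Real pollsters weight on many more dimensions (caste, gender, region), but the arithmetic is the same.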

The whole process of generating data from the field is long, tiresome, cost-intensive, and often messy. This is not something that is immediately apparent to an average person when the whole exercise is boiled down on TV screens to a few numbers. Also, news anchors do not want to lecture viewers about “margin of error" and how the change of a few percentage points, which often happens, will turn the results upside down.
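What the margin of error means in practice can be shown with a small, hypothetical calculation (the sample size and vote shares below are assumptions, not from any actual poll): even with 5,000 interviews, a two-point lead can fall inside the noise.

```python
import math

# Illustrative only: hypothetical vote shares and sample size, showing
# why a small lead can be statistically ambiguous in a survey.

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

n = 5000                 # a fairly large sample by polling standards
p_a, p_b = 0.40, 0.38    # hypothetical shares of the top two parties

moe_a = margin_of_error(p_a, n)
moe_b = margin_of_error(p_b, n)

# If the two confidence intervals overlap, the lead is not statistically clear
overlap = (p_a - moe_a) < (p_b + moe_b)
print(f"A: 40% +/- {moe_a:.1%}, B: 38% +/- {moe_b:.1%}, overlap: {overlap}")
```

Each margin here works out to roughly 1.4 percentage points, so the intervals overlap and the two-point gap cannot be called decisively. This is the "few percentage points" problem the anchors skip over.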

To be fair, pollsters in India often get the direction of the winner right, but they fail to come close on seat estimates. The reason for going wrong on the seat count, which could happen in this election too, is that it is hard to predict the number of seats correctly even after getting the vote shares right. For example, the Congress won 19.5% of votes in 2014 but could win only 44 seats, while the Bharatiya Janata Party (BJP) won 116 seats in 2009 with an 18.8% vote share (see chart 1a). In some cases, parties with lower vote shares manage to win more seats due to a concentration of their votes in specific regions (see chart 1b). Also, political parties in India win and lose by a few percentage points. In such a scenario, even with fairly large sample surveys, once one accounts for the margin of error, the numbers are often statistically indistinguishable.
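The vote-concentration effect under first-past-the-post can be demonstrated with a toy example (all constituency figures below are invented for illustration): a party with fewer total votes wins more seats because its votes are bunched where they count.

```python
# Hypothetical first-past-the-post arithmetic: Party B polls fewer votes
# overall than Party A, yet wins more seats because its support is
# concentrated. All numbers are invented for illustration.

# (votes for A, votes for B) in five constituencies of 100 voters each
results = [
    (55, 45),  # A wins comfortably where its vote is spread
    (55, 45),
    (48, 52),  # B wins narrowly where its vote is concentrated
    (48, 52),
    (48, 52),
]

total_a = sum(a for a, _ in results)            # 254 votes
total_b = sum(b for _, b in results)            # 246 votes
seats_a = sum(1 for a, b in results if a > b)   # 2 seats
seats_b = sum(1 for a, b in results if b > a)   # 3 seats

print(f"A: {total_a} votes, {seats_a} seats; B: {total_b} votes, {seats_b} seats")
# A: 254 votes, 2 seats; B: 246 votes, 3 seats
```

This is why a pollster can nail the vote shares and still miss the seat count: converting votes to seats requires knowing where each party's votes sit, constituency by constituency.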

Past performance

Interestingly, for the first time, most polling agencies have published their internal vote share estimates in 2019. But how did they perform in the various assembly elections conducted between 2014 and 2019 for which they put out seat projections? (see chart 2).

The matrix suggests that pollsters in India have done a fairly decent job of getting the winner right, even if not the exact seat count, except when the vote share gap between the top two parties was extremely narrow (for example, the Rajasthan assembly elections of 2018). Their performance is also rather poor in predicting outliers (such as the Aam Aadmi Party winning 67 of 70 seats in Delhi).

Landslides like Delhi 2015 are outliers, and no pollster is likely to make that kind of seat estimate even when their vote estimates point in that direction. Pollsters in India are generally risk-averse and have remained conservative in their seat estimates. There is a good reason for this: polling in India became common only in the 1990s, in the era of fractured mandates. The 2014 results, and a series of assembly elections in states where single-party majorities are common, have broken that psychological glass ceiling. This is evident in the 2019 exit poll forecasts, as many pollsters have gone bullish on the possible outcome.

Contrary to the popular perception that exit polls are often wrong, Prannoy Roy and Dorab R. Sopariwala, in their book The Verdict: Decoding India’s Elections, analyse 833 opinion and exit polls conducted in India since 1980 and show that exit polls have a higher success rate than opinion polls (pre-poll surveys), and that both are much better at predicting the outcome of Lok Sabha elections than state elections. In fact, exit polls of reputed organizations have a very high rate of getting the winner right. Despite their limitations, and there are many, polls conducted by reputed organizations are far better at capturing the broad trend of an election than drawing room discussions, gossip among election travellers, and hearsay from politically motivated citizens.

Philip Oldenburg, in a very insightful paper, Pollsters, Pundits and a Mandate to Rule: Interpreting India’s 1984 Parliamentary Elections, explains why nearly all political commentators missed the 1984 Rajiv Gandhi wave. He argues that journalists, “knowledgeable persons" and political leaders form a triad that builds a common-sense view of the election outcome. They keep speaking to one another and reinforcing each other’s reading. This common-sense wisdom is then stitched together by political analysts and commentators, and transmitted through the media. A pollster bypasses this triad and speaks directly to the voter, and thus has a better chance of getting to the truth.

The silent voter

For long, polls overestimated the BJP’s vote share and underestimated parties like the Bahujan Samaj Party (BSP). This was because BJP voters were typically more forthcoming in talking to interviewers, whereas BSP voters were apprehensive (or were less likely to be approached by interviewers, as most of them were from the marginalized sections of Indian society). This in-built bias gave rise to the famed theory of “silent voters" overturning elections, which has acquired some currency in this election season too. Election surveys in India continue to make many mistakes, but they seem to have overcome this particular “silent voter" bias at least. For the last few elections, they have been correct in estimating the BSP’s vote share in Uttar Pradesh, and it is time the proponents of the “silent voter" theory revisited this claim. Pollsters are also aware that voters often lie about their voting preference in polls. Many polling agencies have developed sophisticated tools to win voters’ confidence in recording their preference, and also to detect lies. Polls, however, still underestimate minor parties (those that win 2-5% of votes at the state level). A good pollster accounts for this discrepancy while making final estimates.

On 23 May, some pollsters will be proved right and a few may get it horribly wrong. However, as most polls have converged on the direction and the eventual winner, it is implausible that the actual results will be a complete reversal of the forecasts.

The author is a fellow at the Centre for Policy Research and has extensive experience in working with election surveys.