How to read our exit polls right

As state assembly elections come to a close, all eyes will be on exit polls conducted by different pollsters. If any pollster’s seat-victory prediction is broadly in line with the 10 March verdict, that pollster will be the toast of the day. The rest will be damned. If none of them gets it right, critics will immediately question the utility of conducting and broadcasting such polls. Some may even call for an outright ban on such surveys.

Such extreme reactions are unfortunate, and many of the attacks on pollsters are unfair. But pollsters have only themselves to blame for this state of affairs. Other than Lokniti-CSDS—which is a network of political scientists rather than a conventional polling agency—no pollster provides details of its methodology. How they conduct their surveys, what the margins of error for their vote-share estimates are, and the assumptions made while converting vote-share estimates into seat predictions all remain hidden. This opacity around methodology breeds suspicion. The lack of disclosure around their funding only serves to heighten that suspicion.
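For context on what such disclosure would reveal: for a simple random sample, the margin of error of a vote-share estimate follows a standard textbook formula. The sketch below is purely illustrative—real surveys use clustered, multi-stage designs whose effective errors are larger, which is exactly why pollsters' undisclosed design choices matter.

```python
import math

def margin_of_error(vote_share: float, sample_size: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a vote-share estimate from a
    simple random sample. Illustrative only: actual polls use clustered
    designs, so their true margins of error are wider than this."""
    return z * math.sqrt(vote_share * (1 - vote_share) / sample_size)

# A 40% vote-share estimate from 2,000 respondents comes with
# an uncertainty of roughly +/- 2.1 percentage points.
print(round(margin_of_error(0.40, 2000) * 100, 1))
```

Even this best-case figure is often larger than the vote-share gap separating winners from losers in close state elections.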

“The lack of polling transparency affects first and foremost pollsters who do their job with integrity, because it feeds the general perception that polling results can be easily bought or manufactured to suit political parties or other parties’ interests,” said Gilles Verniers, co-director of the Trivedi Centre for Political Data, Ashoka University, over email. “It creates a blanket of deniability that protects bad players.”

Opinion polls can go wrong even in countries with a long tradition of polling. In the UK, pollsters underestimated support for the Conservative Party in 2015, and then for Brexit in 2016. In the US, pollsters underestimated support for Donald Trump in two successive presidential races (2016 and 2020). They were roundly criticized in both countries, but there were no calls there for banning such polls. In the case of the UK and US, wrong forecasts are seen as errors. In India, a wrong forecast by a pollster is seen as evidence of fraud or a scam.

Apart from greater transparency, it is the presence of self-regulatory institutions that lends greater credibility to British and American pollsters. The British Polling Council (BPC) in the UK and the American Association for Public Opinion Research (AAPOR) in the US both launched post-mortems to investigate the reasons behind recent poll debacles. These institutions facilitate research on survey methods, lay down survey norms from time to time, and get experts to review polling methods.

In India, there was talk of setting up such a self-regulatory body in the aftermath of the Bihar 2015 assembly election, when pollsters faced flak for completely misreading the voter mood. But nothing came of it.

American journalist Eric Sevareid once compared American election reporters to alcoholics, saying that their vow to improve on the reporting of the last election is all but forgotten when the wine of a new campaign touches their lips. Much the same could be said about Indian pollsters.

The opacity of polling would not have mattered so much if opinion polls were getting better with time. But the accuracy of the average poll has not improved much. In a 2019 article for this newspaper, political scientist Rahul Verma presented some evidence to show that pollsters are getting better at predicting state election results. But Verma’s analysis was skewed by one big outlier, Axis My India Ltd, which has got most elections since 2015 right. Excluding Axis, the picture is not that bright, as Verma acknowledges.

Verma also points out that it is difficult to explain why Axis is an outlier, given that it shares so little about its sampling strategy. All we know about Axis is that it tries to draw a large representative sample for each constituency (while others do that for a state) and hence it is able to arrive at seat-shares directly from its sample (while others need to go through messy modelling to convert state-wise vote-shares to seat shares). This strategy seems to be working so far. But if it were to get its forecasts wrong next time, we would have no way of knowing what could have gone wrong.
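The “messy modelling” other pollsters must do has a textbook baseline: the uniform-swing method, which adds the projected state-wide swing to a party’s previous margin in every constituency and counts the seats that flip. Whether any Indian pollster actually uses this method is not disclosed; the sketch below, with hypothetical numbers, only illustrates why seat predictions built this way are fragile when many seats sit near the zero-margin line.

```python
def uniform_swing_seats(prev_margins, projected_swing):
    """Count seats a party wins under the uniform-swing assumption:
    the projected state-wide swing (in percentage points) is added to
    the party's previous margin of victory (negative = defeat) in
    every constituency."""
    return sum(1 for margin in prev_margins if margin + projected_swing > 0)

# Hypothetical five-seat state: the party narrowly lost two seats last time.
margins = [5.0, -1.5, -0.8, -6.0, 2.2]
print(uniform_swing_seats(margins, 0.0))  # no swing: holds 2 seats
print(uniform_swing_seats(margins, 2.0))  # a 2-point swing flips 2 more: 4 seats
```

Note how a swing well within a typical poll’s margin of error doubles the seat count—one reason vote-share estimates can be roughly right while seat predictions go badly wrong.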

Although pollsters don’t reveal as much as required to evaluate their work properly, their work still has value. Every pollster cares for its reputation, and hence tries to survey as representative a sample as its budget permits. Their findings carry more weight than newsroom or drawing-room speculation, as Yogendra Yadav once put it. Since the 1980s, exit polls have had a better record than pre-poll surveys in India, and they have been largely correct in predicting election winners, as Prannoy Roy and Dorab R. Sopariwala wrote in their 2019 book, The Verdict: Decoding India’s Elections.

As with any other statistical estimate, exit-poll results are subject to uncertainty. So they may be wrong for any given election, especially if there is a very close contest. On average, they may still be quite useful. A few basic thumb rules may help navigate poll results. One, if all of the exit polls for a particular state project the same party as the winner and there is a large vote-share gap between the projected winner and the runner-up, you may want to take that prediction seriously. Two, if pollsters are divided, you may want to consider their past record and level of transparency to sift through their forecasts. Three, if most polls suggest that the vote-share difference between the two main parties is very low, all bets are off.
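The three thumb rules above can be sketched as a simple decision function. The numeric thresholds are assumptions for illustration—the column deliberately does not specify what counts as a “large” or “very low” gap.

```python
def read_exit_polls(projected_winners, winner_gap_pct,
                    large_gap=5.0, very_low_gap=2.0):
    """Apply the three thumb rules to a state's exit polls.
    projected_winners: the party each poll projects as winner.
    winner_gap_pct: average projected vote-share gap (percentage
    points) between the top two parties. Thresholds are hypothetical."""
    if winner_gap_pct < very_low_gap:
        return "all bets are off"                      # rule three
    if len(set(projected_winners)) == 1:
        if winner_gap_pct >= large_gap:
            return "take the prediction seriously"     # rule one
        return "unanimous but narrow: be cautious"
    return "weigh track record and transparency"       # rule two

print(read_exit_polls(["A", "A", "A"], 8.0))
```

The point of the sketch is only that the rules compose in a fixed order: a very close projected contest overrides everything else, and unanimity matters only alongside a sizeable gap.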

Pramit Bhattacharya is a Chennai-based journalist. His Twitter handle is pramit_b
