Here’s an exercise from your slightly advanced statistics book. You toss a coin 11 times and you get nine heads. Probability theory tells us that we can say with more than 95% confidence that this is a biased coin. If you live in India and have access to the Internet, newspapers or to television you know where we are going with this. That’s right, the Sydney Cricket Ground.
We woke up early for five mornings, watched absorbing cricket and saw the umpires make mistake after mistake. Cricket pundits and players from Australia told us what a magnificent, even epic, match it was. When we said the result was severely compromised by atrocious umpiring, we were told these were human errors. Then came a statement from the International Cricket Council (ICC) that the umpires had had a poor day and that Steve Bucknor would stand in the third Test as well. It took a lot of protest from India for ICC to stand him down. Even then, ICC emphasized that umpire bias was not an issue.
The point of this tirade? Why is it so tough for ICC and, to a lesser extent, the Australian cricket establishment (there are suggestions they are the same thing) to agree that there is a prima facie case that the umpires were biased and that the issue needs to be examined further?
Was it just incompetence, or was it bias? The umpires made a large number of decisions, virtually on every appeal. They got an overwhelming number of these correct: all but 11. Assuming no bias, each team should have had an equal chance of being the unfortunate one on each of the wrong calls. But India suffered nine of the 11. If there were no bias, the chance of a split that lopsided is less than 5%.
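The 5% figure can be checked directly. Under the no-bias assumption, each wrong call hits either team with probability one-half, so the chance that one named team suffers nine or more of 11 wrong calls is a one-tailed binomial probability. A minimal sketch, using only Python's standard library:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that, of 11 wrong calls distributed 50-50 at random,
# one named team receives 9 or more of them.
p_value = prob_at_least(9, 11)
print(f"{p_value:.4f}")  # 0.0327, i.e. about 3.3%, under 5%
```

The arithmetic is just (C(11,9) + C(11,10) + C(11,11)) / 2^11 = 67/2048, which is where the "more than 95% confidence" claim about the coin comes from.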
Mind you, a bias need not come from illegal monetary inducement; it could come from incidents or influences in a person's background. We have been cheated by autorickshaw drivers in New Delhi, and we think that has biased us against people of New Delhi in general. The rational part of our mind tells us that we have encountered a small sample of drivers, and that letting these incidents colour our perception of the average Delhi person is not correct. However, we are aware that we carry the bias. We would not be the right people to umpire a Delhi-Mumbai cricket match, for instance.
Going back to the coin: what would you do to determine whether the coin was really biased, assuming it was important to know for sure? You would toss it again. Or you would look at what happened when the coin was tossed in the past, and check whether this run of 11 tosses was an isolated case amid a record of 100 others. You would examine the coin's track record.
Unfortunately, the coin analogy can only take us so far.
For one, there were three coins, er... umpires, at play. Second, getting a decision wrong is not a 50-50 proposition. Surely the probability of getting the Rahul Dravid decision wrong, or of the third umpire botching the stumping call, is less than 5%? Even for the least controversial of the bad decisions, Wasim Jaffer given out off a no-ball the umpire did not spot, the probability of getting it wrong cannot be 50%; if it were, that would raise serious questions about the basic competence of the umpires.
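The analogy is indeed imperfect, but the key step survives it: whatever the per-decision error rate, once we condition on 11 wrong calls having occurred, an unbiased process should still split them roughly evenly between the teams. A hedged simulation sketch illustrates this (the 100 appeal-like decisions per match and the 11% error rate are illustrative assumptions, not the article's data):

```python
import random

random.seed(1)

def lopsided_split_rate(n_matches=50_000, decisions=100, err_p=0.11):
    """Among simulated matches that produced exactly 11 wrong calls,
    count how often one named team received 9 or more of them,
    assuming each wrong call hits either team with probability 1/2."""
    hits = total = 0
    for _ in range(n_matches):
        # Each decision is wrong with probability err_p (assumed value).
        errors = sum(1 for _ in range(decisions) if random.random() < err_p)
        if errors != 11:
            continue  # condition on exactly 11 wrong calls
        # Each wrong call goes against the named team with probability 1/2.
        against_one_team = sum(1 for _ in range(errors) if random.random() < 0.5)
        total += 1
        if against_one_team >= 9:
            hits += 1
    return hits / total

print(lopsided_split_rate())  # a proportion close to the 3.3% binomial figure
```

The per-decision error rate drops out of the conditional calculation, which is why the under-5% conclusion does not depend on knowing how hard each individual decision was.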
Unfortunately, it is beyond the authors to devise a statistical test to spot umpiring bias in the time it took to knock out this article. And we definitely don't have the data for it. But isn't this what ICC is supposed to do? Check whether the umpires are incompetent and/or biased, and try to improve them.
Saying that the umpires “had a bad day” is an insult to any thinking person. We guess in ICC’s code of conduct such insults are fine as long as they are not racist.
Yogesh Upadhyaya and M.R. Madhavan are cricket fans who believe that debates should be based on facts. Comment at email@example.com