The hits and misses of using artificial intelligence for recruitment

  • Facial recognition is another problem that has been solved and has wide applications that touch everyday lives, including unlocking one’s smartphone

Artificial intelligence (AI) has been occupying some of the best minds of this century, but the hype around it is just as massive. AI is entering everyday lives and products, and many of us find ourselves in positions where we need to evaluate the genuineness of claims of using AI. If we can't separate the hype from the truth, we'll end up spending money on fake products and services.

Over the years that I have spent with startups, I’ve come across both genuine and fake AI products. I’ll start with the ones that truly solved problems using AI.

A few years ago, one of the co-founders of Liv.ai, a Bengaluru-based AI startup, met me and demonstrated their product, which used natural language processing to convert speech to text in multiple Indian languages. I had always known that text to speech was easy, but converting speech to text in multiple languages was a hard problem to solve. I was a bit sceptical at first, but when I saw the product, I was quite blown away. Before I could think of recommending it to someone who would see this startup as a great investment opportunity, Flipkart acquired it and built a shopping assistant, Saathi, with a text and voice interface to support shoppers in smaller towns.

Facial recognition is another problem that has been solved and has wide applications that touch everyday lives, including unlocking one’s smartphone.

Now I come to what I call fake products riding the AI wave. A vendor once approached us claiming their product could predict criminal tendencies in an individual with an accuracy of 60%, and suggested we use the tool to screen our delivery boys. When we dug deeper and factored in additional data on the prevalence of criminal tendencies in society, we found the accuracy of the test dropped from 60% to 5%. Do you need anything more to decide whether you should pay this vendor and run all your new hires through a test like this?

Another AI vendor once bragged that their tool could look at a job description, evaluate 100 CVs and pick the five best suited for the job. When we asked how, they resorted to jargon: "We use a deep learning algorithm". When we tested the tool on 100-odd CVs, its shortlist of five had zero overlap with the one drawn up by a good recruiter and a hiring manager with years of experience. Claims like these, and tools built on poor tech, are what give AI a bad name in India. Arvind Narayanan, a computer science professor at Princeton University, puts it more succinctly: "Much of what's being sold as 'AI' today is snake oil — it does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?"
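The collapse from 60% to 5% is consistent with the base-rate fallacy: when the trait being predicted is rare in the population, even a moderately accurate test flags mostly false positives. Here is a minimal sketch of that arithmetic using hypothetical numbers (the vendor's actual sensitivity, specificity and assumed prevalence were not disclosed; a "60% accurate" test and a 3% prevalence are my illustrative assumptions):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a person flagged by the test actually has the trait (Bayes' rule)."""
    true_positives = prevalence * sensitivity           # rare trait, correctly flagged
    false_positives = (1 - prevalence) * (1 - specificity)  # common non-trait, wrongly flagged
    return true_positives / (true_positives + false_positives)

# Hypothetical: a test with 60% sensitivity and 60% specificity,
# applied where only ~3% of people actually have the trait.
ppv = positive_predictive_value(prevalence=0.03, sensitivity=0.60, specificity=0.60)
print(f"{ppv:.1%}")  # roughly 4-5%: most people the test flags are false positives
```

With these illustrative numbers the false positives (0.97 × 0.40 of the population) swamp the true positives (0.03 × 0.60), so the headline "60% accuracy" turns into a single-digit chance that a flagged candidate actually has the predicted tendency.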

Prof. Narayanan has classified AI applications into three broad buckets: areas where AI is genuine and making rapid progress, such as facial recognition, medical diagnosis and reverse image search; areas that are imperfect but improving, such as detecting spam, hate speech and copyright violations; and fundamentally dubious areas, such as predicting job success, recidivism or at-risk children.

The last category, which is about predicting social outcomes, is essentially the snake oil being sold to gullible users, and often a pretext for collecting large amounts of data. Prof. Narayanan writes that there has been no real improvement in this category, no matter how much data you throw at it. He adds that for predicting social outcomes, AI does worse than manual scoring using a few features.

Using AI to predict human and social behaviour will always be flawed because humans aren’t all that predictable. They’re individualistic, and their behaviours can’t always be reduced to data points.

The proponents of predicting social outcomes will no doubt claim it is only a matter of time before AI gets better. I believe this is untrue. Some things can get better with time, but some ideas have inherent limitations.

T.N. Hari is head of human resources at Bigbasket.com and adviser to several venture capital firms and startups. He is the co-author of Saying No To Jugaad: The Making Of BigBasket.
