Facebook’s AI put ‘primates’ label on videos depicting black men

This isn’t the first time an AI algorithm from Facebook has been found faltering, nor is it the first time that AI algorithms have been found to have racial biases.

  • The company called it an unacceptable error and said it knows its AI is not perfect and there is still progress to make

NEW DELHI: Social media giant Facebook’s artificial intelligence (AI) algorithms have put the company in a tough spot again. According to a report by The New York Times, an AI-based recommendation tool asked users watching a video featuring Black men whether they wanted to “keep seeing videos about Primate". The company has since disabled the feature and apologised for the error.

The company called it an “unacceptable error" and said it knows its AI is “not perfect" and there’s still “progress to make". “We apologize to anyone who may have seen these offensive recommendations," a spokesperson for the company said. The video in question was posted by British tabloid The Daily Mail and showed disputes between white police officers, civilians and Black men.

However, this isn’t the first time an AI algorithm from Facebook has been found faltering, nor is it the first time that AI algorithms have been found to have racial biases. In July this year, the company formed a new “equity and inclusion team" tasked with examining how its algorithms affect minority users, including Black and Hispanic users, among others.

Racial bias is also one of the chief arguments against the use of AI in facial recognition systems deployed by various police forces, government agencies and others. In December 2019, a study by the National Institute of Standards and Technology (NIST) found that such algorithms had “higher rates of false positives" for Asian and African American individuals in one-to-one matching, as compared to Caucasians. The study found that these differentials could vary by factors of 10 to 100 from one algorithm to the next.

Google, one of the biggest players in the world of AI, has also faced pushback from researchers, academics and even users about biases in AI. In December last year, the company fired noted AI ethics researcher Timnit Gebru after she pointed out flaws in some of its most advanced AI algorithms. Whereas Facebook’s issue involved a video recommendation tool, Gebru’s criticism concerned Google’s language models.
