
A quest for software that listens for lies

First published: Mon, Dec 05 2011, 12:43 AM IST

She looks as innocuous as Miss Marple, Agatha Christie’s famous detective.
But also like Miss Marple, Julia Hirschberg, a professor of computer science at Columbia University, may spell trouble for a lot of liars.
Novel concept: Dan Jurafsky of Stanford is among those who have been teaching computers how to spot the patterns of emotional speech—the kind that reflect deception, anger, friendliness and even flirtation. Photo: Linda A. Cicero/The New York Times
That’s because Hirschberg is teaching computers how to spot deception—programming them to parse people’s speech for patterns that gauge whether they are being honest.
For this sort of lie detection, there’s no need to strap anyone into a machine. The person’s speech provides all the cues—loudness, changes in pitch, pauses between words, ums and ahs, nervous laughs and dozens of other tiny signs that can suggest a lie.
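Several of those cues can be estimated directly from a recording. The snippet below is a minimal sketch in Python, assuming the open-source librosa audio library; the file name and thresholds are hypothetical, and it illustrates the general idea rather than any lab’s actual pipeline.

# Minimal sketch: estimating a few of the vocal cues mentioned above
# (loudness, pitch, pauses) from a recording. Assumes the open-source
# librosa library; "interview.wav" and the thresholds are hypothetical.
import librosa
import numpy as np

y, sr = librosa.load("interview.wav", sr=16000)

# Loudness: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Pitch: fundamental-frequency track (YIN estimator), in Hz.
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)

# Pauses: gaps between non-silent intervals, in seconds.
voiced = librosa.effects.split(y, top_db=30)
pauses = [(start - prev_end) / sr
          for (_, prev_end), (start, _) in zip(voiced[:-1], voiced[1:])]

# Summary features of the kind a deception classifier might use.
features = {
    "mean_loudness": float(np.mean(rms)),
    "pitch_variability": float(np.nanstd(f0)),
    "mean_pause": float(np.mean(pauses)) if pauses else 0.0,
}
print(features)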
Hirschberg is not the only researcher using algorithms to trawl our utterances for evidence of our inner lives. A small band of linguists, engineers and computer scientists, among others, are busy training computers to recognize hallmarks of what they call emotional speech—talk that reflects deception, anger, friendliness and even flirtation.
Programmes that succeed at spotting these submerged emotions may someday have many practical uses: software that suggests when chief executives at public conferences may be straying from the truth; programmes at call centres that alert operators to irate customers on the line; or software at computerised matchmaking services that adds descriptors like “friendly” to the usual ones like “single” and “female”.
The technology is becoming more accurate as labs share new building blocks, said Dan Jurafsky, a professor at Stanford whose research focuses on the understanding of language by both machines and humans. Recently, Jurafsky has been studying the language that people use in four-minute speed-dating sessions, analysing it for qualities such as friendliness and flirtatiousness. He is a winner of a MacArthur Foundation fellowship commonly called a “genius” award, and a coauthor of the textbook “Speech and Language Processing”.
“The scientific goal is to understand how our emotions are reflected in our speech,” Jurafsky said. “The engineering goal is to build better systems that understand these emotions.”
The programmes these researchers are developing aren’t likely to be used as evidence in a court of law. After all, even the use of polygraphs is highly contentious. But the new programmes are already doing better than people at some kinds of mind-reading.
Algorithms developed by Hirschberg and colleagues have been able to spot a liar 70% of the time in test situations, while people confronted with the same evidence had only 57% accuracy, Hirschberg said. The algorithms are based on an analysis of the ways people spoke in a research project when they lied or told the truth.
In interviews, for example, the participants were asked to press one pedal when they were lying about an activity and another when telling the truth. Afterward, the recordings were analysed for vocal features that might signal the deception.
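The learning step itself follows a standard recipe: represent each recording as a vector of vocal features and fit a classifier to the truth/lie labels. The sketch below illustrates that recipe in Python, assuming the scikit-learn library and random stand-in numbers in place of real recordings; it is not Hirschberg’s actual system.

# Minimal sketch of the training step: fit a classifier on vocal-feature
# vectors labelled truthful (0) or deceptive (1). Assumes scikit-learn;
# the feature matrix here is random stand-in data, not real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))     # e.g. loudness, pitch variability, pauses
y = rng.integers(0, 2, size=200)  # 0 = truth, 1 = lie (from the pedals)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.0%}")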
To continue the research, Hirschberg and two colleagues recently received a grant of nearly $1.5 million from the Air Force to develop algorithms that analyse speakers of English, Arabic and Mandarin Chinese.
Shrikanth Narayanan, an engineering professor at the University of Southern California who also uses computer methods to analyse emotional speech, notes that some aspects of irate language are easy to spot. In marital counselling arguments, for instance, the word “you” is a lot more common than “I” when spouses blame each other for problems.
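That particular cue is simple enough to compute in a few lines. The sketch below, in Python, counts pronoun rates in a transcript; the sample sentence is invented for illustration, and real systems combine such counts with many other features.

# Minimal sketch of the "you" vs. "I" cue: compare pronoun rates in a
# transcript. The sample sentence is invented for illustration.
import re
from collections import Counter

transcript = "You never listen, and you always blame me when I try to help."
words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(words)

total = len(words)
print(f"'you' rate: {counts['you'] / total:.0%}, "
      f"'i' rate: {counts['i'] / total:.0%}")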
But homing in on the finer signs of emotions is tougher. “We are constantly trying to calculate pitch very accurately” to capture minute variations, he said.
His mathematical techniques use hundreds of cues from pitch, timing and intensity to distinguish between patterns of angry and non-angry speech.
His lab has also found ways to use vocal cues to spot inebriation, though it hasn’t yet had luck in making its computers detect humour—a hard task for the machines, he said.
Elsewhere, Eileen Fitzpatrick, a professor of linguistics at Montclair State University in New Jersey, and her colleague Joan Bachenko are using computers to automatically spot clusters of words and phrases that may signal deception. In their research, they have been drawing on statements in court cases that were later shown to be lies.
David F. Larcker, an accounting professor at the Stanford Graduate School of Business, audited a course in computer linguistics taught by Jurafsky and then applied its methods to analyse the words of financial executives who made statements that were later disproved.
These executives were, it turned out, big users of “clearly”, “very clearly” and other terms that Joseph Williams, the late University of Chicago professor who wrote the textbook “Style”, branded as “trust me, stupid” words.
Larcker says he thinks computer linguistics may also be useful for shareholders and analysts, helping them mitigate risk by analysing executives’ words.
“From a portfolio manager’s perspective looking at 60 to 80 stocks, maybe such software could lead to some smart pruning,” he said. “It’s a practical thing. In this environment, with people a bit queasy about investments, it could be a valuable tool.”
©2011/The New York Times
feedback@livemint.com