Watch list: Disruptive tech from the world over

Can driverless cars overtake humans?

Short answer: If we allow them to.

Recently, Elon Musk, chief executive of Tesla Motors and SpaceX and chairman of SolarCity, used Twitter to reach out to software engineers to ramp up Tesla’s Autopilot software team. “Should mention that I will be interviewing people personally and Autopilot reports directly to me. This is a super high priority,” one of his tweets read. Autopilot is Tesla’s self-driving feature. And it’s a super high priority for a lot of executives at other companies too—legacy automobile makers such as Audi, BMW and GM; those attacking the market from the side, such as Google and Uber; and tech and automobile companies that are joining hands, like Microsoft and Volvo.

Google cars have done 1.2 million miles of autonomous driving (equivalent to about 90 years of driving experience, at a typical 13,000-odd miles a year) without once getting a ticket. But a Google car was pulled over recently for driving too slowly, going at 24 miles per hour (mph) in a 35 mph zone. From a public perspective, this might well go down as only being human, and a nudge towards greater acceptance.

But how soon will that be? Mark Fields, CEO of Ford (which, incidentally, is using a 32-acre faux town in Michigan to get unlimited time to test-drive its cars), is probably among the most optimistic about self-driving cars. He says they will hit American roads in four years. Fields might be underestimating the regulatory hurdles, though. Earlier, Musk said clearing these hurdles could take anywhere between one and five years.

Some resistance will come from those who are directly affected—cab drivers. Uber is already thinking about giving its drivers vocational training for other jobs. But it’s possible that the technology, despite Google’s 1.2 million miles, has not yet come to that bridge. Self-driving cars are yet to learn the hundreds of signals that human beings—car drivers, pedestrians, cyclists, motorcyclists—send, receive and interpret, and that make driving both safe and accepted. It’s true that technology is advancing at an exponential rate in these areas, but the question of perception still remains.

As Fumihiko Ike, chairman of Honda, told ‘The Japan Times’ recently: “Human intelligence has no equal for working out what is happening on the road, so I think fundamentally it won’t be easy to leave it to the machine except in very restricted conditions such as motorways or specific routes.” It’s worth remembering that in technology, advances happen in small steps, and it can seem like nothing is happening until we find ourselves in a different world.

Should we be worried about cryptocurrencies?

Short answer: Even if it’s not big yet, yes.

Soon after the recent terrorist attacks in Paris, there were reports of how terrorists might have used encryption—easy and cheap to access these days—to communicate. It turned out they were using good old SMS. Now, attention has turned to cryptocurrencies. ‘Reuters’ reported that European Union countries plan to go after virtual currencies and anonymous payments made online to curb the flow of funds to terrorist organizations.

These fears are hardly new. There are reasons why cryptocurrencies are used for shady activities—buying drugs, gambling, funding terrorism. They can be transacted anonymously; they are not restricted by national borders—a bitcoin is as valid in Europe as it is in America; they can be transferred as instantly as cash; and, of course, they are low-cost and easy to use.

That doesn’t, of course, mean ISIS and other terror organizations have been using them extensively. Ghost Security Group, an antiterrorism hacker group, says ISIS does have bitcoins, but that most of its funds come from “traditional” sources—oil sales, kidnapping, extortion. Nor does it mean cryptocurrencies will turn out to be a major source of their future funding. What it does highlight, however, is that there is always a trade-off between the comfort a new technology brings to millions and the security issues it opens up, because it provides the same comforts to those with bad intentions.

Should we be worried about Artificial Intelligence (AI)?

Short answer: Definitely.

“When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”

So said physicist J. Robert Oppenheimer, speaking about the atomic bomb. He was quoted in a long ‘New Yorker’ profile of Nick Bostrom, author of ‘Superintelligence: Paths, Dangers, Strategies’, and the Oxford professor who runs the Future of Humanity Institute. This institute studies large-scale risks to human civilization and is one of the recipients of Musk’s funding aimed at understanding and preventing risks from AI. The title of the profile: ‘The Doomsday Invention’.

AI is progressing fast. Matthew Lai, a master’s student at Imperial College London, recently created an AI machine that has taught itself to play chess at the International Master level. And the time it took to do this: 72 hours.

An AI program developed by Japan’s National Institute of Informatics has passed a college entrance exam, scoring above the national average, though not high enough for Japan’s top university, the University of Tokyo. Its developers say it will get there by 2021.

To top it all, Los Angeles-based AI firm Humai is aiming to resurrect human beings within the next 30 years. These words from its website could well be out of science fiction: “We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioural patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human. Using cloning technology, we will restore the brain as it matures.”

Realistically speaking, what are the chances that AI will turn out to be all that its fans and critics say it would be? ‘The New Yorker’ profile quotes Richard Sutton, a Canadian computer scientist: “There is a ten-per-cent chance that A.I. will never be achieved, but a twenty-five-per-cent chance that it will arrive by 2030. The median response in Bostrom’s poll gives a fifty-fifty chance that human-level A.I. would be attained by 2050.” For many of us, within our lifetimes, that is.

Read an unabridged version on foundingfuel.com
