Teaching smart robots to hold a cup properly

Mumbai: If research firm Gartner Inc. gets its forecast right, by 2018, more than 3 million workers globally will be supervised by a “robo boss" who will increasingly make decisions that previously could only have been made by a human manager.

Indeed, robots are getting smarter and today can perform surgery, carry out space missions, solve a Rubik’s cube, help senior citizens, clean your rooms, and even make pancakes.

A Chinese robot, according to a 6 May report by the Press Trust of India, is set to use artificial intelligence to compete with grade 12 students during the country’s national college entrance examination next year and get a score qualifying it to enter first-class universities.

In July 2014, scientists at Cornell University led by Ashutosh Saxena developed Robo Brain—a large computational system that learns from publicly available Internet resources.

And since January 2010, scientists at Carnegie Mellon University (CMU) have been working to build a never-ending machine learning system that acquires the ability to extract structured information from unstructured Web pages. If successful, the scientists say it will result in a knowledge base (or relational database) of structured information that mirrors the content of the Web. They call this system the never-ending language learner, or NELL.

Another case in point is Google Inc.’s driverless cars, which use sensors, radar and cameras, and harness computing power to figure out routes and avoid collisions. This is an example of cloud robotics, given that a significant part of the computation and data processing is done by the teams back at Google’s server farms.

What may come as a surprise, though, is that robots are very awkward when it comes to the simple act of walking, which infants can do with ease.

A case in point is the Defense Advanced Research Projects Agency’s (DARPA) global Robotics Challenge, a competition to design robots that can perform dangerous rescue work after nuclear accidents, earthquakes and tsunamis. In the June 2015 finals, just three of the 23 teams managed to complete all eight tasks set for the robots, which included driving and exiting a vehicle, opening and going through a door, locating and opening a valve, using a tool to cut a hole in a wall, removing an electrical plug from a socket and putting it in a different socket, traversing rubble and climbing stairs.

In the case of human-like (better known as android) social or assistive robots, even an advanced robot such as Honda’s Asimo took years to learn to walk without tripping, something infants master rapidly thanks to the highly evolved and complex human brain.

Moreover, most robots still can’t manage the simple act of grasping a pencil and spinning it around to get a solid grip. Tasks such as these, which require dexterous in-hand manipulation—rolling, pivoting, bending, sensing friction and other things that humans do effortlessly—have proved notoriously difficult for robots.

Now, a University of Washington (UW) team of computer science and engineering researchers has built a robot hand that can not only perform dexterous manipulation but also learn from its own experience without needing humans to direct it. Their latest results are detailed in a paper to be presented at the IEEE International Conference on Robotics and Automation on 17 May.

“Hand manipulation is one of the hardest problems that roboticists have to solve," said lead author Vikash Kumar, a UW doctoral student in computer science and engineering, in a 9 May press statement. “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper."

The UW research team spent years custom-building one of the most highly capable five-fingered robot hands in the world, and used machine learning algorithms both to model the basic physics involved and to plan which actions the robotic hand should take to achieve the desired result.

Building a dexterous, five-fingered robot hand poses challenges in both design and control. The first was building a mechanical hand with enough speed, strength, responsiveness and flexibility to mimic the basic behaviours of a human hand.

The UW’s dexterous robot hand—which the team built at a cost of roughly $300,000—uses a Shadow Hand skeleton actuated with a custom pneumatic system and can move faster than a human hand. It is too expensive for routine commercial or industrial use, but it allows the researchers to push core technologies and test innovative control strategies.

The team first developed algorithms that allowed a computer to model highly complex five-fingered behaviours and plan movements to achieve different outcomes, such as typing on a keyboard or dropping and catching a stick. The research team then transferred the models to the actual five-fingered hand hardware, which never behaves exactly like the simulated scenario.

So far, the team has demonstrated local learning with the hardware system, which means the hand can continue to improve at a discrete task that involves manipulating the same object in roughly the same way. Next steps include beginning to demonstrate global learning—which means the hand could figure out how to manipulate an unfamiliar object or a new scenario it hasn’t encountered before.
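The idea of "local learning"—repeating one task and keeping only the changes that help—can be illustrated in miniature. The sketch below is a hypothetical, simplified stand-in, not the UW team's actual algorithm or hardware: a one-dimensional toy "object" is rotated toward a target angle by a sequence of torques, and the controller improves purely from its own trial-and-error experience.

```python
import numpy as np

def rollout(torques, dt=0.1):
    """Toy simulator: integrate an object's angle under applied torques.
    (A hypothetical stand-in for one manipulation trial on real hardware.)"""
    angle, velocity = 0.0, 0.0
    for u in torques:
        velocity += u * dt
        angle += velocity * dt
    return angle

def local_learning(target, horizon=20, iters=200, seed=0):
    """Hill-climbing on a fixed task: perturb the torque sequence and keep
    perturbations that reduce the error. 'Local' in the sense that only
    this one task, with this one object, gets better."""
    rng = np.random.default_rng(seed)
    torques = np.zeros(horizon)
    best_err = abs(rollout(torques) - target)
    for _ in range(iters):
        candidate = torques + 0.05 * rng.standard_normal(horizon)
        err = abs(rollout(candidate) - target)
        if err < best_err:  # keep only changes that improve this task
            torques, best_err = candidate, err
    return torques, best_err

torques, err = local_learning(target=1.0)
print(f"final error after practice: {err:.4f}")
```

The loop starts a full angle unit away from the target and, after 200 practice trials, ends up far closer—without any human telling it which torques to use. Global learning, by contrast, would require the controller to cope with an object or scenario it has never practised on.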
