Bengaluru: As India's second mission to the moon, Chandrayaan-2, is poised to make a soft landing near the moon's south pole in the early hours of Saturday, the artificial intelligence (AI)-powered rover of the Indian Space Research Organization (ISRO) will play a significant role in the mission's success.
Christened 'Pragyan' (Sanskrit for 'wisdom'), the homegrown solar-powered robotic vehicle will manoeuvre across the lunar surface on six wheels. It carries a Laser Induced Breakdown Spectroscope (LIBS) from the Laboratory for Electro Optic Systems (LEOS) in Bengaluru to identify elements present near the landing site, and an Alpha Particle X-ray Spectrometer (APXS) from the Physical Research Laboratory (PRL) in Ahmedabad to determine the composition of those elements.
The AI-powered Pragyan rover, which can communicate only with the lander, includes motion-planning technology developed by IIT-Kanpur researchers to help the rover manoeuvre on the lunar surface. Its algorithms will help the rover trace water and other minerals on the moon, and also send back pictures for research and examination.
The homegrown Pragyan is just one example of how AI has been gathering pace in space exploration over the years.
For instance, Earth Observing-1 (EO-1)--a National Aeronautics and Space Administration (NASA) Earth observation satellite decommissioned in March 2017--was built 19 years ago to develop and validate a number of breakthrough instrument and spacecraft bus technologies. Among other tasks, EO-1 was also used to test new software such as the Autonomous Sciencecraft Experiment, which allowed the spacecraft to decide for itself how best to create a desired image.
Similarly, the Institute for Astronomy at the University of Hawaii, along with the JPL Artificial Intelligence Group at the California Institute of Technology (Caltech), has developed a software system called the Sky Image Cataloging and Analysis Tool (SKICAT). It incorporates the latest AI technology, including machine learning and machine-assisted discovery, to automatically catalog and measure sources detected in sky-survey images--classifying them as stars or galaxies and assisting astronomers in performing scientific analyses of the resulting object catalogs.
Other AI systems have helped astronomers identify numerous possible gravitational lenses, which play a crucial role in research into dark matter.
AI’s ability to sift through humongous amounts of data and find correlations helps in intelligently analysing that data. ESA’s ENVISAT produced around 400 terabytes of data every year. Meanwhile, astronomers estimate that the Square Kilometre Array (SKA)--an international effort to build the world’s largest radio telescope, located across South Africa’s Karoo region and Western Australia’s Murchison Shire--will generate 35,000 DVDs' worth of data every second, roughly equivalent to the data the entire internet produces daily. How would such mountains of data be analysed were it not for AI?
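A quick back-of-the-envelope calculation puts that figure in perspective. The sketch below assumes a standard single-layer DVD capacity of 4.7 GB (an assumption, not a figure from this article):

```python
# Rough check of the SKA data-rate claim quoted above.
# Assumption: one single-layer DVD holds 4.7 GB (4.7e9 bytes).
DVD_BYTES = 4.7e9          # bytes per DVD (assumed capacity)
DVDS_PER_SECOND = 35_000   # figure quoted in the article

bytes_per_second = DVDS_PER_SECOND * DVD_BYTES
terabytes_per_second = bytes_per_second / 1e12
print(f"{terabytes_per_second:.1f} TB/s")  # prints "164.5 TB/s"
```

That is roughly 164 terabytes every second--more than 400 times ENVISAT's entire annual output, arriving each second.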
AI is also being used for trajectory and payload optimization. An AI system known as AEGIS is already on the Red Planet onboard NASA’s current rovers, handling autonomous targeting of cameras and choosing what to investigate. The next generation of AI systems, however, will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.
NASA is planning to launch the James Webb Space Telescope in 2020 into an orbit around 1.5 million kilometres from Earth. Part of the mission will involve AI-powered autonomous systems overseeing the full deployment of the telescope’s 705-kilogram mirror.
Meanwhile, AI is taking giant steps in space with AI-powered astronaut assistants too.
The Crew Interactive Mobile CompaniON (CIMON), the first AI-based astronaut assistance system, returned to Earth on 27 August after spending 14 months on the International Space Station (ISS). CIMON was developed by Airbus, in partnership with IBM, for the German Aerospace Center (DLR). It is a floating computer that members of the Airbus team described as a 'flying brain'.
CIMON can display and explain information and instructions for scientific experiments and repairs on voice command, helping astronauts keep both hands free. It can also be used as a mobile camera to save astronaut crew time. Its 'eyes' are a stereo camera used for orientation, a high-resolution camera for facial recognition, and two additional lateral cameras for imaging and video documentation. Ultrasonic sensors measure distances to detect potential collisions. CIMON's 'ears' comprise eight microphones used to detect the direction of sound sources, plus an additional directional microphone for good voice recognition. Its 'mouth' is a loudspeaker that can be used to speak or play music. Twelve internal rotors allow CIMON to move and revolve freely in all directions, meaning it can turn towards an astronaut when addressed. It can also nod or shake its head and follow the astronaut either autonomously or on command.
"The astronaut has control over CIMON at all times, which was especially important for us," said LMU researcher Judith Buchheim in a press statement on 28 August. A successor model of the technology experiment with extended functionality is currently being built and tested by Airbus on behalf of the DLR Space Administration. The 'second' CIMON also uses IBM 'Watson' AI technology.