Google Play Store’s AI guardians are flagging more problematic apps than ever before
This could signal a welcome, albeit much delayed, change in the way the Google Play Store is monitored. With more than 2.8 million apps, Google’s Play Store for downloading apps on Android devices is the largest app store in the world. In an official post, the company revealed that in 2017 it took down more than 700,000 apps that violated Google Play policies, 70% more than it removed in 2016. Andrew Ahn, product manager, Google Play, wrote, “Not only did we remove more bad apps, we were able to identify and action against them earlier. In fact, 99% of apps with abusive contents were identified and rejected before anyone could install them.”
As it turns out, this was possible because of the use of artificial intelligence and machine learning techniques that can detect malware, security issues, inappropriate content and even apps that can be classified as “copycats” of more popular apps. Copycat apps impersonate popular apps, hijack traffic and potential revenue from the originals, and can also carry malware designed to steal data from your device. Google says that in 2017 it took down more than a quarter of a million impersonating apps.
The artificial intelligence algorithms first sift through app submissions and flag potential risks and problems, which are then taken up by human reviewers. Google says that since the launch of the Google Play Protect feature on Android devices, installations of potentially harmful applications have fallen by 50%. Play Protect is a built-in malware and harmful-app detection service that works alongside the Play Store and regularly scans your device for harmful apps. “Despite the new and enhanced detection capabilities that led to a record-high takedown of bad apps and malicious developers, we know a few still manage to evade and trick our layers of defense,” says Ahn.
In spite of such measures, some malicious apps still manage to evade the gamut of security checks that Google has in place. In August last year, Google had to remove 300 apps from the Play Store that hid the WireX botnet malware and used the devices they were installed on to launch wide-scale distributed denial-of-service (DDoS) attacks on other web services, causing loss of revenue and data.
Earlier this month, Google removed 60 games from the Play Store that displayed pornographic advertisements, something strictly against Google’s policies for publishing apps on the Play Store. The gravity of the situation can perhaps be better understood from the fact that many of these games had already clocked over 1 million downloads by then, including titles such as Five Nights Survival Craft, San Andreas City Craft, Exploration Lite: Wintercraft, Draw X-Men and Pixel Survival – Zombie Apocalypse.
One of the criticisms levelled against Google over the Play Store issues thus far was that there was too much automation in monitoring and weeding out bad and dangerous apps. At the Google I/O conference last year, the company claimed that 20,000 dedicated processors reviewed 500,000 apps a day for potential malware. The use of machine learning and improvements in the AI algorithms over time mean that this capacity has only increased since then. When Google says that automation is “aiding the human reviewers in effectively detecting and enforcing on the problematic apps”, we believe this change of tack could perhaps be the most crucial bit.