Tech firms tap AI tools to stem spread of terror content, spam
Google, Twitter, Facebook and YouTube deploy automated systems to keep tabs on content
Users worldwide, including in India, are still smarting from the Cambridge Analytica scandal, in which the data of nearly 87 million Facebook users was harvested and allegedly used to manipulate the US elections.
Critics have also pointed out that Google holds a gigantic data repository that could be misused, since machine intelligence can connect the dots across it.
Meanwhile, technology companies like Facebook, Google and Twitter are using the same technology advancements, specifically artificial intelligence (AI) tools like machine learning and deep learning, to stem the spread of spam, adult and terrorist content.
While Facebook said it took action on 1.9 million pieces of Islamic State (IS) and Al-Qaeda content in the first quarter of 2018, Twitter said on 5 April that it had suspended over 1.2 million accounts for terrorist content since August 2015.
On 23 April, Google-owned YouTube said it had removed over 8 million videos during October-December 2017, of which 6.7 million were first flagged for review by machines rather than humans.
Facebook has a counterterrorism team of 200 people, up from 150 in 2017. According to a 23 April note by Monika Bickert, vice-president of global policy management, and Brian Fishman, global head of counterterrorism policy, "the challenge of terrorism online isn't new (but) has grown increasingly urgent as digital platforms become central to our lives".
About 99% of the IS and Al-Qaeda-related terror content the company removes is content it detects “before anyone in our community has flagged it to us, and in some cases, before it goes live on the site”.
Facebook does this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once the company is aware of a piece of terror content, it also removes “83% of subsequently uploaded copies within one hour of upload”.
Similarly, in its 12th biannual Twitter Transparency Report released on 5 April, the micro-blogging site said that between 1 August 2015 and 31 December 2017, it had suspended a total of 1,210,357 accounts for violations related to the promotion of terrorism.
Further, during the reporting period of 1 July 2017 through 31 December 2017, a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism, of which 93% were flagged by internal, proprietary tools; 74% of those accounts were suspended before their first tweet.
Google, on its part, introduced machine learning flagging in June 2017.
"Now more than half of the videos (on YouTube) we remove for violent extremism have fewer than 10 views," it said on 23 April.
Machines, the blog says, allow YouTube "to flag content for review at scale, helping us remove millions of violative videos before they are ever viewed". Google says it remains committed to bringing the total number of people working to address violative content to 10,000 across the company by the end of 2018.
Both Facebook and Google partner with academics, government partners, and non-government organizations (NGOs) to gather intelligence in the form of photos, videos, text, etc.
Yet, most of today’s AI-powered tools and technologies aren’t smart enough to know if someone is exploiting them for activities that are illegal or unethical, according to a 16 April note by Forrester Research.
"If an inference engine that was focused on stopping malware was corrupted to see every internal connection to a specific website as malicious, that same system could be flipped to see all connections to any website as having always been malicious," caution Forrester analysts Chase Cunningham and Joseph Blankenship.
They add that such an exploit could effectively cripple a network, and there would be no hope of stopping the override, as the AI system would process the decision and inferences faster than the repair systems could fix it.
“As you seek to protect AI technologies from cybercriminals, give equal attention to the confidentiality and integrity of the data,” the researchers urge.
On their part, technology companies acknowledge that while the use of AI against terrorism is gathering momentum, they also need to co-opt trained human experts and collaborate with each other to share insights and decipher patterns.
Towards this end, Facebook, Microsoft, Twitter, and YouTube launched the Global Internet Forum to Counter Terrorism (GIFCT) in December 2016. The forum has brought together more than 50 technology companies over the course of three international working sessions.
GIFCT has a shared industry database of “hashes” (unique digital fingerprints of terrorist media).
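Such a database can be thought of as a shared lookup set: one platform computes a digest of a known piece of terrorist media and contributes it, and other platforms check new uploads against the pooled set before the content goes live. A minimal sketch in Python, with the caveat that the class and method names here are illustrative, and that real systems rely on perceptual hashes (which tolerate re-encoding and small edits) rather than the exact-match cryptographic hash used below:

```python
import hashlib


class SharedHashDatabase:
    """Illustrative stand-in for an industry-shared 'hash' database."""

    def __init__(self):
        self._known_hashes = set()

    @staticmethod
    def fingerprint(media_bytes: bytes) -> str:
        # Exact-match digest for illustration only; production systems
        # use perceptual hashes that survive crops and re-compression.
        return hashlib.sha256(media_bytes).hexdigest()

    def share(self, media_bytes: bytes) -> None:
        # One member platform contributes a fingerprint of known terrorist media.
        self._known_hashes.add(self.fingerprint(media_bytes))

    def is_known(self, media_bytes: bytes) -> bool:
        # Another platform screens an upload against the shared set.
        return self.fingerprint(media_bytes) in self._known_hashes


db = SharedHashDatabase()
db.share(b"<bytes of a previously identified propaganda video>")
print(db.is_known(b"<bytes of a previously identified propaganda video>"))  # True
print(db.is_known(b"<bytes of an unrelated home video>"))                   # False
```

Because only fingerprints are exchanged, member companies can cooperate on detection without sharing the underlying media or user data.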