Bruce Schneier is an internationally renowned security technologist. The author of 13 books, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, he also writes the newsletter Crypto-Gram and the blog Schneier on Security, which are read by over 250,000 people. He is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a lecturer in public policy at the Harvard Kennedy School; and a board member of the Electronic Frontier Foundation, Access Now and the Tor Project. He is also a special adviser to IBM Security and the chief technology officer at IBM Resilient.

In an interview, Schneier speaks about some of the biggest online security threats that individuals, companies and governments will face in 2018; how these threats have ballooned because of the IoT (Internet of Things); learnings from the Cambridge Analytica-Facebook data compromise issue; Surveillance Capitalism; and his thoughts on artificial intelligence (AI) and cyberwar among other things. Edited excerpts:

What, according to you, are some of the biggest online security threats that individuals, companies and governments will face in 2018?

IoT gives computers the ability to affect the world in a direct physical manner. Previously, the threats were largely confined to data—both theft and misuse. Today, the threats are more about life and property. There’s a big difference between your spreadsheet crashing and you losing your data, and your car crashing and you losing your life. It could be the same computer chip, the same operating system, the same applications software, and the same attack code. But because the computer is being used differently, the results are wildly different.

Have these threats ballooned because of IoT?

Yes. It’s not just that IoT devices can affect the physical world. These devices are often cheaply designed and sold, which means that their security is poorly designed. And the sheer number of them is another risk. It’s a perfect storm: computers are becoming more numerous and capable just as their security is failing.

What are the learnings from the Cambridge Analytica-Facebook data compromise issue?

In my view, that incident demonstrates that the legal uses of our data are far more dangerous than the illegal uses. It wasn’t a data compromise by any reasonable definition of the term. No one hacked into Facebook’s servers. No technical vulnerabilities were exploited. No data was stolen. The data was collected by a Facebook partner, and then given to Cambridge Analytica in breach of a contract. Facebook’s surveillance business model is to blame here, and if we have any hope of fixing this, we need to change that business model.

How vulnerable are elections to manipulation in this digital age? What can be done to address the issue from a cybersecurity point of view?

You ask a complicated question, and the short answer is “very”. I believe elections are vulnerable at all phases—voter registration and voter rolls, voting itself, and vote tabulation. All of these processes are computerized and online, which puts them all at risk. The solutions are straightforward—more security, and paper backups wherever possible.

Will AI (especially machine learning and deep learning) help in tackling cybercrime, given that hackers too have access to these smart toolkits?

Of course, AI will help in tackling cybercrime. And, of course, AI will help cybercriminals too. The real question is this: which side does the technology help more? We don’t know. AI is a non-linear, tightly coupled and very surprising technology. This makes its future very hard to predict.

Do you believe that nation states are heading towards a cyberwar, given that AI-powered weapons are the order of the day with Google supporting Pentagon’s Project Maven being a case in point?

In my opinion, Project Maven is about the worst possible example of a project that is leading us to cyberwar. That programme is an attempt to use machine learning to classify intelligence photos and videos. But leaving that aside, I am worried about the potential for cyberwar. I am worried that nations are conducting offensive actions in cyberspace against each other and penetrating each other’s critical infrastructure in an attempt to “prepare the battlefield” for future hostilities. I am much less worried about AI-powered weapons, since I think human-powered internet weapons are no less dangerous.

Do you believe governments, other than Europe with its General Data Protection Regulation, are doing enough to address the issue of cybersecurity and privacy with stringent cyber-regulations?

No, and I believe they’re not even trying. Most governments still believe that the technology sector can remain unregulated, because they still believe that insecurities don’t matter. They do matter now, and governments need to wise up and face that fact.

You believe that Surveillance Capitalism, a term popularized by Harvard Business School professor Shoshana Zuboff, is gathering momentum. If so, what are the implications?

Surveillance Capitalism, in my opinion, is the fundamental business model of the internet. It gathered momentum in the late 1990s and early 2000s, and is firmly entrenched today. (The term Surveillance Capitalism was first introduced by John Bellamy Foster and Robert McChesney in the magazine Monthly Review in 2014 and later popularized by academic Shoshana Zuboff. It denotes a type of capitalism that monetizes data acquired through surveillance.) We are all under constant, ubiquitous surveillance by the computers we use and the technology companies we interact with. This includes our smartphones, our personal computers and all the embedded computers we interact with as we go about our day. Detailed information about us is being constantly collected by companies we often don’t know, and is being used to psychologically manipulate us against our interests. The polite name for that is “marketing”, but it’s now so personalized that it’s gone beyond the traditional definition of that term.