The security professional’s best friend: Artificial Intelligence

Date: Tue, 02/05/2019 - 19:10

There used to be a simple formula for a security debate: hit them with a round-up of the year’s worst horror stories – the latest hacks, viruses and how much they cost business – then introduce the latest, most sophisticated technology solutions, designed to put all that in the past.


[Image: Roark Pollock, Ziften’s Marketing SVP. Image credited to NetEvents]

The recent NetEvents EMEA Press Spotlight Round Table discussion – The Security Professional’s Best Friend: Artificial Intelligence – added greater intelligence to the mix: a combination of Artificial Intelligence (AI) and human intelligence, the latter in the form of greater realism and more recognition of the limits of what is possible.
Ovum Principal Analyst, Rik Turner, discussed the challenges, the changes and the tech responses. In the 1990s, he explained, everyone was talking about prevention: “preventing the bad guys getting in, preventing malware from penetrating their networks. Their infrastructure could be safe. They could prevent all of those bad things from happening.”
Instead, over the last two decades, we have moved towards a new stance. The vast majority of vendors and practitioners now admit that the best we can do at the moment is to detect and mitigate: “detect once someone’s in, move to mitigate as quickly as possible, potentially do some damage limitation, do some quarantining so they can’t run amok within your infrastructure, and then subsequently to remediate, clean them up, get them out, and start again. Until the next breach”. That, he suggested, is really a defeat for the cyber-security industry. “It reminds me a little bit of the people defending the city of Constantinople when it was still capital of the Byzantine Empire… gradually the siege made it through the first outer walls, and drove them into the inner walls, until eventually they breached the whole thing. Notice that we use the term breach. We’ve adopted it from the world of siege warfare.”
What else has changed? The amount of malware being successfully stopped by anti-virus signatures continues to fall. In 2014 Symantec, in The Wall Street Journal, was talking about 45% success: “I now think it’s between 20 and 30%, not much more, across the industry”. Then of course there is the rise of criminal gangs, hacktivists and state-sponsored malware actors with unlimited resources to play with – not to mention the availability of off-the-shelf hacking kits on the Dark Web. What’s more: “The Cloud: that makes it so much easier to go out, rent a few processors from Amazon, test-drive your new exploit before you’ve even launched it, and make sure it works.”
Finally, it is not just the volume of revealed vulnerabilities, it is the sheer velocity of their exploitation: “People in security always talk about the needle in the haystack. It’s a horrible cliché, but it’s true. [Actually, later in the discussion someone amended this to “it’s more like finding a needle in a needlestack”]. In this vulnerability space there’s this vast number of vulnerabilities [over 15,000 last year] being published but, by the same token, how do you know which ones are actually going to be exploited? …Why waste your time worrying about all the others when there are only 1.9% being exploited?... Also the speed at which the ones going to be exploited are exploited is ever-greater. So you’ve got less time to decide which ones you need to focus on.”
Turning from prevent to detect and mitigate, he outlined current approaches. First, sandboxing: “You rolled a big box in, put it on your network. Anything that looked vaguely dodgy that you could not actually guarantee was malware, you could put it in there, carry out a controlled explosion, and check whether or not it actually was malicious. It was very fashionable for a while, they sold a hell of a lot. Then, the malware guys started writing malware that knew it was in a sandbox, and played dead, effectively, or played good, however you want to put it, so that it was released out into the wilds…”
‘Knowing it is in a sandbox’ sounds like a great example of artificial intelligence – but in the wrong hands. His example of AI learning in the right hands is User and Entity Behaviour Analytics (UEBA), a system that looks at everything done on the network by each user or system and learns what is “normal” behaviour. It can then flag a warning about any out-of-the-ordinary behaviour: “It might be your email server suddenly starts to download the entire payroll database. That’s an entity acting a bit dodgy”.
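The UEBA idea can be sketched in a few lines: build a per-entity baseline from past activity, then flag anything that deviates sharply from it. The feature (daily megabytes transferred), the entity names and the z-score threshold below are all illustrative assumptions, not details from the discussion.

```python
import statistics

def flag_anomalies(history, today, z_threshold=3.0):
    """Flag entities whose activity today deviates sharply from their own baseline.

    history: dict mapping entity -> list of past daily MB transferred
    today:   dict mapping entity -> today's MB transferred
    """
    alerts = []
    for entity, past in history.items():
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0  # avoid division by zero
        z = (today.get(entity, 0) - mean) / stdev
        if z > z_threshold:
            alerts.append((entity, round(z, 1)))
    return alerts

history = {
    "mail-server": [120, 130, 110, 125, 118],  # normal daily traffic
    "web-server":  [300, 310, 295, 305, 290],
}
# The mail server suddenly pulls the entire payroll database:
today = {"mail-server": 5000, "web-server": 300}
print(flag_anomalies(history, today))  # only the mail server is flagged
```

Real UEBA products model many features per entity and use far richer statistics, but the shape is the same: learn “normal”, alert on deviation.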
Another information-based response is called “threat intelligence”. It starts with a ton of data and then intelligently narrows it down. He gave the example of a bank branch wanting details of every bank robber alive today, then filtering out those that are currently in prison, and then reducing the field further by singling out those living conveniently close to the branch.
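The bank-branch analogy amounts to running successive filters over a broad feed until only the relevant records remain. A minimal sketch, in which the record fields and cut-off distance are invented for illustration:

```python
# A broad "feed" of known robbers, narrowed down in two passes,
# mirroring the bank-branch analogy.
robbers = [
    {"name": "A", "in_prison": False, "distance_km": 5},
    {"name": "B", "in_prison": True,  "distance_km": 2},
    {"name": "C", "in_prison": False, "distance_km": 400},
    {"name": "D", "in_prison": False, "distance_km": 12},
]

active = [r for r in robbers if not r["in_prison"]]      # filter 1: still at large
nearby = [r for r in active if r["distance_km"] <= 50]   # filter 2: close to the branch

print([r["name"] for r in nearby])  # the short list worth worrying about
```

Threat-intelligence platforms do the same thing with indicators of compromise: start wide, then keep only what is active and relevant to your environment.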
Both these information-based approaches are best served by AI machine learning that sorts through loads of data looking for recognisable patterns – just the sort of data-crunching that becomes incredibly tedious for an intelligent human. Add to that the pressures already mentioned – volume and velocity of vulnerabilities – and this is the sort of process ideally suited to automated AI.
AI’s immediate value is that it narrows down the search within a mine of data: like a magnifying glass focused on the most likely area for finding that needle in the needlestack. But can it extrapolate that analysis into usable predictions? “The promise of artificial intelligence down the road is that it might actually get us to some data science where we can start making some realistic predictions about what is the most likely attack on you, what is the most likely vulnerability to be used against you, and those kinds of things. Now, I’m not suggesting for a moment that that’s where we are today, but that’s what people are talking about, and what they are suggesting is possible”.
Possible, but is it imminent, or even likely? Jan Guldentops – BA Test Labs’ Director, with some twenty years’ hands-on security experience on “both sides of the wall” – said: “Artificial Intelligence is the next bullshit term… It’s IoT, it’s cloud, it’s something that broad that we understand it, but we don’t really know what it’s about? We’ve been doing machine learning in the security industry for 15 years. Your anti-spam is based on machine learning… The second thing we have to remember is, it’s a tool. It is not magic… we’re 20 years, 30 years away from real artificial intelligence.”
Roark Pollock, Ziften’s Marketing SVP, agreed that AI techniques have long played a major part in cybersecurity, but the difference is that the heavy number crunching – the level that once required supercomputers to identify attack signatures – can now take place at the edge. “I can now run artificial intelligence models or machine learning models on those end-points without bringing that device to its knees. I can run machine learning as a security tool on your end-point, your Apple, your servers, your cloud virtual machines, and it only takes up less than 1% of the device. It doesn’t kill the device from a processing standpoint.”
He pointed out that there was no question that signature-based detection worked; it was just that it was a slow process to identify and broadcast these signatures, and meanwhile other protection was needed to cover that gap. Another sensitive area where AI had a role was around the big increase in file-less attacks, which do not hang around in storage waiting to be discovered, but go straight to memory, without any user action.
Secrutiny Founder and CEO, Simon Crumplin, said: “the reality is, not all threats are risks to organisations. What surprises me about our industry is, we spend so much time talking about malware… But actually, to materially breach an organisation, malware’s just the start, and we spend all our time and focus around this – I call it threat propaganda.”
His subsequent argument reminded me of the story of the two blistered, barefoot sages trying to solve the world’s problems: one suggested killing all the cows in the world so that the land surface could be covered with leather, making it more comfortable for walking. The second sage said he’d rather kill just one cow to make leather sandals for their feet. Crumplin suggested that, for all the talk about better technology: “to determine their risk and their risk appetite with the business is the primary thing organisations have got to do and understand. They can then make investments that are meaningful to mitigate just those risks.”
A more extreme version of this novel approach came later from the floor: “What we’ve seen over the last three years is a number of research studies highlighting the fact that security breaches, security vulnerabilities, database theft, credential threat, credit card loss, has zero impact on the bottom line of companies, has zero impact on share price.... No one cares about IT security because it has no impact on the business at all. In fact, companies who suffer major breaches, and get substantial amounts of press coverage because of it, actually grow as a result of that business”.
This was clearly an overstatement because, as Guldentops and Pollock pointed out, the loss of trust and reputation is damage that is not so easy to quantify and dismiss. But it did make a very interesting point. It marked the sort of radical re-thinking that is most needed when technology reaches an impasse or crisis point. Is it not time we forgot about leathering the globe and took a closer look at our feet?
Jan Guldentops suggested another important role for technology. Whereas people sometimes cry out for more security specialists, what is really needed is more automation: “We need to automate simple things, like configuration management. Like log analysis. Let’s not call it AI yet.” This was made more urgent by the pressures for compliance, as with GDPR where: “First of all, you need to report a data leak within 72 hours. Of my customers, maybe 15% is capable of doing that.” Later commenting: “Code compliance is a perfect area for machine learning and natural language processing”.
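Guldentops’ point about automating the simple things – log analysis rather than AI – can be sketched as a plain log scan that flags repeated failed logins per source address. The log format, regular expression and threshold below are illustrative assumptions, not a reference to any particular product:

```python
import re
from collections import Counter

LOG_LINES = [
    "2019-02-01T10:00:01 sshd: Failed password for root from 203.0.113.9",
    "2019-02-01T10:00:03 sshd: Failed password for root from 203.0.113.9",
    "2019-02-01T10:00:05 sshd: Failed password for admin from 203.0.113.9",
    "2019-02-01T10:01:00 sshd: Accepted password for alice from 198.51.100.4",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def suspicious_sources(lines, threshold=3):
    """Return source addresses with at least `threshold` failed logins."""
    counts = Counter(m.group(1) for line in lines if (m := FAILED.search(line)))
    return [ip for ip, n in counts.items() if n >= threshold]

print(suspicious_sources(LOG_LINES))  # ['203.0.113.9']
```

Nothing here is machine learning – which is exactly the point: a great deal of the day-to-day workload, including the evidence-gathering behind a 72-hour breach report, is this kind of automatable drudgery.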
Pollock took up the latest scary figures for the number of security alerts, saying that this was partly a consequence of the shift from prevention to detection: “One of the reasons we have so many alerts these days is, we’ve got a lot better at identifying issues after they happen… moving from prevention to detection and response. If we’re finding things that are going on, we’re finding more alerts.”
The audience Q&A that followed began with a reminder that, in the search for new solutions, we should not forget the traditional perimeter solutions that are still doing a good job. Machine learning is being vaunted all over the place, but there are only so many attack paths actually being used: “there are only so many low-hanging fruits, and criminal hackers are into minimax – having maximum result with minimum effort.” So every old perimeter hurdle still plays some part in deterring them.
Otherwise, let’s end with another radical observation from Guldentops that truly reflected what we had heard in this session: “One of the big evolutions between the ‘90s and now is that the security industry has become more modest. In the ‘90s, you could have someone on stage arrogantly saying they had the solution for everything…”
