Last week I spent time with a major technology manufacturer discussing the challenges of security in large enterprise environments. Besides the usual problems of platform uniformity and tool standardization, I was taken aback by the general attitude of the security spender-in-chief, which I would summarize as “We’ve built the greatest wall of all.”
It is an attitude I commonly see in folks who are responsible for the security posture of an enterprise but have never worked boots-on-the-ground in a SOC. Analysts who have chased the often all-too-vague hints of trouble, searched the network for clues, and struggled to identify what exactly is or was about to happen understand that the wall is nothing but a feature in the landscape: a man-made mountain that is hard to pass, but with ways over it, around it, or even through it. The bad guys always get in, and sooner or later the time comes to do the cleanup.
Passive security defense is not enough
If the security spender-in-chief lacks this basic insight, the security of the organization is essentially non-existent, because spending will be extremely lopsided toward semi-static defenses such as firewalls, HTTP filters, IPS, and VPN terminators. The firewalls and VPNs tightly control who and what connections are allowed in or out, while the IPS and protocol filters decide which of those connections are deemed naughty and need to be terminated. Those are all very necessary security implements, but they are not nearly sufficient; they are only part of the equation. When an application gets whacked, creds are brute-forced, or an employee goes rogue, the walls generally mean very little. At that point, the intruder has legitimate access to the network, at least as far as the static defenses are concerned.
At this point, the IPS and next-gen firewalls will force the intruder to be careful about where to use application exploits (don't cross site boundaries) and to avoid downloading malware from known locations, though the cloud has conveniently solved that problem for the bad guys. So when the SOC focuses primarily on building a strong wall, little time and money are left over to watch the soft, chewy center. Remember, hackers work just as hard at understanding how modern-day defenses work as the good guys do at deploying them. It is indeed an arms race in the truest sense of the term.
Visibility tools let us actively hunt attackers
Defending the network, then, is not a spectator sport. We must actively engage the other side when they breach our defenses, which they eventually will. The static defenses discussed above are the barriers we have placed in our environment, and these barriers force the intruder out into the open, or into moves that would otherwise be unnecessary. That is when we stand the best chance of detecting what is going on and hunting them down. How do we do that? By observing our soft, chewy center.
Observation means leveraging the network team's visibility solutions, or spending some time and money to get your own. Regular readers will recognize I generally boil visibility down to a simple trifecta:

- Logs from the systems and applications in the environment
- Network telemetry, both packets and flow records
- Endpoint telemetry from the hosts themselves
Now, these are just tools of visibility, and all of them watch more or less the same stuff from different angles. Most traditional security spenders will probably deploy a log analysis tool, but miss the important insight that the same object may look very different from different angles. No further telemetry is collected, which can complicate things at clean-up time. For example, an intruder who has gained legitimate credentials to a host may use the system to mine Bitcoin, sending the CPU through the roof and leaving lots of port 8333 evidence on the wire, yet the logs will remain quiet. I cannot stress enough the importance of getting a good collection of vantage points on the "same stuff".
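To make the point concrete, here is a minimal sketch of cross-checking two vantage points. The record layouts, addresses, and the `quiet_suspects` helper are all illustrative assumptions, not any particular vendor's schema; the idea is simply that a host noisy on the wire but silent in the logs deserves a look.

```python
# Hypothetical sketch: cross-check two vantage points on the same hosts.
# The flow records and log-producing host set below are illustrative
# stand-ins for whatever your flow collector and SIEM actually export.

BITCOIN_P2P_PORT = 8333  # Bitcoin's peer-to-peer port, as in the example above

flows = [
    {"src": "10.0.4.17", "dst_port": 8333, "bytes": 5_200_000},
    {"src": "10.0.4.22", "dst_port": 443, "bytes": 120_000},
]
log_hosts = {"10.0.4.22"}  # hosts that produced any security log entries today


def quiet_suspects(flows, log_hosts, port=BITCOIN_P2P_PORT):
    """Hosts with suspicious wire evidence but no log footprint at all."""
    noisy_on_wire = {f["src"] for f in flows if f["dst_port"] == port}
    return sorted(noisy_on_wire - log_hosts)


print(quiet_suspects(flows, log_hosts))  # ['10.0.4.17']
```

A log-only view would never surface 10.0.4.17; the network vantage point is what makes it stand out.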
When choosing visibility tools, you must ask yourself this question: does it record the collected telemetry in a reasonably original format? I call this “full fidelity”. Why? Because you don’t know today what you’ll need to be searching for tomorrow. Keep all the data.
We all expect the logs that stream to our SIEM to be stored line by line for later searching, recall, and correlation by a human analyst. Not so for many packet, flow, and endpoint tools. All too often the data is summarized, bucketed, plotted, top-10'ed, or otherwise mangled beyond repair, and, more importantly, beyond the usefulness of a human hunter searching for evidence of what really happened in the network. How deep did they get? How much data did they move? Did they get close to any important data? Where did it go? How long has this been going on? What else in my network is affected? Try answering those from a pie chart.
In conclusion, note that I have not spoken about any fancy correlation, behavior detection, machine learning, user profiling, or other analytical techniques (all of which can help greatly, by the way!). For now, I'm simply asking that you record the telemetry from your environment, broadly and comprehensively, and keep it for as long as you can. Invest not just in defenses, but equally in visibility. You will thank me later.