Last week I spent time with a major technology manufacturer discussing the challenges of security in large enterprise environments. Besides the usual problems of platform uniformity and tool standardization, I was taken aback by the general attitude of the security spender-in-chief, which I would summarize as “We’ve built the greatest wall of all.”
It is an attitude I commonly see in folks who are responsible for the security posture of an enterprise but have never worked boots-on-the-ground in a SOC. The analysts who have chased the often all-too-vague hints of trouble, searched the network for clues, and struggled to identify what exactly is or was about to happen understand that the wall is nothing but a feature in the landscape: a manmade mountain that is hard to pass, but with ways over it, around it, or even through it. The bad guys always get in, and sooner or later the time comes to do the cleanup.
Passive security defense is not enough
If the security spender-in-chief lacks this basic insight, the security of the organization is essentially non-existent, as spending will be extremely lopsided toward semi-static defenses such as firewalls, HTTP filters, IPS, and VPN terminators. The firewalls and VPNs tightly control who and what connections are allowed in or out, while the IPS and protocol filters decide which of those connections are deemed naughty and need to be terminated. Those are all very necessary security implements, but they are not nearly sufficient; they are only part of the equation. Because when an application gets whacked, creds are brute-forced, or an employee goes rogue, the walls generally mean very little. At that point, the intruder has legitimate access to the network, at least as far as the static defenses are concerned.
At this point, the IPS and next-gen firewalls will merely force the intruder to be careful where to use application exploits – don’t cross site boundaries – and to avoid downloading malware from known locations – though the cloud has conveniently solved that problem for the bad guys. So, when the SOC primarily focuses on building a strong wall, little time and money are left over to watch the soft chewy center. Remember, hackers work just as hard on understanding how modern-day defenses work as the good guys do on deploying them. It is indeed an arms race in the truest sense of the word.
Visibility tools let us actively hunt attackers
Defending the network, then, is not a spectator sport. We must engage the other side actively when they breach our defenses, which they eventually do. The static defenses discussed above are the barriers we have placed in our environment. And these barriers force the intruder out in the open, or to make moves otherwise unnecessary. That is when we stand the best chance to detect what is going on and hunt them down. How do we do that? By observing our soft chewy center.
Observation means leveraging the network team’s visibility solutions or spending some time and money to get your own. Regular readers will recognize I generally boil visibility down to a simple trifecta: logs, flow records, and packets.
Now, these are all just tools of visibility, and each watches more or less the same stuff from a different angle. Most traditional security spenders will probably deploy a log analysis tool, but miss the important insight that objects may appear very differently from different angles. No further telemetry is collected, which can complicate things at clean-up time. For example, an intruder who has gained legitimate credentials to a host may use the system to mine Bitcoin, sending CPU usage through the roof and leaving lots of port 8333 evidence on the wire. Yet the logs will remain quiet. I cannot stress enough the importance of getting a good collection of vantage points on the “same stuff”.
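To make the cryptomining example concrete, here is a minimal sketch of the kind of check a flow-based vantage point enables even when logs stay quiet. The record format (field names `src`, `dst`, `sport`, `dport`, `proto`) is hypothetical, not any specific tool's schema; adapt it to whatever your flow collector exports.

```python
# Hedged sketch: flag internal hosts with traffic on the Bitcoin P2P
# port (8333/tcp) using flow records, since host logs show nothing.
from collections import defaultdict

def flag_bitcoin_peers(flows, port=8333, min_flows=10):
    """Return hosts with a suspicious number of flows to/from `port`."""
    counts = defaultdict(int)
    for f in flows:  # each flow: dict with src, dst, sport, dport, proto
        if f["proto"] == "tcp" and port in (f["sport"], f["dport"]):
            # the peer NOT on port 8333 is the (likely internal) miner
            inside = f["src"] if f["dport"] == port else f["dst"]
            counts[inside] += 1
    return {host: n for host, n in counts.items() if n >= min_flows}

flows = [{"src": "10.0.0.5", "dst": "198.51.100.7",
          "sport": 49152 + i, "dport": 8333, "proto": "tcp"}
         for i in range(12)]
print(flag_bitcoin_peers(flows))  # {'10.0.0.5': 12}
```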
When choosing visibility tools, you must ask yourself this question: does it record the collected telemetry in a reasonably original format? I call this “full fidelity”. Why? Because you don’t know today what you’ll need to be searching for tomorrow. Keep all the data.
We all expect all the logs that stream to our SIEM to be stored, line-by-line for later searching, recall, and correlation by the human correlator. But not so much for many packet, flow, and endpoint tools. All too often the data is summarized, bucketed, plotted, top-10’ed, or otherwise horribly mangled beyond repair, and more importantly, beyond the usefulness of a human hunter searching for evidence of what really has happened in the network. How deep did they get? How much data did they move? Did they get close to any important data? Where did it go? How long has this been going on? What else in my network is affected? Try answering those from a pie-chart.
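The questions above illustrate why full-fidelity records matter: they can actually be queried. As a minimal sketch (with an illustrative record schema, not any particular product's), answering "how much data did they move, and where did it go?" is a simple aggregation over raw flow records, and impossible over a pie chart:

```python
# Hedged sketch: answer "how much data left host X, and where did it
# go?" from full-fidelity flow records. Field names are illustrative.
from collections import defaultdict

def bytes_moved(flows, suspect):
    """Total bytes sent from `suspect`, broken out per destination."""
    per_dst = defaultdict(int)
    for f in flows:
        if f["src"] == suspect:
            per_dst[f["dst"]] += f["bytes"]
    return dict(per_dst)

flows = [
    {"src": "10.0.0.8", "dst": "203.0.113.9", "bytes": 700_000_000},
    {"src": "10.0.0.8", "dst": "203.0.113.9", "bytes": 550_000_000},
    {"src": "10.0.0.8", "dst": "10.0.0.20",  "bytes": 4_096},
]
print(bytes_moved(flows, "10.0.0.8"))
# {'203.0.113.9': 1250000000, '10.0.0.20': 4096}
```

Once the data is summarized or top-10'ed, this aggregation can no longer be recomputed for an arbitrary suspect or time window; that is the cost of discarding fidelity.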
In conclusion, note that I have not spoken about any fancy correlation, behavior detection, machine learning, user profiling, or any other analytical technique (all of which can help greatly, by the way!). For now, I’m simply asking that you please record the telemetry from your environment, broadly and comprehensively. And keep it for as long as you can. Invest not just in defenses, but equally invest in visibility. You will thank me later.
As any CISO will know, botnets are not a new phenomenon on the security landscape – far from it. Many of us recall the early botnets Sub7 and Pretty Park (both Trojan/backdoor-style remote control channels), which surfaced more than 15 years ago. Bots are nothing more than automated software that performs tasks. Depending on a bot’s type – its complexity and its use – it can act with varying degrees of sophistication and autonomy. Consider, for instance, that bots are used by search engines to crawl data and for myriad other very productive things. But they can also be used for harm – most commonly in the form of networks of bots (botnets) that enslave computers (called zombies) to do their dirty work. In the age of IoT, we have seen a massive spike in the use of malware to infect internet-connected devices and turn them into botnet zombies.
But the big threat of botnets moving into 2017 and well beyond is their ability to be used in massive DDoS attacks that prey on vulnerable IoT devices – most notably in the Mirai attacks that sparked global attention in late 2016. But what makes botnets and IoT such a potent combination in launching large DDoS strikes? And how did Mirai grow so quickly to be such a potent threat?
On the surface, building a botnet is not necessarily an easy exercise. The internet is not particularly “homogeneous”; on it are lots of different computers and devices with different OS versions, connectivity, and patch levels. Not only are they all different in their makeup, they each have different vulnerabilities.
But the home routers, TV-tuner boxes, and thermostats that we see so commonly make up the Internet of Things. They are typically very homogeneous, are extremely hard to upgrade or patch, and are therefore especially vulnerable to being compromised by botnets. What is more, some shipped with a default password – this has been discussed at length elsewhere, and it is arguably one of the most direct fixes that would help slow down this method of infiltration. Additionally, we have seen that the devices used in the major Mirai attacks were installed on FairPoint, Comcast, and AT&T home networks, so they are easy for hackers to scan for and target.
As the world gains IoT devices at an alarming pace, the threat of large-scale DDoS attacks like Mirai – and other, potentially more damaging botnets – will only increase. In fact, reports from Deloitte and others predict that Mirai attacks will see a pronounced increase in frequency and size in 2017 and beyond – a trend most likely fed by the public release of the Mirai source code, first reported by Brian Krebs late last year. With new strains of botnets able to enslave a million or more connected devices, Mirai will be a fixture on the CISO’s radar for years to come.
There is no way to prevent DDoS attacks – and certainly no way to prevent a Mirai attack pushing the limits of 1 Tbps or more – but there are very real measures that organizations can take to plan for disruptions and mitigate these threats. It is not just IoT devices that are vulnerable to being enslaved into a zombie botnet; laptops, desktops, servers, and even printers are common targets. Organizations, and even ISPs, have a responsibility to keep botnet zombies to a minimum to help prevent the next massive-scale attack from hitting the Internet. And there are tangible steps you can take to hunt zombies.
The demands on enterprise CISOs across all markets have never been greater. Not only are they responsible for managing the data security considerations for organizations in the midst of global digital transformation, but they must be aware, educated, and constantly flexible in the midst of a constantly shifting threat landscape. DDoS attacks, internal threats, and other risks to business continuity are all concerns.
However, recent reports suggest that organizations are woefully underprepared to mitigate DDoS risks. Kaspersky Lab recently noted that 40 percent of businesses are unclear about how to protect themselves against targeted cyber attacks and DDoS – with many of them thinking that an IT partner will shield them from such threats. Further, another 30 percent think data center or infrastructure partners will protect them. While organizations all have different security considerations and resources, the numbers from this report add up to one undeniable conclusion: a large percent of businesses are completely incapable of dealing with an inbound DDoS attack.
Not only is this thinking misguided – it is downright dangerous. Recent sub-10Gbps DDoS attacks have managed to persist for multiple days, or even weeks. Such cyber attacks are capable of completely shutting down a medium-sized office location. CIOs and CISOs are responsible for the IT security considerations of their organizations, and their work drives to the very heart of a company’s long-term success and viability. If you have not consciously thought about your DDoS mitigation strategy, you are making a grave mistake.
The first step in truly becoming prepared for external threats, including DDoS, is to understand the scope and significance of the threats to your organization.
DDoS attacks flood your infrastructure with traffic to prevent a service from working. With the recent prevalence of poorly protected IoT devices, many of which have been drafted into DDoS zombies, attackers can marshal massive numbers of connected devices from around the world at a target, taking it offline for hours or days – or, for the truly unprepared, even weeks. The Krebs DDoS in late 2016 was large and sustained enough that Akamai was forced to stop protecting the website. And while the motivations for attacks are varied, their impact on revenue, brand reputation, and IT credibility is enormous.
DDoS attacks like the one that made headlines last fall get the attention of the C-Suite. But how many CEOs know that experts predict DDoS attacks to grow in size, begin moving upstream, and also start targeting industrial control systems (ICS) and other targets that aren’t sufficiently prepared for DDoS mitigation? Knowing the threats and understanding how to keep the network from being disrupted is half the battle. Advocating for threat detection and prevention tools and technologies is vital.
As security software and systems are becoming increasingly better able to defend themselves, attackers resort to the crude and unsophisticated DDoS attack vector as an alternative. Although not as crafty as a true compromise, a DDoS attack can be equally destructive. Seriously disrupting communications to ICS systems controlling the power grid, industrial plants, and even DOT traffic control systems can have deadly consequences.
Waiting for your organization to be targeted, or procrastinating on security improvements due to lack of budget or man hours, is a recipe for failure. The following are four steps to getting your infrastructure threat-capable.
1. Know your normal. The first step in detecting abnormal behaviors is to know and understand how your network functions day-to-day.
2. Understand your threat surface. Understand the different types of DDoS attacks, and evaluate how each of them could impact your network.
3. Plan for failure. Know what your incident response looks like in case of a security breach, and develop best practices to mitigate against network failures.
4. And finally: ask for help! If your network is in jeopardy of harm from bad actors, don’t wait to start the first conversation with qualified security specialists.
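The first step, “know your normal,” lends itself to a minimal sketch. Here is one illustrative (and deliberately simplistic) way to baseline hourly traffic volume and flag a new observation that deviates sharply; real deployments use far richer models, and the numbers below are made up for demonstration:

```python
# Hedged sketch of "know your normal": baseline peacetime hourly volume,
# then flag a new observation more than n_sigma deviations from the mean.
import statistics

def is_anomalous(history, observed, n_sigma=3):
    """True if `observed` deviates sharply from the peacetime `history`."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    return sd > 0 and abs(observed - mean) > n_sigma * sd

history = [100, 110, 95, 105, 98, 102, 101]  # hypothetical hourly GB totals
print(is_anomalous(history, 900))  # True: a 9x spike over baseline
print(is_anomalous(history, 104))  # False: within normal variation
```

The point is not the statistics but the habit: without a recorded baseline, there is nothing to compare an attack against.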
Getting your IT infrastructure battle-ready is easier than most CISOs think. If you have a network, it produces NetFlow, which allows for the visibility into your assets that you need to stave off bad actors. The threats to your organization have never been more real – and the risks will only grow for an organization that does not remain prepared and vigilant.
If you don’t want to be caught off guard this Christmas and get a lump of coal, think twice about entrusting your network security to the watchful eyes of the mythical elves. Spending the holidays in the office with a PR nightmare, a badly compromised network, leaked customer data, and a serious career derailment is just not our idea of a jolly good time.
The bad guys (and Grinches) are out there. One way or another some are going to get in. The longer they are in your network, the better the chances they get their hands on the juicy goods. This is not wisdom, it is fact. Across thousands of systems – from servers to cell phones to printers – there are vulnerabilities to exploit, phishing emails get clicked, compromised apps get installed, and even the most trusted of employees may turn rogue. If you are not working under the assumption that the inner sanctum of your network is already in the hands of a hacker, you are making a costly mistake.
And we get it: you have no red flags today, no evidence of compromise. So why worry? But keep in mind that “no evidence of network compromise” is NOT the same as “evidence of no network compromise”. The fact that you don’t know (yet) doesn’t mean it didn’t happen! Never seen Santa fly around in his sleigh? Does that mean he’s not there? Or does it mean you didn’t look hard enough?
At FlowTraq we understand the difference between guessing and knowing. One day a machine comes under the control of a bad guy. No matter how good the tricks to hide the hack on the system, the communication remains visible on the network, because “under control” means the hacker must communicate with the hacked machine. These are often small communications: ICMP sessions with more packets flowing back than coming in; TCP streams on high ports every six hours on the dot; UDP packets, one out, one back, each time. Using full fidelity flow data, even the smallest command & control channel is recorded, no matter how big the network.
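The “every six hours on the dot” pattern is detectable precisely because beacons have unnaturally regular timing. As a minimal illustration (not FlowTraq's actual detection logic), one can test whether the gaps between a host's flow start times are nearly constant:

```python
# Hedged sketch: beacon detection on flow start timestamps. A C2 channel
# phoning home on a fixed schedule shows near-constant inter-arrival
# gaps, so the coefficient of variation of the gaps is tiny.
import statistics

def looks_periodic(timestamps, max_jitter=0.05):
    """True if gaps between flow start times are nearly constant.
    max_jitter caps the coefficient of variation of the gaps."""
    if len(timestamps) < 4:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    return mean > 0 and statistics.pstdev(gaps) / mean <= max_jitter

SIX_HOURS = 6 * 3600
beacon = [i * SIX_HOURS + j for i, j in enumerate([0, 2, -3, 1, 4])]
print(looks_periodic(beacon))                 # True: jitter of seconds
print(looks_periodic([0, 40, 90, 300, 310]))  # False: human-like timing
```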
Sometimes we catch a compromise because the hacker’s IP address is known. Sometimes we catch a new backdoor service sprouting up in your network. Sometimes it is the stream of valuables leaving your network on their way to the hacker. And sometimes a connection just “looked odd”. FlowTraq lets you know, and it lets you investigate what happened before and after.
And when you catch the bad guy: use FlowTraq’s full fidelity forensic history to find out who else on your network has been naughty in just the same way. Sweep up, clean them out. And if you must, make a list and check it twice. That’s Santa’s way. Don’t just trust it. Know it.
There’s no denying that distributed denial of service (DDoS) attacks have become more sophisticated and frequent in recent years. In fact, in a recent report from Verisign*, DDoS activity in Q3 2015 increased to the highest it has been in any quarter over the last two years. Also in Q3 2015, the average attack size increased to 7 Gbps, 27 percent higher than the previous quarter.
These new realities require a new class of solutions that are built for today’s threats and environments. To combat the increasing number of threats caused by DDoS attacks, FlowTraq and Verisign have partnered, allowing joint customers to protect their networks through fast detection and mitigation.
As traffic hits the network, FlowTraq uses advanced algorithms to quickly and effectively distinguish good traffic from harmful attacks. When a DDoS attack is detected, FlowTraq automatically alerts the Verisign DDoS Protection Service, passing on attack intel and giving customers the opportunity to request subsequent mitigation support in the Verisign cloud. Upon receiving an alert, Verisign can redirect internet traffic to their global network of DDoS Protection sites via the cloud, minimizing impact on the customer’s network.
To learn more about the partnership between FlowTraq and Verisign, download the Collaboration Between FlowTraq and Verisign for Hybrid DDoS Protection solution brief.
*Verisign, Distributed Denial of Service Trends Report Issue 2–3rd Quarter 2015
View the most up-to-date edition: Distributed Denial of Service Trends Report, Volume 4, Issue 2 – 2nd Quarter 2017.
Our new FlowTraq release (version Q1 2016) is updated with many improvements to reporting, alerting, and DDoS mitigation options, all features inspired by direct feedback from our existing users. This release supports easy creation of security event management reports, addition of notes to security events, manual approval or removal of mitigation actions, and null-routing/blackhole-routing of IPs under DDoS attack. Read the details below and contact us at email@example.com if you want to see a demo before you download the update.
Report on all or any subset of your alerts. You can now quickly generate a printable report on your alerts, including summary and averages information, as well as pie charts for Alerts by Partition, Traffic Groups, Severity, and Types of Alerts. Alert selection and the time range to include use the existing filtering system, so your reports can be customized to your liking. The reports are printer-friendly, and a direct link to each report can be emailed for convenient sharing. Alerts can also be exported to CSV for custom processing of alert data.
A host of improvements to the alerting system is included as well: you can now add notes to annotate an alert, including @-style addressing and filtering. For example, adding @soc to an alert’s note field provides an @soc filter choice on the Alerts page. You can also filter on Notes > Annotated to see all alerts with notes. Alert detail is also improved, featuring a printer-friendly alert format, and direct links to individual alerts make it easier to email and export the details of each security event.
To improve operator control of plugin/mitigation actions, we added the ability to place actions “on hold” as an initial state. This allows the mitigation action to be queued up, but not taken, until the operator decides the action is approved. This allows for a progressive mitigation scenario where lower-volume DDoS attacks require operator intervention, while higher-volume DDoS attacks are automatically remediated. It is also possible to revoke plugin actions, and restart cancelled plugin actions.
Alert emails now support most types of email servers and authentication methods. More detailed information is sent in every email, including a URL that gives the operator a quick link back to the alert in FlowTraq.
A free BGP null-route plugin is now available that offers a lower-cost alternative for mitigating inbound DDoS attacks. Especially in data center and ISP environments, the best DDoS defense often is simply removing the target IP from the routed space, allowing traffic to other customers to continue without interruption.
We have also added a new policy detection type for inbound packet rates on any port. The typical scenario allows the operator to set lower, generic trigger points, whitelisting only those service ports that are known to be in common use.
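The policy described above boils down to a generic trigger with per-service exceptions. Here is a minimal sketch of that idea; the thresholds, field names, and whitelist entries are illustrative assumptions, not FlowTraq's actual configuration:

```python
# Hedged sketch: inbound packet-rate policy with a low generic trigger
# and higher whitelisted limits for known service ports.
GENERIC_TRIGGER_PPS = 10_000
WHITELIST = {80: 200_000, 443: 200_000, 53: 50_000}  # port -> allowed pps

def check_packet_rate(dport, pps):
    """Return True if the observed inbound rate should raise an alert."""
    limit = WHITELIST.get(dport, GENERIC_TRIGGER_PPS)
    return pps > limit

print(check_packet_rate(3389, 25_000))  # True: generic port over 10K pps
print(check_packet_rate(443, 25_000))   # False: HTTPS allowed up to 200K
```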
Finally, we added performance and usability improvements to many parts of the FlowTraq user interface.
Want to try a demo before you upgrade? Contact us today at firstname.lastname@example.org.
We joined Gigamon on Wednesday for their NYSE closing bell ceremony, which marked the start of their #WeFightSmart initiative. The reasons this is a smart fight are technical in nature. Let me explain why we should all pay attention.
The number of connected devices, as well as the volume of data we consume, continues to grow every year. This means the number of bits, packets, and connections on networks is continuing its exponential rise. The more bits there are to secure, the easier it is for dedicated cyber criminals to hide. Here’s why:
1. A smart adversary understands stealth. Broad sweeps of IP address space and scanning for vulnerabilities across subnet boundaries are a sure way to get noticed – and then locked out. But careful use of compromised systems allows the attacker to gain a bigger presence in your network without being easily noticed.
2. Once inside, a smart attacker will need very little bandwidth to penetrate further and gain an understanding of your network’s most valuable assets, while simultaneously discovering your defensive blind spots. Taking only what is valuable reduces the chances of being noticed.
3. Now that the perpetrator knows your network and your defenses, and is cloaked as a legitimate user, careful exfiltration of sensitive assets becomes almost trivial.
The point is that there is a lot of legitimate traffic to hide in!
When I landed my first job securing networks, I picked through individual packets to find the communications that were suspicious, tracking movements by hackers as they went about their work. Subnets were small, uplinks were diminutive, and traffic volumes next to nothing. This has changed dramatically.
Fighting smart means figuring out what data is worth analyzing, because the bottom-up approach is no longer feasible. Instead we must shift our focus to providing the smart defender with the right level of detail at every step, so they can follow their instincts and focus in on attacker movement. And this forms the essence of cyber hunting, which is the art of defending your network by actively hunting down those who pose the biggest threat.
The two pillars of effective defense through cyber hunting are focus and visibility:
FOCUS – We recognize that the human brain is still the best anomaly detector ever built. But the data must be in the right form. Large swaths of packet traces are useless; instead we must focus. To see the forest, not just the trees, we must automate what can easily be automated. Full-fidelity, unsampled (1:1) NetFlow/sFlow/IPFIX has long been a key asset to the cyber defender. Adding packet-level metadata allows for a substantial improvement in analyst effectiveness. The longer we can enable the analyst to work at the flow level before they must resort to packet-level detail, the faster they will be able to work.
VISIBILITY – It is useless to try to defend a network that you cannot see into. If your telemetry stops at the border, you might as well stop trying. Building an infrastructure that allows the analyst to collect data at every network intersection enables detection and investigation of lateral attacker movement, reconnaissance, and data leakage.
Putting the right tools in place for visibility and focus saves the analyst much time. And time is the most valuable asset a cyber defender has. More threats investigated means more bad actors stopped in their tracks.
Cyber security is no longer about waiting for the light to go red. Instead it is about actively seeking and disabling threats. Gigamon has built the very foundation of this – a foundation that is both necessary and invaluable. Leveraging their data collection infrastructure, we can focus the analyst to fight smart, and give them the levels of visibility they need to zero in on the threat without being overwhelmed.
In a recent study, we discovered that over 53% of all FlowTraq users also use Splunk in some capacity. The majority indicate they stream FlowTraq security alerts generated from their flow data into Splunk, but fewer than half have actually installed the FlowTraq Splunk App.
We think this should change, and here’s why.
FlowTraq provides powerful alerts on network events – such as unwanted data leakage, DDoS attacks, and botnet traffic – using any of the regular flow formats (sFlow, NetFlow, or IPFIX). These alerts can be streamed in any number of ways, but most commonly they are sent to tools like Splunk, which is great, because analyst searches can quickly correlate network events with security detections in FlowTraq. But doing the reverse isn’t quite as easy. Not all security events – e.g., failed login, failed privilege escalation attempt, or unexpected software condition – may be detectable in network traffic and it’s not immediately straightforward to correlate such host-based events to possible evidence on the network.
The FlowTraq for Splunk App was designed to give the analyst this very power.
By presenting the familiar Splunk “search” interface together with a graph of the network traffic, network peers, and other relevant traffic details, the analyst can quickly correlate host-based events to traffic records stored in FlowTraq – in fact, all searching and time navigation are honored in the view. By way of example, as hundreds of failed login attempts are discovered for a host, a quick search would reveal both the pattern of network traffic and all the system events associated with it.
It works both ways. Say FlowTraq detects a possible data leak from a system in your network, and a syslog event was sent to Splunk. Upon investigating this FlowTraq alert, the analyst can broaden the scope to discover other relevant security events, such as a successful remote login and subsequent access to data resources. The successful login would immediately lead to evidence in the flow data of earlier connection attempts to other systems in your network.
This all starts to build the full picture of what’s going on: a remote attacker has gained valid credentials through a phishing attempt, and has slowly infiltrated dozens of systems in your network. Traces of successful logins and the many network connection attempts are brought together to provide the incriminating evidence. The data leak is discovered and stopped, and all other compromised accounts cleaned up.
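At its core, the host-to-network pivot described in this story is a join between event records and flow records on host and time. Here is a minimal sketch of that join; the event and flow schemas are illustrative assumptions, not the Splunk or FlowTraq data models:

```python
# Hedged sketch: given a syslog-style event (e.g. failed logins on a
# host), pull the flow records that bracket it in time.
def correlate(event, flows, window=300):
    """Flows touching the event's host within +/- `window` seconds."""
    return [f for f in flows
            if event["host"] in (f["src"], f["dst"])
            and abs(f["start"] - event["time"]) <= window]

event = {"host": "10.0.0.9", "time": 1000, "msg": "failed login x200"}
flows = [
    {"src": "198.51.100.4", "dst": "10.0.0.9", "start": 950},
    {"src": "198.51.100.4", "dst": "10.0.0.9", "start": 1100},
    {"src": "10.0.0.3",     "dst": "10.0.0.4", "start": 1010},
]
print(len(correlate(event, flows)))  # 2: both flows touching 10.0.0.9
```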
Show me a security analyst who doesn’t want fast answers.
If your security operations center is under attack right now, or you want to perform immediate queries on stored network traffic flow records to figure out the source or effects of an attack, performance is essential.
Network traffic flow analysis and security detection can be most effective – and extremely fast – when set up in a parallel processing server environment. Fundamentally, FlowTraq’s architecture allows you to handle an unlimited incoming flow rate by adapting the architecture to the available environment.
If the underlying server architecture is not properly configured, a security analyst who uses FlowTraq may experience longer query times and sluggish performance. Most of these limitations are a direct result of the realities of modern hardware platforms – and it helps to be aware of them so you can avoid running into them. Here are some limits you may be faced with when building your FlowTraq cluster and how to work around those limits:
25K Flows Per Second (fps)
FlowTraq recommends that a server with 8 cores and 64GB of RAM handle no more than 25,000 flow updates per second of peacetime traffic. Although modern hardware is capable of handling more flow updates (single servers are reportedly capable of 10x that level), there comes a point where an analyst feels the system becomes “too slow” for analysis tasks. Specifically, as forensic recall into a flow history is limited by the IO subsystem, 25Kfps is a reasonable rate for a medium-grade server. The easiest way to handle a higher flow update rate is to build a cluster of FlowTraq systems, each capable of 25Kfps.
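The sizing rule above reduces to simple arithmetic: one 8-core/64GB worker per 25,000 peacetime flow updates per second, rounded up. A minimal sketch:

```python
# Hedged sketch of the cluster sizing rule: one medium-grade worker
# (8 cores, 64GB RAM) per 25K peacetime flow updates per second.
import math

WORKER_CAPACITY_FPS = 25_000

def workers_needed(peak_fps):
    """Number of FlowTraq workers for a given peacetime flow rate."""
    return max(1, math.ceil(peak_fps / WORKER_CAPACITY_FPS))

print(workers_needed(60_000))   # 3 workers for a 60K fps network
print(workers_needed(800_000))  # 32 workers for an 800K fps network
```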
100K Flows Per Second (fps)
Since unpacking and processing flow records takes a fixed amount of CPU time, there is a limit beyond which an individual thread running on a single CPU core can no longer keep up – and it starts dropping records. On average processors this limit lies around 100Kfps, ranging toward 200Kfps for more powerful server processors. Since the operating system needs time to receive a NetFlow packet and put it in a queue to the FlowTraq application, it can be difficult to avoid this limit. The easiest trick is to open multiple ports for flow ingress and direct different exporters toward different ports. Each port will be handled by a separate thread, avoiding packet drops.
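The multi-port trick generalizes to any UDP-based collector: one socket and one receive thread per ingress port, so no single thread absorbs the entire stream. The port numbers below are illustrative, and this is a bare sketch of the pattern rather than FlowTraq's internals:

```python
# Hedged sketch: one UDP socket + one receive thread per ingress port,
# spreading flow-datagram processing across CPU cores/threads.
import socket
import threading

def receiver(port, counters):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(0.5)              # wake periodically to check for stop
    while not counters["stop"]:
        try:
            sock.recv(65535)          # one NetFlow/sFlow/IPFIX datagram
            counters[port] = counters.get(port, 0) + 1
        except socket.timeout:
            pass
    sock.close()

ports = [2055, 2056, 2057]            # point different exporters here
counters = {"stop": False}
threads = [threading.Thread(target=receiver, args=(p, counters))
           for p in ports]
for t in threads:
    t.start()
# ... exporters send flow datagrams; each port drains on its own thread ...
counters["stop"] = True
for t in threads:
    t.join()
```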
800K Flows Per Second (fps)
When a single FlowTraq portal is used to receive and re-distribute flow records to workers, it functions as a smart load balancer for the FlowTraq cluster. At 800,000 flow update records per second, the combined ingress and egress of flow records starts to approach 1Gbps, which means a single 1Gbps network card would saturate and records would start to drop. To avoid this bottleneck, you can either use multiple 1Gbps network cards, or move to 10Gbps networking hardware.
1.2M–3M Flows Per Second (fps)
When using 10 or 12 NetFlow ingress ports for a FlowTraq portal, some inherent limits in computer hardware start to become apparent. Contention for IO resources on the network hardware, as well as multiple CPU cores attempting to access RAM, put a fuzzy limit on the maximum amount of flow a single FlowTraq portal can distribute. This limit lies between 1.2Mfps and 3Mfps for the most modern hardware. Working around it is straightforward: use multiple load-balancing portals, and the limit disappears.
2M Flows Per Second (fps)
At 2,000,000 flow updates per second moving through the network cards, we observe a data rate limit in the PCIe 2.0 bus. Adding network hardware does not alleviate this limit, as the PCIe bus is the shared resource for all IO, making the limit unavoidable. Thankfully, the PCIe 3.0 standard is pretty ubiquitous right now – in our lab, we zoom right past the 2Mfps limit on a single distribution portal.
3.2M Flows Per Second (fps)
At 3,200,000 flow updates per second we have reached the limit of what a single-level FlowTraq cluster on average hardware is recommended to handle. At full fidelity, NetFlow reporting a 3.2Mfps flow rate represents over 3TB per second of network traffic flowing through your network.
(That’s a lot.)
Breaking through this limit and handling a higher flow rate with FlowTraq is actually rather trivial: add a layer to your cluster, or consider the use of multiple distribution portals, possibly geographically distributed, each with their own analysis worker cluster.
The bottom line
A single pane-of-glass view of very large networks is extremely powerful. And FlowTraq offers you this view, regardless of the size of your network.
Today we take advantage of multi-core processors, optimized data storage and clustering to provide the highest-performance, most scalable NetFlow processing available. As hardware gets faster, we will see real-time detection speed continue to fly, as well as the ability to run fast queries over longer periods of stored network traffic records. FlowTraq remains scalable, regardless of how large your traffic volumes grow, especially when provisioned with the underlying clustered servers as recommended.
Want to learn more? Get in touch or take FlowTraq for a test drive with a 14-day free trial.
In 2013, Richard Stotts and Scot Lippenholz from Booz Allen Hamilton wrote “Cyber Hunting: Proactively Track Anomalies to Inform Risk Decisions,” an article that captures the essence of the seismic shift needed to keep sensitive data on the correct side of your network border. The reality they described has since grown far beyond what they probably imagined when they wrote the article.
At FlowTraq, we’re frequently invited to help equip select individuals – chosen for their significant knowledge of networks and cyber threats – to be part of exclusive “cyber hunting” teams.
These are not the operators configuring switches and routers, and they are also not the analysts that pore over collected logs in SIEMs. These are small, specialized teams that actively seek things that are amiss, track intruders, and catalog the actions of known enemies.
We call them the “network cowboys.”
They spend their time chasing whatever piques their interest through complex networks, often comprising dozens of locations, and tens of thousands of systems. Their biggest discoveries are preceded by a simple: “Now that’s interesting…” And their biggest victories are never known, because the crippling data exfiltration did NOT happen. More and more organizations are coming to realize that a small and dedicated team of network cowboys may be their best bet in preventing the embarrassing data leaks that can be found in the newspapers every morning.
What sets the network cowboys and cyber hunters apart from regular incident responders is that they are given full rein to investigate ANY network activity – incident or no incident. They’re typically not constrained by playbooks or extensive procedure sets, and they’re not bogged down by an ever-growing inbox of “to-do” and “must-do” tasks. The experience of these individuals typically allows them to gain a deep understanding of traffic patterns, what is normal, and what is curiously abnormal. This is a skill that is acquired through years of incident investigation and experience. It cannot be learned in a classroom. And that’s why this is hard.
Hunt teams often struggle for sufficient access to the telemetry they need to chase a threat and fully understand it. Passive telemetry is the most valuable, because above all, hunters realize that an intelligent adversary cannot be stopped trivially: every move to block an adversary educates them about your capabilities, so act carefully. It is very important to pick the right weapons before you pull the trigger. Stealth is key, which means that much of the tracking data is gathered quietly and continuously. This is where FlowTraq has been a powerful ally.
NetFlow is ubiquitously available. All routers, switches, firewalls, and load balancers can export it. And few attackers would find it strange to see large streams of it crossing a network. Using FlowTraq’s scalable framework, network cowboys can collect a 1:1, unsampled, forensically accurate record of traffic across all network planes – which is then rapidly searched during the hunt. When tracking an adversary, speed is of the essence – therefore, NetFlow is the data source of choice. Answers on week-old data come back in seconds, months-old in minutes. Only when absolutely necessary will a full packet capture trace be needed, and the analyst will know exactly what to query a fullcap engine for when the time comes. The details are in the NetFlow.
Often single communications – small, periodic, or curiously timed – enable the analyst to discover stepping stones, command-and-control channels, and possible data exfiltrations. What sets the cowboy apart from your NSOC operators is that the cowboy chases the UNKNOWN threats, while the NSOC monitors for known malware, known C&C channels, and other known indicators. The communications that the cyber hunter is looking for typically do not set off any alarms anywhere – except for the mental alarms in the cyber hunter’s head. I must therefore stress that cyber hunters cannot simply replace your NSOC. They are a distinct and separate capability that you must consider deploying.
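One example of a “curiously timed” heuristic a hunter might run over flow records is a periodicity check: host pairs whose connection start times show suspiciously little jitter are a classic beacon/C&C tell. This is a hedged sketch under our own assumptions – the field layout and the threshold are illustrative, not part of FlowTraq's product:

```python
# Flag flows whose start times are suspiciously regular.
# The coefficient of variation of the inter-arrival gaps is near 0
# for machine-driven beacons and much larger for human traffic.

from statistics import mean, pstdev

def beacon_score(start_times: list[float]) -> float:
    """Lower = more periodic. Near 0 suggests an automated beacon."""
    gaps = [b - a for a, b in zip(start_times, start_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")           # too few samples to judge
    return pstdev(gaps) / mean(gaps)  # coefficient of variation

# A flow that fires every ~300 s with tiny jitter scores near zero:
regular = [0, 300.2, 600.1, 899.8, 1200.3, 1500.0]
assert beacon_score(regular) < 0.05   # likely beaconing, worth a look
```

None of this would trip a signature-based alarm – the individual connections are unremarkable – which is precisely why it belongs to the hunter rather than the NSOC.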
But even with the right mandate, telemetry, and tools, the job is not an easy one. As any defender knows: for every action there is a reaction. Blocking traffic, moving assets or sensitive data, etc. are only effective for a short time. The adversary will realize a defense was put in play, and will react accordingly. New vectors WILL be tried. If you build a wall, your adversary will figure out how to crawl under, climb over, or run around it. If you’re not continually watching, you surely will fall victim. And even when you are watching, you are never truly safe. But with a good team of network cowboys, you’re just that much safer.
Cyber hunting is no safari. Sometimes the tiger wins.