
Author: Dr. Vincent Berk

The Rise of the IoT Botnets and Mirai Attacks

By Dr. Vincent Berk | April 26, 2017



As any CISO will know, botnets are not a new phenomenon on the security landscape – far from it. Many of us recall the early botnets Sub7 and Pretty Park (both Trojan/backdoor-style remote control channels), which surfaced more than 15 years ago. Bots are nothing more than automated software that performs tasks. Depending on its type, complexity, and use, a bot can operate with varying degrees of sophistication and autonomy. Consider, for instance, that search engines use bots to crawl data, among myriad other very productive things. But bots can also be used for harm – most commonly in the form of networks of bots (botnets) that enslave computers (called zombies) to do their dirty work. In the age of IoT, we have seen a massive spike in the use of malware to infect internet-connected devices and turn them into botnet zombies.

But the big threat of botnets moving into 2017 and well beyond is their use in massive DDoS attacks that prey on vulnerable IoT devices – most notably the Mirai attacks that sparked global attention in late 2016. But what makes botnets and IoT such a potent combination for launching large DDoS strikes? And how did Mirai grow so quickly into such a serious threat?

On the surface, building a botnet is not an easy exercise. The internet is not particularly “homogeneous”; on it are lots of different computers and devices with different OS versions, connectivity, and patch levels. Not only are they all different in their makeup, they each have different vulnerabilities.

But the home routers, TV-tuner boxes, and thermostats that we see so commonly make up the Internet of Things. They are typically very homogeneous, are extremely hard to upgrade or patch, and are therefore especially vulnerable to being compromised into botnets. What is more, some ship with a default password – this has been discussed at length elsewhere, and fixing it is arguably one of the most direct ways to slow down this method of infiltration. Additionally, we have seen that the devices used in the major Mirai attacks were all installed on Fairpoint, Comcast, and AT&T home networks, so they are easy for hackers to scan for and target.

As the world gains IoT devices at an alarming pace, the threat of large-scale DDoS attacks like Mirai – and other, potentially more damaging botnets – will only increase. In fact, reports from Deloitte and others predict that Mirai attacks will see a pronounced increase in frequency and size in 2017 and beyond – a trend most likely fed by the release of the Mirai source code, first reported by Brian Krebs late last year. With new strains of botnets able to enslave a million or more connected devices, Mirai will be a fixture on the CISO’s radar for years to come.

There is no way to prevent DDoS attacks – and certainly no way to prevent a Mirai attack pushing the limits of 1 Tbps or more – but there are very real measures that organizations can take to plan for disruptions and mitigate these threats. It is not just IoT devices that are vulnerable to being enslaved into a zombie botnet; laptops, desktops, servers, and even printers are common targets. Organizations, and even ISPs, have a responsibility to keep botnet zombies to a minimum to help prevent the next massive-scale attack from hitting the internet. And there are tangible steps you can take to hunt zombies.

For more information about DDoS Mitigation and protecting against threats like Mirai, contact us to schedule a demo or try FlowTraq yourself free for 14 days.

Never Fall Victim to DDoS (A Starter Guide to Getting Your Infrastructure Threat-Capable)

By Dr. Vincent Berk | January 11, 2017



The demands on enterprise CISOs across all markets have never been greater. Not only are they responsible for managing the data security considerations of organizations in the midst of global digital transformation, but they must remain aware, educated, and flexible in the face of a constantly shifting threat landscape. DDoS attacks, internal threats, and other risks to business continuity are all concerns.

However, recent reports suggest that organizations are woefully underprepared to mitigate DDoS risks. Kaspersky Lab recently noted that 40 percent of businesses are unclear about how to protect themselves against targeted cyber attacks and DDoS – with many of them thinking that an IT partner will shield them from such threats. Further, another 30 percent think data center or infrastructure partners will protect them. While organizations all have different security considerations and resources, the numbers from this report add up to one undeniable conclusion: a large percentage of businesses are completely incapable of dealing with an inbound DDoS attack.

Not only is this thinking misguided – it is downright dangerous. Recent sub-10Gbps DDoS attacks have managed to persist for multiple days, or even weeks. Such cyber attacks are capable of completely shutting down a medium-sized office location. CIOs and CISOs are responsible for the IT security considerations of their organizations, and their work drives to the very heart of a company’s long-term success and viability. If you have not consciously thought about your DDoS mitigation strategy, you are making a grave mistake.

Understand the Threat Landscape

The first step in truly becoming prepared for external threats, including DDoS, is to understand the scope and significance of the threats to your organization.

DDoS floods your infrastructure with traffic to prevent a service from working. With the recent prevalence of poorly protected IoT devices, many of which are drafted into DDoS zombies, attackers marshal massive numbers of connected devices from around the world against the target, taking it offline for hours or days – or, for those truly unprepared, even weeks. The Krebs DDoS in late 2016 was large and sustained enough that Akamai was forced to stop protecting the website. And while the motivations for attacks are varied, their impact on revenue, brand reputation, and IT credibility is enormous.

Educate and Advocate

DDoS attacks like the one that made headlines last fall get the attention of the C-Suite. But how many CEOs know that experts predict DDoS attacks to grow in size, begin moving upstream, and also start targeting industrial control systems (ICS) and other targets that aren’t sufficiently prepared for DDoS mitigation? Knowing the threats and understanding how to keep the network from being disrupted is half the battle. Advocating for threat detection and prevention tools and technologies is vital.

As security software and systems are becoming increasingly better able to defend themselves, attackers resort to the crude and unsophisticated DDoS attack vector as an alternative. Although not as crafty as a true compromise, a DDoS attack can be equally destructive. Seriously disrupting communications to ICS systems controlling the power grid, industrial plants, and even DOT traffic control systems can have deadly consequences.

Be Proactive, Not Reactive

Waiting for your organization to be targeted, or procrastinating on security improvements for lack of budget or man hours, is a recipe for failure. The following are four steps to getting your infrastructure threat-capable.
1. Know your normal. The first step in detecting abnormal behaviors is to know and understand how your network functions day-to-day (see the sketch after this list).
2. Understand your threat surface. Understand the different types of DDoS attacks, and evaluate how each of them could impact your network.
3. Plan for failure. Know what your incident response looks like in case of a security breach, and develop best practices to mitigate network failures.
4. And finally: ask for help! If your network is in jeopardy from bad actors, don’t wait to start the first conversation with qualified security specialists.
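
To make “know your normal” concrete, here is a minimal sketch of the idea behind step 1: build an hourly traffic baseline from flow records and flag the hours that fall far outside it. The (timestamp, bytes) record shape and the 3-sigma cutoff are illustrative assumptions, not FlowTraq internals.

    # Build an hourly byte-volume baseline from flow records and flag
    # hours that deviate sharply from "normal".
    from collections import defaultdict
    from statistics import mean, stdev

    def hourly_baseline(flows):
        """flows: iterable of (unix_timestamp, byte_count) tuples."""
        buckets = defaultdict(int)
        for ts, nbytes in flows:
            buckets[int(ts // 3600)] += nbytes      # total bytes per hour
        return buckets

    def flag_anomalies(buckets, sigma=3.0):
        volumes = list(buckets.values())
        mu, sd = mean(volumes), stdev(volumes)      # the "normal" profile
        return [hour for hour, vol in buckets.items()
                if sd > 0 and abs(vol - mu) > sigma * sd]

Once a few weeks of history are in place, the flagged hours become natural starting points for steps 2 and 3.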

Conclusion

Getting your IT infrastructure battle-ready is easier than most CISOs think. If you have a network, it produces NetFlow, which gives you the visibility into your assets that you need to stave off bad actors. The threats to your organization have never been more real – and the risks will only grow for an organization that does not remain prepared and vigilant.


Ready to experience FlowTraq for yourself?

To learn more about DDoS Mitigation, reach out today to schedule a demo or try FlowTraq yourself free for 14 days.

Has your network been naughty or nice? Find out how Santa knows!

By Dr. Vincent Berk | December 19, 2016



If you don’t want to be caught off guard this Christmas and get a lump of coal, think twice about entrusting your network security to the watchful eyes of the mythical elves. Spending the holidays in the office with a PR nightmare, a badly compromised network, leaked customer data, and a serious career derailment is just not our idea of a jolly good time.

The bad guys (and Grinches) are out there. One way or another, some are going to get in. The longer they are in your network, the better the chances they get their hands on the juicy goods. This is not wisdom, it is fact. Across thousands of systems, from servers to cell phones to printers, there are vulnerabilities, phishing emails get clicked, compromised apps get installed, and even the most trusted of employees may turn rogue. If you are not working under the assumption that the inner sanctum of your network is already in the hands of a hacker, you are making a costly mistake.

And we get it: you have no red flags today, no evidence of compromise. So why worry? But keep in mind that “no evidence of network compromise” is NOT the same as “evidence of no network compromise.” The fact that you don’t know (yet) doesn’t mean it didn’t happen! Never seen Santa fly around in his sleigh? Does that mean he’s not there? Or does it mean you didn’t look hard enough?

At FlowTraq we understand the difference between guessing and knowing. One day a machine comes under the control of a bad guy. No matter how good the tricks to hide the hack on the system itself, the communication remains visible on the network, because “under control” means the hacker must communicate with the hacked machine. These are often small communications: ICMP sessions with more packets flowing back than coming in; TCP streams on high ports every 6 hours on the dot; UDP packets, one out, one back, each time. Using full-fidelity flow data, even the smallest command & control channel is recorded, no matter how big the network.
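
To illustrate, here is a minimal sketch of how one of those patterns – the “every 6 hours on the dot” beacon – can be spotted in flow records: near-identical gaps between connections from the same host pair suggest a timer, not a human. The record shape and the jitter threshold are illustrative assumptions.

    # Flag host pairs whose connection intervals are suspiciously regular.
    from collections import defaultdict
    from statistics import mean, pstdev

    def find_beacons(flows, min_events=6, max_jitter=0.05):
        """flows: iterable of (src_ip, dst_ip, unix_timestamp) tuples."""
        times = defaultdict(list)
        for src, dst, ts in flows:
            times[(src, dst)].append(ts)
        beacons = []
        for pair, ts_list in times.items():
            if len(ts_list) < min_events:
                continue
            ts_list.sort()
            gaps = [b - a for a, b in zip(ts_list, ts_list[1:])]
            # Low spread relative to the mean gap suggests an automated timer.
            if mean(gaps) > 0 and pstdev(gaps) / mean(gaps) < max_jitter:
                beacons.append(pair)
        return beacons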

Sometimes we catch a compromise because the hacker’s IP address is known. Sometimes we catch a new backdoor service sprouting up in your network. Sometimes it is the stream of valuables leaving your network on their way to the hacker. And sometimes a connection just “looked odd”. FlowTraq lets you know, and it lets you investigate what happened before and after.

And when you catch the bad guy: use FlowTraq’s full fidelity forensic history to find out who else on your network has been naughty in just the same way.  Sweep up, clean them out.  And if you must, make a list and check it twice.  That’s Santa’s way. Don’t just trust it. Know it.


 

Ready to experience FlowTraq for yourself?

Request a product demonstration or start your free trial now! Your security will never be the same.

 

FlowTraq and Verisign Collaborate to Help with Hybrid DDoS Detection

By Dr. Vincent Berk | April 21, 2016



There’s no denying that distributed denial of service (DDoS) attacks have become more sophisticated and frequent in recent years. In fact, in a recent report from Verisign*, DDoS activity in Q3 2015 increased to the highest it has been in any quarter over the last two years. Also in Q3 2015, the average attack size increased to 7 Gbps, 27 percent higher than the previous quarter.

These new realities require a new class of solutions that are built for today’s threats and environments. To combat the increasing number of threats caused by DDoS attacks, FlowTraq and Verisign have partnered, allowing joint customers to protect their networks through fast detection and mitigation. 

As traffic hits the network, FlowTraq uses advanced algorithms to quickly and effectively distinguish good traffic from harmful attacks. When a DDoS attack is detected, FlowTraq automatically alerts the Verisign DDoS Protection Service, passing on attack intel and giving customers the opportunity to request subsequent mitigation support in the Verisign cloud. Upon receiving an alert, Verisign can redirect internet traffic to their global network of DDoS Protection sites via the cloud, minimizing impact on the customer’s network.

To learn more about the partnership between FlowTraq and Verisign, download the Collaboration Between FlowTraq and Verisign for Hybrid DDoS Protection solution brief.

*Verisign, Distributed Denial of Service Trends Report Issue 2–3rd Quarter 2015 



Reviewing the latest FlowTraq Release – version Q1 2016

By Dr. Vincent Berk | March 1, 2016



Our new FlowTraq release (version Q1 2016) is updated with many improvements to reporting, alerting, and DDoS mitigation options, all features inspired by direct feedback from our existing users. This release supports easy creation of security event management reports, addition of notes to security events, manual approval or removal of mitigation actions, and null-routing/blackhole-routing of IPs under DDoS attack. Read the details below and contact us at info@flowtraq.com if you want to see a demo before you download the update.

Report on all or any subset of your alerts. You can now quickly generate a printable report on your alerts, including summary and averages information, as well as pie charts for Alerts by Partition, Traffic Group, Severity, and Alert Type. The selection of alerts and the time range to include uses the existing filtering system, so your reports can be customized to your liking. The reports are printer-friendly, and a direct link to each report can be emailed for convenient sharing. Exporting to CSV is now an option for custom processing of alert data.

A host of improvements to the alerting system is included as well: you can now add notes to annotate an alert, including @-style addressing and filtering. For example, adding @soc to an alert’s notes field provides an @soc filter choice on the Alerts page. You can also filter on Notes > Annotated to see all alerts with notes. Alert detail is also improved, featuring a printer-friendly alert format, and direct links to individual alerts make it easier to email and export the details of each security event.

To improve operator control of plugin/mitigation actions, we added the ability to place actions “on hold” as an initial state. This allows the mitigation action to be queued up, but not taken, until the operator decides the action is approved. This allows for a progressive mitigation scenario where lower-volume DDoS attacks require operator intervention, while higher-volume DDoS attacks are automatically remediated.  It is also possible to revoke plugin actions, and restart cancelled plugin actions.

Alert emails now support most types of email servers and authentication methods. More detailed information is sent in every email, including a URL that gives the operator a quick link back to the alert in FlowTraq.

A free BGP null-route plugin is now available that offers a lower-cost alternative for mitigating inbound DDoS attacks. Especially in data center and ISP environments, the best DDoS defense often is simply removing the target IP from the routed space, allowing traffic to other customers to continue without interruption.
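
As a rough illustration of how such a plugin works conceptually: one common way to trigger a remote black hole is to announce the victim /32 with the well-known blackhole community 65535:666 through a BGP speaker such as ExaBGP, which reads announcements from a helper process’s standard output. The next-hop and the victim IP below are assumptions for illustration, not the plugin’s actual implementation.

    # Announce a /32 null route via ExaBGP's text API (conceptual sketch).
    import sys
    import time

    def announce_blackhole(victim_ip, next_hop="192.0.2.1"):
        sys.stdout.write(
            f"announce route {victim_ip}/32 next-hop {next_hop} "
            f"community [65535:666]\n"
        )
        sys.stdout.flush()              # ExaBGP reads commands line by line

    if __name__ == "__main__":
        announce_blackhole("203.0.113.7")   # hypothetical IP under attack
        while True:
            time.sleep(60)              # stay alive so the route stays announced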

We have also added a new policy detection type for inbound packet rates on any port. The typical scenario allows the operator to set lower, generic trigger points, whitelisting only those service ports that are known to be in common use.
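
A minimal sketch of that policy logic, with assumed thresholds and an assumed (port, packets) record shape:

    # Alert on inbound packet rates per port, with a low generic trigger
    # point and higher limits for whitelisted, known-busy service ports.
    from collections import defaultdict

    GENERIC_LIMIT_PPS = 5_000                          # low, generic trigger
    WHITELIST = {80: 50_000, 443: 50_000, 53: 20_000}  # known service ports

    def check_packet_rates(records, window_seconds=60):
        """records: iterable of (dst_port, packet_count) within the window."""
        pps = defaultdict(float)
        for port, packets in records:
            pps[port] += packets / window_seconds
        return [(port, rate) for port, rate in pps.items()
                if rate > WHITELIST.get(port, GENERIC_LIMIT_PPS)]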

Finally, we added performance and usability improvements to many parts of the FlowTraq user interface.

Want to try a demo before you upgrade? Contact us today at info@flowtraq.com.

What it Means to Fight Smart

By Dr. Vincent Berk | February 26, 2016



We joined Gigamon on Wednesday for their NYSE closing bell ceremony, which marked the start of their #WeFightSmart initiative. The reasons this is a smart fight are technical in nature. Let me explain why we should all pay attention.
 
The number of connected devices, as well as the volume of data we consume, continues to grow every year. This means the number of bits, packets, and connections on networks is continuing its exponential rise. The more bits there are to secure, the easier it is for dedicated cyber criminals to hide. Here’s why:
 
1. A smart adversary understands stealth. Broad sweeps of IP address space and scanning for vulnerabilities across subnet boundaries are a sure way to get noticed – and then locked out. But careful use of compromised systems allows the attacker to gain a bigger presence in your network without being easily noticed.
 
2. Once inside, a smart attacker will need very little bandwidth to penetrate further and gain an understanding of your network’s most valuable assets, while simultaneously discovering your defensive blind spots. Taking only what is valuable reduces the chances of being noticed.
 
3. Now that the perpetrator knows your network and your defenses, and is cloaked as a legitimate user, careful exfiltration of sensitive assets becomes almost trivial.
 
The point is that there is a lot of legitimate traffic to hide in!
 
When I landed my first job securing networks, I picked through individual packets to find the communications that were suspicious, tracking movements by hackers as they went about their work. Subnets were small, uplinks were diminutive, and traffic volumes next to nothing. This has changed dramatically.
 
Fighting smart means figuring out what data is worth analyzing, because the bottom-up approach is no longer feasible. Instead we must shift our focus to providing the smart defender with the right level of detail at every step, so they can follow their instincts and focus in on attacker movement. And this forms the essence of cyber hunting, which is the art of defending your network by actively hunting down those who pose the biggest threat.
 
The two pillars of effective defense through cyber hunting are focus, and visibility:
 
FOCUS – We recognize that the human brain is still the best anomaly detector ever built. But the data must be in the right form. Large swaths of packet traces are useless; instead we must focus. To see the forest, not just the trees, we must automate what can easily be automated. Full-fidelity, 1-on-1 NetFlow/sFlow/IPFIX has long been a key asset to the cyber defender. Adding packet-level metadata allows for a substantial improvement in analyst effectiveness. The longer we can enable the analyst to work at the flow level before they must resort to packet-level detail, the faster they will be able to work.
 
VISIBILITY – It is useless to try to defend a network that you cannot see into. If your telemetry stops at the border, you might as well stop trying. Building an infrastructure that allows the analyst to collect data at every network intersection enables detection and investigation of lateral attacker movement, reconnaissance, and data leakage.
 
Putting the right tools in place for visibility and focus saves the analyst much time. And time is the most valuable asset a cyber defender has. More threats investigated means more bad actors stopped in their tracks.
 
Cyber security is no longer about waiting for the light to go red. Instead it is about actively seeking and disabling threats. Gigamon has built the very foundation of this – a foundation that is both necessary and invaluable. Leveraging their data collection infrastructure, we can focus the analyst to fight smart, and give them the levels of visibility they need to zero in on the threat without being overwhelmed.

Correlating NetFlow, sFlow, and IPFIX with Splunk

By Dr. Vincent Berk | February 25, 2016



In a recent study, we discovered that over 53% of all FlowTraq users also use Splunk in some capacity. The majority indicate that they stream FlowTraq security alerts generated from their flow data into Splunk, but less than half have actually installed the FlowTraq Splunk App.

We think this should change, and here’s why.

FlowTraq provides powerful alerts on network events – such as unwanted data leakage, DDoS attacks, and botnet traffic – using any of the regular flow formats (sFlow, NetFlow, or IPFIX). These alerts can be streamed in any number of ways, but most commonly they are sent to tools like Splunk, which is great, because analyst searches can quickly correlate network events with security detections in FlowTraq. But doing the reverse isn’t quite as easy. Not all security events – e.g., a failed login, a failed privilege escalation attempt, or an unexpected software condition – are detectable in network traffic, and it’s not immediately straightforward to correlate such host-based events with possible evidence on the network.

The FlowTraq for Splunk App was designed to give the analyst this very power.

By presenting the familiar Splunk “search” interface together with a graph of the network traffic, network peers, and other relevant traffic details, the analyst can quickly correlate host-based events to traffic records stored in FlowTraq – in fact, all searching and time navigation is honored in the view. By way of example, as hundreds of failed login attempts are discovered for a host, a quick search would reveal both the pattern of network traffic and all the system events associated with it.
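
Conceptually, the correlation works like this minimal sketch: given a host-based event such as that burst of failed logins, pull the flow records for the host in a window around the event so the network side of the story appears alongside the system events. The record shape is an illustrative assumption, not the FlowTraq or Splunk API.

    # Fetch flow records around a host-based event for side-by-side review.
    def flows_around_event(event_ip, event_ts, flows, window=3600):
        """flows: iterable of dicts with 'src', 'dst', 'ts', 'bytes' keys."""
        lo, hi = event_ts - window, event_ts + window
        return [f for f in flows
                if lo <= f["ts"] <= hi and event_ip in (f["src"], f["dst"])]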


It works both ways. Say FlowTraq detects a possible data leak from a system in your network, and a syslog event was sent to Splunk. Upon investigating this FlowTraq alert, the analyst can broaden the scope to discover other relevant security events, such as a successful remote login and subsequent access to data resources. The successful login would immediately lead to evidence in the flow data of earlier connection attempts to other systems in your network.


This all starts to build the full picture of what’s going on: a remote attacker has gained valid credentials through a phishing attempt, and has slowly infiltrated dozens of systems in your network. Traces of successful logins and the many network connection attempts are brought together to provide the incriminating evidence. The data leak is discovered and stopped, and all other compromised accounts cleaned up.

Disaster avoided.

If you’re using Splunk and FlowTraq, be sure to get the free FlowTraq for Splunk App. Not yet using FlowTraq? Request a free trial today.

Optimizing Your NetFlow Analysis System to Maximize Security Detection and Query Performance

By Dr. Vincent Berk | August 19, 2015



Show me a security analyst who doesn’t want fast answers.

If your security operations center is under attack right now, or you want to perform immediate queries on stored network traffic flow records to figure out the source or effects of an attack, performance is essential.

Network traffic flow analysis and security detection can be most effective – and extremely fast – when set up in a parallel processing server environment. Fundamentally, FlowTraq’s architecture allows you to handle an unlimited incoming flow rate by adapting the architecture to the available environment.

If the underlying server architecture is not properly configured, a security analyst who uses FlowTraq may experience longer query times and sluggish performance. Most of these limitations are a direct result of the realities of modern hardware platforms – and it helps to be aware of them so you can avoid running into them. Here are some limits you may be faced with when building your FlowTraq cluster and how to work around those limits:

25K Flows Per Second (fps)
FlowTraq recommends that a server with 8 cores and 64GB of RAM handle no more than 25,000 flow updates per second of peacetime traffic. Although modern hardware is capable of handling more flow updates (single servers are reportedly capable of 10x that level), there comes a point where an analyst feels the system becomes “too slow” for analysis tasks. Specifically, as forensic recall into a flow history is limited by the IO subsystem, 25Kfps is a reasonable rate for a medium-grade server. The easiest way to handle a higher flow update rate is to build a cluster of FlowTraq systems, each capable of 25Kfps.

100Kfps
Since unpacking and processing flow records takes a fixed amount of CPU time, there is a limit where an individual thread running on a single CPU core can no longer keep up – and it starts dropping records. This limit on average processors lies around 100Kfps, ranging toward 200Kfps for more powerful server processors. Since the operating system needs time to receive a NetFlow packet and put it in a queue to the FlowTraq application, it can be difficult to avoid this limit. The easiest trick is to open multiple ports for flow ingress and direct different exporters toward different ports. Each port will be handled by a separate thread, avoiding packet drops.
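
A conceptual sketch of the multi-port trick, assuming standard NetFlow-style UDP export (a production collector would do this with native threads, but the structure is the same):

    # One UDP listener thread per ingress port; point different exporters
    # at different ports so no single receive loop becomes the bottleneck.
    import socket
    import threading

    def handle_flow_packet(packet, addr):
        pass    # placeholder: parse the NetFlow/sFlow/IPFIX datagram, enqueue it

    def listen(port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            packet, addr = sock.recvfrom(65535)     # one datagram at a time
            handle_flow_packet(packet, addr)

    threads = [threading.Thread(target=listen, args=(p,))
               for p in (2055, 2056, 2057, 2058)]   # assumed ingress ports
    for t in threads:
        t.start()
    for t in threads:
        t.join()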

800Kfps
When a single FlowTraq portal is used to receive and re-distribute flow records to workers, it functions as a smart load balancer for the FlowTraq cluster. At 800,000 flow update records per second, the ingress and egress of flow records starts to approach 1Gbps. This means that a single 1Gbps network card would saturate and records would start to drop. To avoid this bottleneck, you can either use multiple 1Gbps network cards, or move to 10Gbps networking hardware.
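
A quick back-of-the-envelope check, assuming roughly 150 bytes per flow record on the wire (actual sizes vary by export format and template):

    RECORD_BYTES = 150          # assumed average on-wire flow record size
    fps = 800_000
    gbps = fps * RECORD_BYTES * 8 / 1e9
    print(f"ingress: {gbps:.2f} Gbps, ingress+egress: {2 * gbps:.2f} Gbps")
    # ingress: 0.96 Gbps, ingress+egress: 1.92 Gbps

At that rate, receiving and re-distributing the records together saturate a single 1Gbps interface, which is exactly why the bottleneck appears here.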

1.2Mfps
When using 10 or 12 NetFlow ingress ports for a FlowTraq portal, some inherent limits in computer hardware start to become apparent. Contention on IO resources for the network hardware, as well as multiple CPU cores attempting to access RAM, puts a fuzzy limit on the maximum amount of flow a single FlowTraq portal can distribute. This limit lies between 1.2M and 3Mfps on the most modern hardware. Working around this limitation is straightforward: use multiple load-balancing portals, and this limit disappears.

2Mfps
At 2,000,000 flow updates per second moving through the network cards, we observe a data rate limit in the PCIe 2.0 bus. Adding network hardware does not alleviate this limit, as the PCIe bus is the common shared resource for all IO, so on that generation of hardware the limit is unavoidable. Thankfully, the PCIe 3.0 standard is pretty ubiquitous right now – in our lab, we zoom right past the 2Mfps limit on a single distribution portal.

3.2Mfps
At 3,200,000 flow updates per second we have reached the limit of what a single-level FlowTraq cluster using average hardware is recommended to handle. At full fidelity, NetFlow reporting a 3.2Mfps flow rate represents over 3TB per second of network traffic flowing through your network.

(That’s a lot.)

Breaking through this limit and handling a higher flow rate with FlowTraq is actually rather trivial: add a layer to your cluster, or consider the use of multiple distribution portals, possibly geographically distributed, each with their own analysis worker cluster.

The bottom line
A single pane-of-glass view of very large networks is extremely powerful. And FlowTraq offers you this view, regardless of the size of your network.

Today we take advantage of multi-core processors, optimized data storage and clustering to provide the highest-performance, most scalable NetFlow processing available. As hardware gets faster, we will see real-time detection speed continue to fly, as well as the ability to run fast queries over longer periods of stored network traffic records. FlowTraq remains scalable, regardless of how large your traffic volumes grow, especially when provisioned with the underlying clustered servers as recommended.

Want to learn more? Get in touch or take FlowTraq for a test drive with a 14-day free trial.

Cyber Hunting is No Safari

By Dr. Vincent Berk | June 23, 2015



In 2013, Richard Stotts and Scot Lippenholz from Booz Allen Hamilton wrote “Cyber Hunting: Proactively Track Anomalies to Inform Risk Decisions,” an article that captures the essence of the seismic shift needed to keep sensitive data on the correct side of your network border. The reality they described is far greater than they probably understood when they initially wrote the article.

At FlowTraq, we’re frequently invited to help equip select individuals – chosen for their significant knowledge of networks and cyber threats – to be part of exclusive “cyber hunting” teams.

These are not the operators configuring switches and routers, and they are also not the analysts that pore over collected logs in SIEMs. These are small, specialized teams that actively seek things that are amiss, track intruders, and catalog the actions of known enemies.

We call them the “network cowboys.”

They spend their time chasing whatever piques their interest through complex networks, often comprising dozens of locations, and tens of thousands of systems. Their biggest discoveries are preceded by a simple: “Now that’s interesting…” And their biggest victories are never known, because the crippling data exfiltration did NOT happen. More and more organizations are coming to realize that a small and dedicated team of network cowboys may be their best bet in preventing the embarrassing data leaks that can be found in the newspapers every morning.

What sets the network cowboys and cyber hunters apart from regular incident responders is that they are given full rein to investigate ANY network activity – incident or no incident. They’re typically not constrained by playbooks or extensive procedure sets, and they’re not bogged down by an ever-growing inbox of “to-do” and “must-do” tasks. The experience of these individuals typically allows them to gain a deep understanding of traffic patterns, what is normal, and what is curiously abnormal. This is a skill that is acquired through years of incident investigation and experience. It cannot be learned in a classroom. And that’s why this is hard.

Hunt teams often struggle for sufficient access to the telemetry they need to chase a threat and fully understand it. Passive telemetry is the most valuable, because above all, hunters realize that an intelligent adversary cannot be stopped trivially, and every move made to block an adversary educates them about your capabilities – so act carefully. It is very important to pick the right weapons before you pull the trigger. Stealth is key, which means that much of the tracking data is gathered quietly and continuously. This is where FlowTraq has been a powerful ally.

NetFlow is ubiquitously available. All routers, switches, firewalls, and load balancers can export it. And few attackers would find it strange to see large streams of it crossing a network. Using FlowTraq’s scalable framework, network cowboys can collect a 1-on-1, un-sampled, forensically accurate track record of traffic across all network planes – which is then rapidly searched during the hunt. When tracking an adversary, speed is of the essence – therefore, NetFlow is the data source of choice. Answers on week-old data come back in seconds, months-old in minutes. Only when absolutely necessary will a full packet capture trace be needed, and the analyst will know exactly what to query a fullcap engine for when the time comes. The details are in the NetFlow.

Often, single communications – small, periodic, or curiously timed – enable the analyst to discover stepping stones, command & control channels, and possible data exfiltrations. What sets the cowboy apart from your NSOC operators is that the cowboy chases the UNKNOWN threats, while the NSOC monitors for known malware, known C&C channels, and other known viral information. The communications that the cyber hunter is looking for typically do not set off any alarms anywhere – except for the mental alarms in the cyber hunter’s head. I must therefore stress that cyber hunters cannot simply replace your NSOC; they are a distinct and separate capability that you must consider deploying.

But even with the right mandate, telemetry, and tools, the job is not an easy one. As any defender knows: for every action there is a reaction. Blocking traffic, moving assets or sensitive data, etc. are only effective for a short time. The adversary will realize a defense was put in play, and will react accordingly. New vectors WILL be tried. If you build a wall, your adversary will figure out how to crawl under, climb over, or run around it. If you’re not continually watching, you surely will fall victim. And even when you are watching, you are never truly safe. But with a good team of network cowboys, you’re just that much safer.

Cyber hunting is no safari. Sometimes the tiger wins.

Speeding MTTR: Accelerating the 4 Phases of Resolution

By Dr. Vincent Berk | May 20, 2015



Every minute counts when it comes to resolving a security breach or data leak—that’s why mean time to resolution (MTTR) is a key performance indicator. Because time is money, having a plan for resolving these types of network attacks quickly is paramount to preserving a company’s reputation, customer relationships, and bottom line.

When it comes to identifying and resolving security issues, there are four key phases: awareness, root cause analysis, remediation, and testing. Expediting each phase can help significantly reduce MTTR. Here, we outline these key phases and provide you with insight into the types of tools and visibility required to accelerate them:

1. Awareness
You can’t solve a problem unless you know you have one. So recognizing that you have an issue—awareness—is the first step. It can sometimes take weeks, even months to detect a data breach or data leak, because these sorts of cybercrimes often fly under the radar until they are finally discovered. Imagine if Sony, Anthem, Home Depot and Target had identified their data leaks sooner. How could it have helped reduce the fallout?

While many intrusion detection tools can help flag DDoS attacks, brute-force attempts, botnets and other external threats, data breaches are much more difficult to identify. Because of this, network administrators need to look beyond their intrusion detection tools in order to pinpoint a larger, more severe problem like a data breach before too much damage has been done.

Network behavior intelligence tools provide visibility into your network that detection tools can’t. They enable you to understand normal traffic patterns so that when anomalies do occur, you can spot them. By analyzing network traffic flow records and recognizing both hosts initiating a connection and hosts receiving data outside of normal thresholds (or exhibiting unexpected network behavior patterns), you can flag potential data leaks in real time.
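
As a minimal sketch of that idea, assuming a per-host history of daily outbound byte counts (the history store and the z-score cutoff are illustrative, not FlowTraq internals):

    # Flag hosts whose outbound volume sits far outside their own history.
    from statistics import mean, stdev

    def outbound_outliers(history, current, z_cutoff=4.0):
        """history: {host: [daily outbound bytes]}, current: {host: bytes}."""
        flagged = []
        for host, today in current.items():
            past = history.get(host, [])
            if len(past) < 7:           # too little history to judge
                continue
            mu, sd = mean(past), stdev(past)
            if sd > 0 and (today - mu) / sd > z_cutoff:
                flagged.append(host)    # possible data leak in progress
        return flagged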

For example, maybe your organization has been receiving suspicious emails at unusual times, or your network is running really slowly, or confidential documents have appeared outside of your company firewalls. No matter how good your detection tools are, data exfiltration is more often revealed through these types of secondary indicators, which may point to lateral movement in your network, making your organization an unknowing participant in an attack. But by providing a macro view of all aspects of your network infrastructure, network behavior intelligence tools let administrators detect potential problems on the spot.

2. Root Cause Analysis
Once you recognize that your company is potentially under attack, you have to determine the source so you can immediately mitigate it. In the root cause analysis phase you quickly piece together what you know and draw a conclusion. To use an analogy: you smell smoke in your house and your fire alarm goes off, but if you open the doors and windows to let the smoke out and take the battery out of the smoke alarm, you’re just addressing the symptoms, not solving the problem. You’re actually making it worse. In this case you know that a fire is the root cause of your problem and that you need to put it out, quickly (but you don’t know yet what started it – that investigation will be necessary later).

When it comes to performing a root cause analysis on a network breach, you need to first determine which computers are leaking. Are they still leaking? Has the hardware failed? Is there a faulty router? And you need a quick fix to “put out the fire.” Afterwards you will need to go back and perform forensics to gain a deeper understanding of what happened and how you can prevent it from happening again.

Identifying the root cause is the most challenging step to resolution because there is so much variability. In the case of a data breach, nothing is “broken” so you won’t be able to diagnose the problem with detection tools. But network visibility tools provide a window into your network, which enables you to correlate events, isolate specific suspicious IP addresses, and pinpoint the root cause of a leak much more quickly to speed MTTR. The tools and processes you have in place will determine how long this step will take and how much it will cost you.

3. Remediation
Once you’ve identified the root cause of the problem, you can fix it. This remediation phase is a quick fix to the problem to stop the leaking, but not a long-term solution. It’s more like addressing the symptom, not the problem.

Every network is different, and remediation is going to be determined by your primary network function. It could involve shutting down computers, creating new firewalls, resetting passwords, and any number of other things to stop the leaking and get your network operational again. But remediation is not meant to be a long-term solution. You will need to take the time afterwards to go back and find out what caused the leak in the first place through forensics. It’s important to keep in mind that remediation is never a one-size-fits-all solution – it’s more of a many-to-many solution – but network visibility tools provide you with much more knowledge about how to find the problem and how to remediate it.

4. Testing
Once the problem has been remediated you need to test your solution. Look at historical data, current traffic patterns, and whether the data leakage is still occurring. If it is, you likely haven’t found the root cause – or maybe you have, but you haven’t effectively remediated it. You will need to repeat steps 1-3 until testing confirms that the solution is working.

The unfortunate reality is, once people scramble to put the fire out, they often don’t spend a lot of time on the aftermath of the breach—that’s why it’s so important to go deeper through forensics and ensure that you’ve developed a more permanent solution to keep your network safe in the future.

THE RIGHT SOLUTION FOR SPEEDING TIME TO RESOLUTION

At almost every organization, network downtime is inevitable. But while detection tools are important for flagging certain types of network security threats, when it comes to identifying data leaks and security breaches, you need network visibility tools. Instead of wasting time guessing what could have gone wrong, you can use network data to pinpoint the precise time that the problem first occurred. But not all network visibility tools are created equal:

Full packet capture solutions
Full packet capture solutions let you look deep inside the packets, and provide a granular view of data, but they do not provide a long enough history to be useful in an investigation, nor are they fast enough for real-time analysis. Full packet capture solutions cannot scale to the level or speed that’s required for root cause analysis or forensics and they cannot provide you with the network data you need beyond a few weeks. Without sufficient traffic history you won’t be able to answer questions such as: Where did the leak originate? Where was the data sent? And are you still leaking? You need full-fidelity tools for that.

Full-fidelity tools
After a data breach or security leak, you will need full network data recall that goes back far enough to stage a meaningful incident response and forensic investigation to understand what happened, how long it was happening and what other systems may have been affected. Full-fidelity network visibility tools are much more thorough and can help speed MTTR because of their fast detection, scalability, and access to complete historical data. This allows network administrators to analyze when the problem first occurred and trace it back to its source(s). If you want to protect your organization from the damage of a potential data leak, full-fidelity network visibility tools will be your best defense.

PLAN AHEAD

As perpetrators develop more sophisticated ways to steal sensitive data and breach our security systems, organizations must develop more sophisticated defenses. Understanding normal patterns of behavior on your network through full-fidelity network visibility tools is your best strategy so that when anomalies occur, you can spot them immediately, act fast and stop the leaking as quickly as possible.
