In part two of this series, we discussed how to implement the identification and containment steps in the DDoS mitigation management framework. In part three, we will look at the steps you should take once you have begun attack mitigation. In particular, we will discuss the importance of performing a proper postmortem in order to identify opportunities for continuous improvement.
In a perfect world, our containment steps will remove 100% of the DDoS attack’s impact. In reality, this is rarely the case.
You need to go through your list of services and identify how much downtime the business can accept. For services that are not tolerant of downtime, you need to identify backup processes. Your Business Continuity Plan (BCP) and Disaster Recovery Plan (DRP) should have contingencies for dealing with a long-term DDoS attack. Again, we’ve seen DDoS attacks last as long as 12 days in the wild. So the business needs to be able to continue operating over extended periods of time.
In many ways, this is the most difficult step to get right, as there are so many variables. We tend to forget just how reliant we are on Internet connectivity for our day-to-day business operations. For example, what about remote employees? Will they be able to work if the VPN is offline? Do meetings get scheduled through a third-party SaaS service, meaning employees may not be able to book meeting rooms or receive notifications of upcoming events?
Once the DDoS attack subsides, we can breathe a sigh of relief. Or can we? A common attack technique is to pivot between multiple attack vectors. For example, an attack may start as an ICMP flood. Once you take steps to mitigate it, the attack transforms into a UDP reflection attack. Once you get that under control, an Internet of Things (IoT) bot army starts exhausting TCP connections. So we need to be sure we have actually entered recovery mode before we relax.
Attackers like to leverage a bit of social engineering when they launch an attack. For example, they may launch the first wave at 5:15 PM local time on a Friday. This is in the hopes that many of the people who are responsible for mitigating attacks will be in their cars heading home for the weekend. Once that attack subsides, they wait about an hour, which is just enough time for someone to give the all clear signal, before launching a second wave of the attack. This is in the hopes of again catching the team off guard. They then wait until 11:00 PM local time or later to attack again, in the hopes of catching people sleeping. This pattern increases the time it takes to mitigate each specific attack vector. It is also designed to disrupt responders mentally so they are more likely to make a mistake.
When you think you have entered a recovery phase, analyze the mitigation steps that have been taken. With some, you may be able to leave the mitigation technique in place with very little impact (like filtering a few source IP addresses). Some may result in performance degradation during normal periods of operation (like rate limiting). Still others may have a financial impact if left in place when they are not really needed (like using a third-party scrubbing service). Identify the long-term impact of each mitigation technique and determine a service cycle for removing them. As they are removed, be sure to closely monitor your metrics for any unexpected activity.
If “Preparation” is the most important DDoS mitigation management step, “Lessons Learned” is a close second. Attackers are constantly evaluating the success of their attacks in order to make future attacks more effective. You need to be doing the same with your protection techniques. Every DDoS event should be followed up by a postmortem. This should be attended by the people involved with mitigating the attack, as well as those authorized to make changes and improvements.
The postmortem should be run in a similar fashion to a Scrum retrospective. You want to have an open and honest discussion about what went right and what needs improvement. The focus should be on process improvement, not on placing blame on any one team or team member. When a human makes a mistake, you can usually point to a process that needs more refinement or improvement. The trick is ensuring that mistakes only happen once.
As mentioned earlier, DDoS attacks tend to come in waves. You may be asking: why don’t attackers simply hit you with everything they have right off the bat? This is because attackers leverage mitigation time to extend the impact of their attack. If they hit you with everything all at once, you would simply deploy multiple techniques simultaneously to mitigate the attack. By spreading out their attacks, they can increase the attack’s impact by leveraging the amount of time it takes you to detect the attack, identify the attack’s unique characteristics, and implement a mitigation solution.
Think of it this way. Let’s say it takes you 30 minutes to go from initial detection to full mitigation. If I hit you with three attacks at once, the magnitude will be high but the impact will only be felt for 30 minutes. If I launch each attack 30 minutes apart, I’ve now disrupted your network for a full hour and a half. This is more than enough time for your customers to notice, post the outage on Twitter, have it confirmed and picked up by bloggers and analysts, and so on.
When you encounter an attack vector for the first time, the process is going to be fairly manual. An analyst will need to verify the attack, unique properties will need to be extracted, and runbooks must be followed in order to implement a proper mitigation strategy. However, once you’ve gone through the full process, there is absolutely no reason not to automate future responses. In fact, a key discussion point at each postmortem should be “How can we accelerate our response time if this same attack vector is seen in the future?”.
Consider the “three attacks over an hour and a half” scenario discussed earlier. While a half hour is actually a pretty good response time with a manual process, over multiple attack waves this can create a serious impact to the business. It is not hard for an attacker to change their attack vector every 30 minutes. You can easily find yourself in a position where the bad guys are constantly outflanking you. However, by automating the process, you can easily reduce your response time to five minutes or less. This now makes it far more difficult for the attacker to see if their attack vector is having any impact at all. They no longer have a functional block of time during which they can evaluate your points of vulnerability. So automation not only improves your response, it makes implementation more difficult for those launching the attack.
So a good DDoS mitigation management system is key to reducing response time. A proper system should have the ability to monitor traffic patterns for those unique characteristics identified by your analysts. The system should then be able to follow a defined set of rules to identify the proper mitigation technique to be used for this specific situation. Finally, the system should be capable of implementing your runbook steps by deploying the proper mitigation response. This reduces your response time as much as possible. It also ensures that your analysts stay focused on emerging threats rather than constantly revising old ones.
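As a thought experiment, the rule-following behavior described above can be sketched in a few lines of code. Everything here is illustrative: the rule predicates, thresholds, and runbook actions are made-up examples, not features of any specific product.

```python
# Hypothetical sketch: a minimal rules engine that maps attack
# signatures (identified during past postmortems) to runbook actions.
# All names, thresholds, and actions are illustrative assumptions.

MITIGATION_RULES = [
    # (predicate over an observed traffic summary, runbook action)
    (lambda t: t["proto"] == "udp" and t["src_port"] == 389,
     "drop inbound UDP source port 389 at the border (CLDAP reflection)"),
    (lambda t: t["proto"] == "udp" and t["src_port"] == 123,
     "rate-limit inbound NTP (possible NTP amplification)"),
    (lambda t: t["proto"] == "tcp" and t["syn_ratio"] > 0.9,
     "enable SYN cookies / divert to scrubbing (SYN flood)"),
]

def select_mitigation(traffic_summary):
    """Return the first runbook action whose rule matches, else None."""
    for predicate, action in MITIGATION_RULES:
        if predicate(traffic_summary):
            return action
    return None  # unknown vector: escalate to an analyst

observed = {"proto": "udp", "src_port": 389, "syn_ratio": 0.0}
print(select_mitigation(observed))
```

The important design property is the fallback: anything that matches no rule is escalated to a human, so automation only handles vectors your analysts have already characterized.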
In this series, we discussed the different steps included in the DDoS mitigation management framework. We discussed the importance of proper preparation, as well as performing a postmortem. We also discussed the importance of reducing mitigation time as much as possible, as well as implementing a proper DDoS mitigation management system.
In part one of this article, we discussed how the DDoS mitigation management framework closely mirrors standard incident handling procedures. We discussed at length the importance of preparation and performing a risk assessment. In part two of this series, I’ll cover identification and containment: our initial reaction steps once a DDoS attack has taken place.
In order to mitigate a DDoS attack, we first need to be able to identify when one is taking place. While this sounds pretty straightforward, it can actually be fairly nuanced. I’ve seen a few situations where what was thought to be a DDoS attack turned out to be an extremely successful marketing campaign.
In order to identify abnormal traffic patterns, you need to first understand what is normal for your network. I see this mistake quite frequently: no one looks at traffic until there is a problem. However, when the traffic is finally analyzed, it is difficult to determine which traffic patterns are perfectly normal and which are contributing to the issue at hand. So we need to start by creating a baseline of normal traffic patterns for our network. There are many tools on the market that can help you monitor your network. Which tools are best will depend on your configuration. So rather than focusing on tools, I’ll focus on the metrics you should be monitoring regardless of which tools you decide to use.
Monitoring should take place at any junction points between your systems and potential attack sources. For most organizations, this will be our Internet connection(s). However, for some environments, it may include internal systems as well. Examples would be computers located in unsecured areas, permitting public access to our wireless network, permitting individuals to connect their personal systems or having few protections in place to prevent internal systems from becoming compromised. For critical servers, we will also want to be able to monitor performance on the host itself. Remember that some in-protocol attacks can cause only a slight increase in network bandwidth, while still taking a server offline.
Critical metrics to monitor include:
All of these metrics can be plotted over time.
Once you start collecting metrics, leverage predictable patterns to identify your baseline. You can turn these into alerts to prove out your assumptions. For example, if you determine that normal traffic for your Web server does not exceed 20 Mb/s, configure an alert to trigger at 25 Mb/s. If the alert does not trigger under normal traffic patterns, you’ve proved out your theory. If there are specific periods where your Web server traffic triggers the alert, but investigation shows the traffic is still normal, bump up your alert threshold. Note that networks are an evolving entity. As requirements on your network change over time, expect to revisit your alert thresholds.
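The baseline-then-alert idea above can be reduced to a very simple check. The 25 Mb/s threshold below is the example number from the text; how you sample throughput will depend on your monitoring stack.

```python
# Illustrative sketch of baseline alerting: flag any throughput sample
# that breaches the alert threshold set above the observed baseline.

def check_threshold(samples_mbps, threshold_mbps=25.0):
    """Return the samples that breach the alert threshold."""
    return [s for s in samples_mbps if s > threshold_mbps]

# Normal traffic for this web server stays under ~20 Mb/s ...
normal_day = [12.1, 18.4, 19.9, 15.0]
# ... so a burst well past the threshold warrants investigation.
attack_window = [12.1, 55.7, 80.2, 19.9]

print(check_threshold(normal_day))     # → []
print(check_threshold(attack_window))  # → [55.7, 80.2]
```

If the alert fires under traffic that investigation shows to be legitimate, that is your cue to raise the threshold, exactly as described above.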
Once we have an established baseline, we can sit back and wait for our alerts to trigger. When they do, a deeper analysis must be performed. Again, there are many tools available to help you deep dive into the packets. A trained analyst should be able to spot the attack vector being used. The idea is to extract as many traits about the attack pattern that distinguishes it from normal traffic. This could be the source IP address, the protocol being used or even data contained within the payload. The more unique properties that can be identified, the easier it will be to mitigate the attack.
Once we have identified the impact the attack is having on our network, as well as the properties that make the attack vector unique versus regular traffic, we can move towards containing the effect of the attack. What is being affected will impact where we can try and contain the attack. Is it a single server or our entire link to the Internet? If it is just a single server, we may be able to contain the attack locally. If it is our Internet link, we will need to implement containment somewhere on the Internet side of our link.
Where we implement containment will also drive what options are available to us.
Luckily, there are a number of options available to minimize the impact of a DDoS attack. Each has its strengths and weaknesses. There is no “one size fits all” here, as different attack vectors will require different mitigation techniques in order to reduce their impact.
If the impact is localized, meaning that your Internet link is okay but specific servers are being impacted, you may be able to leverage your border router or firewall to minimize the attack. This may only be possible if the attack has distinct characteristics such as a small defined group of source IP addresses.
If the attack is localized but not from a small group of source IP addresses, you may be able to resolve the problem using rate limiting. Many firewalls give you the ability to restrict traffic to certain throughput levels. While the attack will still get through, this may reduce the load on your servers enough that they can respond to legitimate data requests.
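The rate limiting described above is typically implemented in the firewall itself, but the underlying mechanism is usually a token bucket. Here is a minimal sketch of that mechanism; the rate and burst parameters are illustrative.

```python
# A minimal token-bucket rate limiter, the mechanism commonly behind
# firewall rate limiting. Parameters here are illustrative only.

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec   # tokens replenished per second
        self.capacity = burst      # maximum burst size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now, cost=1):
        """Return True if a packet of the given cost may pass at time `now`."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate_per_sec=10, burst=5)
# A burst of 8 packets at t=0: the first 5 pass, the rest are dropped.
results = [bucket.allow(0.0) for _ in range(8)]
print(results.count(True))  # → 5
```

Note the trade-off the text mentions: the limiter drops attack and legitimate traffic alike once the bucket is empty, so it reduces server load rather than cleanly separating good traffic from bad.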
If the target of the attack is your Web server, you may be able to mitigate the attack by deploying multiple proxies or a CDN. This is actually a good idea anyway, as a good CDN will make your Web site more responsive from all corners of the globe. So even when an attack is not taking place, you gain the benefit of improved performance. There are many third parties and public cloud providers that offer CDN services.
If your link has become saturated, you will need to resolve the problem on the Internet side of your link. Many business class ISPs offer DDoS mitigation services. Your traffic is run through dedicated hardware that removes the attack traffic prior to reaching your Internet link. When this works properly, you should immediately see traffic levels return to normal.
There are a number of companies that offer traffic scrubbing services. These work by modifying DNS or BGP so that all traffic is routed through a scrubbing center prior to reaching your network. This way your traffic is cleaned up prior to being routed to your network. The BGP implementation tends to be far more effective than DNS. This is because it covers all of your IP address space, not just systems with a DNS entry.
Clearly, you need to consider containment during the preparation phase of DDoS mitigation management. The worst time to go looking for a mitigation solution is when an attack is already taking place. Consider all of the attack vectors that represent an unacceptable business risk and implement processes to contain them accordingly.
You should now have a better understanding of how to implement the identification and containment steps in the DDoS mitigation management framework. In part three of this series, we will discuss what to do once the attack has been mitigated and how to ensure we are continually improving our processes.
While DDoS mitigation management can seem like a bit of a black art, it actually follows standard incident handling procedures. Preparation and process execution are key. You want to confirm that you have a good plan in place before the worst occurs. You also want to ensure that you have a good post mortem process so that you are constantly improving. In this three part series, I’ll walk you through the DDoS mitigation management framework.
As mentioned, DDoS mitigation management follows standard incident handling procedures. This includes a number of required steps, which help bring our network back into normal operation. These steps include:
These steps are shown in Figure 1. Note that this is a cyclical process. In order to achieve continuous improvement, we assess each mitigation cycle in order to refine and revise the process.
It cannot be stressed enough how important preparation is to DDoS mitigation. Consciously or not, this step comes down to performing a risk assessment. We evaluate the potential attack vectors that can be used against us, and make a choice as to how much to invest in mitigating those vectors. I say “consciously or not,” because many organizations choose to ignore the problem until the worst occurs. This ensures that the DDoS attack will have maximum impact. We obviously need to weigh the risk of a DDoS attack against the other risks to the business. However, it is better to make conscious choices than it is to simply ignore the potential problem.
Unfortunately, there are many variations of DDoS attacks seen in the wild.
Attackers have become extremely creative in identifying methods to flood our network with bogus traffic. The full spectrum of how these attacks can be executed is too extensive to cover here. However, for the purposes of defending our network, we can place DDoS attacks into one of two categories: out-of-protocol and in-protocol.
Out-of-protocol attacks are when the attacker sends a traffic pattern at your network that is not normally consumed by the target system. For example, a current popular vector is to bounce and amplify traffic off of exposed connectionless LDAP servers. This results in inbound UDP traffic with a source port of 389 and a destination port of a random upper port number. Out-of-protocol attacks are the easiest to identify and mitigate because they do not look like normal traffic patterns.
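Because the CLDAP reflection signature just described is so distinctive, flagging it from flow records is almost trivial. The flow-record field names below are assumptions about your collector’s output format.

```python
# Sketch: flag inbound UDP flows matching the CLDAP reflection
# signature described above (source port 389, random high destination
# port). Flow-record field names are illustrative assumptions.

def looks_like_cldap_reflection(flow):
    return (flow["proto"] == "udp"
            and flow["src_port"] == 389
            and flow["dst_port"] >= 1024)

flows = [
    {"proto": "udp", "src_port": 389, "dst_port": 49152},  # reflection
    {"proto": "tcp", "src_port": 443, "dst_port": 51000},  # HTTPS reply
]
suspect = [f for f in flows if looks_like_cldap_reflection(f)]
print(len(suspect))  # → 1
```

This is exactly why out-of-protocol attacks are the easy case: a single header-level predicate separates them from legitimate traffic, with no payload inspection required.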
In-protocol attacks are attacks against our systems that can, on the surface, look like regular traffic patterns. For example, some attackers have built extremely large botnets. One possible attack vector is to simply have a large bot army start generating connections to a target’s Web server. These look like normal inbound requests for data. However, the intent is to get the Web server so busy that it is unable to respond to legitimate data requests.
Another possible vector is to combine in-protocol traffic with services that are vulnerable to amplification. For example, NTP is used by servers to synchronize time. An attacker can leverage reflection techniques to send a flood of NTP traffic at a target network. This can make it difficult to determine which NTP traffic is legitimate and which is part of the flood. DNS is another common service that is used by everyone. Attackers can leverage reflection to send a large amount of DNS traffic to your DNS servers. Again, it can be difficult to distinguish which traffic is legitimate and which is not.
In-protocol attacks are far more difficult to identify and mitigate than out-of-protocol attacks. Because they target services you are supporting, you typically need a way to deeply analyze the traffic to identify which traffic is illegitimate. Mitigation also becomes more difficult, because the unique patterns that permit you to identify the attack traffic may be embedded in the data payload.
Now that you understand how a DDoS attack can be used against your network, you need to identify the risk this poses to your business. “Risk” is a product of two factors: probability and magnitude. Probability can be evaluated by looking at your business model. Are you in a high-risk vertical like gambling, games or education? Are you a small business that no one has heard of, or a large organization with global visibility? Has your organization or one of its leaders expressed political, sociological or economic opinions that could draw ire from certain demographics, cultures or beliefs? Has controversial information been published regarding your organization? All of these criteria factor into your probability of being targeted by a DDoS attack.
The other component of risk is magnitude. In other words, what would be the impact to the business if a DDoS attack was to occur? You first need to look at your points of connection with the Internet. Are servers located on-site, in an external data center with a separate Internet connection, or in a public cloud? If your servers are on-site, a single attack could bring down all inbound and outbound services. If your servers are external, an attack may bring down your customer-facing services or your corporate connection to the Internet, but not necessarily both at the same time.
Next, you need to create an inventory of both inbound and outbound services. Examples of inbound services may be:
Outbound examples may be:
Finally, we need to evaluate how the loss of each of these services would impact your business model. For example, what is the impact if sales cannot reach out to prospects? Do we have SLAs with financial implications if customers cannot access our portal? Which of these services have backup or alternate solutions to fall back on? For example, let’s say your finance team uses an external SaaS service for tracking income and expenditures. Is there some other process they can fall back on if the SaaS service becomes unavailable due to a sustained DDoS attack? It is worth noting that DDoS attacks are becoming more frequent. When they do occur, it is not uncommon to see them last a day or more. In the past year, we have seen DDoS attacks that have lasted up to 12 days. So when assessing impact, you need to evaluate effect over time.
Now that we have a handle on probability and magnitude, it is time to translate this into potential financial losses. Don’t forget to factor in potential customer churn into the financials. While your services have been offline, your customers have been frustrated by the impact to their own business model. It is not uncommon to see competitors leverage this opportunity to go after your more lucrative customers. So while we are adding up the direct cost of the attack to our business model, we also need to factor in potential revenue loss over the near future. Clearly DDoS attack mitigation seems expensive until you evaluate the full scope of potential losses that can occur.
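The "risk = probability x magnitude" framing above translates directly into a back-of-the-envelope loss model. Every figure in this sketch is a made-up example for illustration, not guidance on what your numbers should be.

```python
# Back-of-the-envelope loss model for risk = probability x magnitude.
# All dollar figures and probabilities are illustrative assumptions.

def expected_annual_loss(attack_probability, direct_cost, churn_cost):
    """Probability-weighted cost of a DDoS event over a year."""
    return attack_probability * (direct_cost + churn_cost)

# Example: a 25% chance of a significant attack this year, $80k in
# direct outage cost, and $120k in projected near-term customer churn.
loss = expected_annual_loss(0.25, 80_000, 120_000)
print(loss)  # → 50000.0
```

Even with conservative inputs, including the churn term usually changes the conclusion: mitigation spend that looked expensive against direct outage cost alone can be well under the expected loss once customer attrition is counted.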
As we move through the additional DDoS mitigation management steps, keep in mind that they are all rooted in preparation. For example, as we talk about identification and containment, keep in mind that acquiring the proper tools and developing the appropriate techniques needs to begin in the preparation phase, before any attack actually occurs. We also want to ensure that preparation includes proper training and simulation drills so that everyone is clear on techniques and procedures.
It should be clear that much of DDoS mitigation management comes down to proper preparation. It is vital that you understand the business risks so that a proper mitigation strategy can be implemented. In part 2 of this article, I’ll go through the identification and containment steps so that you have a clear understanding of how to initially respond to a DDoS attack.
One of the most embarrassing moments as a security analyst or operator is finding out from someone outside of your company that one of your internal hosts has been compromised. While not every attack can be prevented, as the security caretaker of your infrastructure you should at least be able to identify when the worst has occurred. You can do this with a network forensic tool.
Many attackers take steps to hide their tracks once they break in. Network forensics can provide valuable insight that may be difficult, if not impossible, to recover from the compromised host itself without a cyber hunting tool in place.
Let’s walk through how FlowTraq can be used alongside your current security tools to paint a more complete picture of a security incident.
For example, you may pull together the appropriate tools, but then fail to take the time to configure them properly to support a later forensic investigation. Don’t wait until an incident occurs to sift through traffic and identify what is normal and what is not.
Leverage FlowTraq’s ability to identify traffic patterns that are considered “interesting”. For example, consider your corporate Web server: What type of outbound connectivity does it need to support? In other words, we expect to see inbound HTTP and HTTPS connections, but, are there any legitimate outbound traffic patterns being used?
The first step is to identify legitimate patterns in advance in order to establish an appropriate baseline.
If your Web server is typical, it generates very few outbound sessions. It may leverage NTP to sync time, HTTP or HTTPS to install patches, and that’s about it. All other connection establishment is generated inbound to the system. Once normal traffic patterns from the Web server are understood this exercise can be repeated with each exposed server.
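The baseline exercise just described amounts to a short allowlist of expected outbound traffic. The port numbers below are the standard ones for NTP, HTTP, and HTTPS; the flow-record shape is an assumption for illustration.

```python
# Sketch of the web-server outbound baseline described above: allow
# NTP time sync and HTTP/HTTPS patch downloads, flag everything else.
# Flow-record field names are illustrative assumptions.

ALLOWED_OUTBOUND = {("udp", 123), ("tcp", 80), ("tcp", 443)}

def outbound_anomalies(flows):
    """Return outbound flows that fall outside the expected baseline."""
    return [f for f in flows
            if (f["proto"], f["dst_port"]) not in ALLOWED_OUTBOUND]

flows = [
    {"proto": "udp", "dst_port": 123},  # NTP time sync: expected
    {"proto": "tcp", "dst_port": 443},  # HTTPS patch download: expected
    {"proto": "tcp", "dst_port": 22},   # outbound SSH: investigate!
]
print(outbound_anomalies(flows))  # → [{'proto': 'tcp', 'dst_port': 22}]
```

Repeat the same exercise per exposed server: each one gets its own short allowlist, and anything outside it becomes an alert candidate.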
The next step is to leverage FlowTraq’s Policies to identify patterns that fall outside of this baseline.
FlowTraq can be configured to alert you if any system in the “External Servers” traffic group transfers more than 10MB out to the Internet. With so much flexibility, you could also choose to set thresholds for total connections, activity on specific ports, or some combination of these options.
Consider a Policy that generates an alert when any of your exposed servers transfers more than 10MB outbound in a single session. Note that defining triggers is not an exact science. One of the reasons to complete this prior to an incident is to see if any false positives get triggered. If they do, simply tweak your thresholds to eliminate the noise. Note that networks are evolving entities, so from time to time it’s wise to revisit the threshold settings. Once Policies are created, configure FlowTraq to submit alerts to a centralized logging solution. This permits consolidation of all alerts in one place for consistent handling based on severity level.
If your Policies are set up correctly, an alert will trigger when suspicious activity hits your network.
A possible example is shown in Figure 1. Note that an internal system has sent approximately 400MB of data to some host out on the Internet. You should not expect to see outbound SSH sessions from the internal servers, let alone ones that transmit large amounts of information. This example is a pretty good indication that there is an incident worth investigating.
Next, verify the information in the alert. FlowTraq will show all outbound session establishment from this server, and the amount of data transferred in each session. This is shown in Figure 2. Note that our server communicated with two systems. The second entry looks like simple patching or upgrading activity. However, the first session clearly shows an outbound SSH session performing what looks like two large file transfers to a host in Amazon EC2.
An incident may well have occurred, since there is no good reason for the server to be transferring out large files, but we don’t know for sure that a compromise has taken place. More data is needed in order to see if any additional suspect activity has taken place. A further analysis is shown in Figure 4. This time, the plotted numbers show outbound connections generated by the server. Note that in less than five hours, the server has sent close to 18.5 million packets to just under a half million systems. Given that each of these sessions contained a small amount of data, we can be pretty certain the server is scanning the Internet for open ports on other systems.
If you look at the timeline in Figures 3 and 4, you’ll see that all of this suspicious activity began around 17:45, or 5:45 PM. This begs the question: did anything suspicious happen just before this timestamp? Luckily, FlowTraq can help you out there as well. In Figure 5, you can see FlowTraq configured to show open port responses from the server just prior to this timestamp. Why are we looking at outbound SYN/ACK packets instead of inbound SYN packets? An inbound SYN packet tells me someone was looking for an open port. An outbound SYN/ACK tells me that an open port was detected. This keeps me from having to weed out any unsuccessful connection attempts.
In Figure 5, we see that another internal system, 10.0.0.236, made three connection attempts to our compromised server just prior to that server exhibiting suspect activity. Clearly we should scrutinize this new system as well. This is shown in Figure 6. Note that this system initiated communications with just over 2,000 unique systems. While a lower volume, this activity looks pretty similar to the port scanning activity we identified earlier.
The timestamp in Figure 6 indicates that the port scanning activity started at about 5:15 PM. Just like we did with the first server, let’s analyze this second server to see if there were any inbound connections just prior to this suspect activity. This is shown in Figure 7. Note that we see just over 9,000 SSH connections from a new, unidentified external IP address (188.8.131.52). This pattern is indicative of a brute force attempt. It is extremely likely that this new IP address attempted to guess a valid login name and password combination. Based on the data, it is also extremely likely that after just over 9,000 tries, they got lucky and gained access to the server. This would explain the outbound port scanning activity. It would also explain why so few connections were made to the first suspicious system via SSH: once the attacker found a valid login name and password combination, the same credentials got them into that first server.
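Brute-force patterns like this one reduce to a simple per-source count over SSH connection records. The record shape and threshold below are illustrative; the attacker IP is a documentation-range placeholder.

```python
# Sketch of the brute-force pattern spotted above: count inbound SSH
# connection attempts per source IP and flag any source exceeding a
# threshold. Records, threshold, and IPs are illustrative assumptions.
from collections import Counter

def brute_force_sources(ssh_attempts, threshold=1000):
    """Return source IPs with more SSH attempts than the threshold."""
    counts = Counter(a["src_ip"] for a in ssh_attempts)
    return {ip for ip, n in counts.items() if n > threshold}

# 9,000 attempts from one host, a handful from a legitimate admin.
attempts = ([{"src_ip": "203.0.113.9"}] * 9000
            + [{"src_ip": "10.0.0.5"}] * 3)
print(brute_force_sources(attempts))  # → {'203.0.113.9'}
```

The same counting approach, run across all exposed servers, answers the containment question of whether any other host saw a similar credential-guessing burst.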
When performing network forensics, we need to follow proper incident handling procedures. This will include an “identification” stage, where we attempt to identify if a security event has taken place and if so, what is the scope of the compromise. With this in mind, it is worth sanity checking what we know so far.
We know that:
– 192.168.18.34 transferred about 400MB to an external system
– 192.168.18.34 scanned close to 500,000 hosts on the Internet
– Just prior to this suspect activity, the internal server 10.0.0.236, connected to 192.168.18.34 via SSH
– 10.0.0.236 scanned just over 2,000 hosts on the Internet
– Just prior to this suspect activity, 184.108.40.206 made over 9,000 connection attempts to the SSH port on 10.0.0.236
It is worth noting that we collected all of the above evidence through a single tool (FlowTraq) by simply analyzing network traffic. It is also worth noting that if our attacker had taken steps to cover their tracks, it would have no impact on our ability to collect network evidence. Further, we have enough information to identify that two of our servers need to shift to the containment phase of our incident handling process in order to minimize further damage.
Based on what we know, we have some unanswered questions.
The first two questions are important because they address whether we have more compromised systems in need of containment. Luckily, we can leverage FlowTraq to search the rest of our servers for additional SSH traffic and similar port scanning activity. The third question speaks to eradication and recovery. We need to identify which account was compromised. Hopefully this information was retained in our system logs. We then want to identify if that account was used to access any other internal systems. We also need to identify the best way to ensure the account can no longer be used to compromise our servers (perhaps switching from passwords to public/private keys with SSH).
By generating a baseline which identifies normal traffic patterns, we were able to set threshold triggers to warn us when suspect activity took place.
FlowTraq provides the anchor to identify the scope and initial point of intrusion in a multi-host compromise, allowing quick movement through the incident handling identification and containment phases. This saves the time and effort of analyzing each server on a host-by-host basis.
Gain efficiency and speed when filtering.
Increase security visibility through new workspaces.
Expand the API for custom integrations.
FlowTraq is a security tool, and many useful security-relevant work-spaces are available directly in the product.
To broaden security value, FlowTraq is proud to offer a feed of workspaces right on your dashboard.
We update the stream each time relevant workspaces are available for current events.
Many of you have been using FlowTraq as an integral part of your security operations, and we pride ourselves on the ability to embed FlowTraq data in other applications.
We have added powerful new calls to access FlowTraq metadata, and the ability to configure your clusters through our API.
Gain more insight into our updates and learn more by viewing the changelog.
All active customers should have received a link to upgrade in their email; please reach out if you have any questions.
Book a personalized demo today to see what’s new!
DDoS (distributed denial of service) attacks pose some of the biggest cybersecurity threats to organizations today. Due to their distributed nature, they are difficult to defend against, causing website and network disruption for organizations large and small. Here are 10 things you should know about DDoS attacks and how you can address them.
Cybercrimes are on the rise and DDoS attacks are among the most common. Roughly half of all companies today have been victims of DDoS attacks, bringing business to a grinding halt — particularly organizations such as online retailers and banks that have a heavy Web services component or depend on internal network services. In 2014, DDoS attacks reached an average rate of 28 per hour(1) and continue to grow in terms of scope, frequency and complexity, making them harder and harder to fend off. If your company hasn’t already been under attack, it could just be a matter of time.
DDoS perpetrators are not a specialized breed. They can be university-educated or homegrown. They can reside overseas or in your own backyard. Basically, hackers can come from anywhere. In terms of the cybercrime landscape, DDoS attacks are relatively simple to carry out and don’t require specialized training or even a computer science degree. And for anyone with malicious intent, attack toolkits can be purchased on the Web for an affordable price. Whether the motivation is political, social, geographical, financial, competitive or downright destructive, anyone, anywhere can coordinate an attack if he or she wants to. So you need to be prepared.
Though we tend to hear about the huge organizations that have been victims of DDoS attacks, smaller, lesser-known companies can be just as vulnerable. They may not have the enormous customer base, which makes large organizations a desirable target, but smaller companies tend to have less rigorous security. While major online industries such as financial services, online gaming, entertainment, news, and retail have typically been the most vulnerable, perpetrators will target any organization with a significant Web presence.
Like homes that are broken into multiple times, vulnerable organizations are not immune from multiple DDoS attacks. The bigger the potential damage, the more likely companies are to be susceptible to multiple attacks. In fact, more than 42% of the organizations monitored by one DDoS protection company were hit more than once, and 2.5% were attacked more than 10 times.(2)
While most companies have invested in some type of cybersecurity solution, they often fall short of deploying the correct visibility tools to help them understand what’s happening. Perimeter devices such as firewalls, routers, and intrusion prevention systems cannot prevent DDoS attacks. Rather, they can actually exacerbate outages by causing traffic bottlenecks. On average, DDoS attacks aren’t detected until 4.5 hours after they commence, and it takes another 4.9 hours before mitigation can begin.(3) That means most companies under attack have already suffered irreparable damage, even before they realize it. Because DDoS attacks can involve forging hundreds of thousands of IP sender addresses, the location of attacking machines cannot be easily identified. To ward off attacks, you need a solution that can react within seconds, not minutes.
DDoS attacks are not just a nuisance, they can cripple your bottom line. Attacks result in lost worker output, potential penalties for non-compliance, which can be costly, and revenue loss from customer defection. Sometimes attackers demand a ransom from site owners, which only adds to financial losses. According to IDG, company downtime costs average $100,000 an hour which means DDoS attacks can cost you at least $1 million, even before you begin to mitigate the attack.(4)
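Combining the detection and mitigation lag figures cited earlier with the IDG hourly cost, the back-of-the-envelope arithmetic behind that $1 million figure looks like this:

```python
# Back-of-the-envelope downtime cost before mitigation even starts,
# using the average detection/mitigation lag figures cited above.
cost_per_hour = 100_000          # IDG average downtime cost, USD
detection_lag_hours = 4.5        # average time until an attack is detected
mitigation_prep_hours = 4.9      # additional time before mitigation begins

hours_exposed = detection_lag_hours + mitigation_prep_hours
cost_before_mitigation = hours_exposed * cost_per_hour
print(f"${cost_before_mitigation:,.0f}")  # $940,000 -- roughly $1 million
```

That is the cost before mitigation begins; ransom demands, compliance penalties, and customer defection come on top of it.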
Most DDoS attacks do not attempt to breach a company’s network, but rather overwhelm it with traffic so it comes to a halt. Increasingly though, these attacks are being used as “smokescreens” to distract from the real intent — data breaches — which are far more damaging than the problems caused from a website going down. DDoS attacks are extremely disruptive and distracting for the security operations teams, but more importantly, they allow other behavior such as reconnaissance and compromise attempts to fly under the radar. By launching a significant DDoS attack, a hacker stands a much better chance of breaking into your systems or exfiltrating sensitive data undetected.
DDoS attacks don’t only hurt brands financially; they damage your reputation and, even more importantly, undermine customer trust. Customers realize that if you can’t keep their personal data safe from hackers, they’ll have to turn to someone who can. It takes less than a second to lose a customer, and bad press is viral. These days, a DDoS attack is more than just a public embarrassment — it can permanently damage your reputation and your customer relationships.
As organized attacks become more sophisticated and effective, and networks and server capabilities grow, companies need to become more savvy about how to protect themselves and their assets. Organizations are more vulnerable than they may realize with multiple entry points that are their Achilles heels — heating and cooling systems, printers, thermostats, videoconferencing, even vending machines. Companies need to stay one step ahead of these cybercriminals as they continue to get smarter and more strategic.
Organizations with significant Web presences cannot sustain DDoS attacks without repercussions to their brand and bottom line. You need to determine the risk of a potential attack and identify what you need to protect. Ideally you need a solution that will allow you to detect anomalies in network patterns in real time and be alerted to unusually high levels of incoming connections from one or more sources. And to be really secure, you need to provision your system for a one-terabit attack.
Besides defending your own organization from a DDoS attack, it’s also important that you behave like a “good Internet neighbor.” By deploying the proper visibility solutions that enable you to detect whether your systems are being used in a DDoS attack against another victim, you can take responsibility for helping to shut down a DDoS attack at its source.
These 10 tips provide a basic guideline for considering different security solutions for your organization. It’s important to understand the potential threats first so you can make the right decision about how best to protect your employees, your company secrets and your valued customer relationships.
(1) Preimesberger, Chris; “DDoS Attack Volume Escalates as New Methods Emerge.” eWeek. May 28, 2014.
(2) Kovacs, Eduard; “DDoS Attacks Shorter, Repeated Frequently in 1H 2014: Report.” Security Week. Sept. 24, 2014.
(3) “A DDoS Attack Could Cost $1 Million Before Mitigation Even Starts.” Infosecurity. Oct. 24, 2013.
We proudly present our next release of FlowTraq: 17.2 – this release breaks our age-old numbering system, called the “Q-releases”. In the past we would work hard each quarter to add great features to the FlowTraq product, and title the release with the quarter in which it was built (Q1/16, Q2/16, etc.) Now, we work even harder, and bring you a great update each quarter as well, except we number the release by the month of general availability: FlowTraq 17.2 is the February release of 2017. There’s a lot of great stuff in this one:
The power of FlowTraq delivered to your inbox, daily, weekly, or monthly. Comprehensive analytic reports delivered via email. The reports provide summaries of network volume, interface intel and alert highlights. Top-N detail on Traffic Groups and Interfaces is included, with direct links to FlowTraq for deeper visibility and examination:
RadWare’s DefensePro is the industry-leading DDoS traffic scrubbing solution. FlowTraq analyzes how each DDoS attack impacts your network, and what paths are affected. Then traffic is intelligently re-routed through the DefensePro system to clean the attack traffic. FlowTraq 17.2 can detect DDoS attacks and manage mitigation through the DefensePro in complex carrier, ISP, and datacenter environments, as well as the enterprise.
FlowTraq 17.2 includes automatic clock-skew correction, which updates the timestamps in flow records from exporters with improperly configured clocks, avoiding traffic that appears to occur in the future or far in the past.
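The general idea behind clock-skew correction can be sketched simply. This is a hypothetical illustration (not FlowTraq's internal logic): estimate each exporter's clock offset from the difference between its reported time and the collector's receive time, then shift the flow timestamps accordingly.

```python
# Hypothetical sketch of clock-skew correction for flow records.
# Assumes the exporter's offset is estimated as the difference between
# the timestamp it reports and the collector's own receive time.

def estimate_skew(reported_ts, received_ts):
    """Seconds the exporter's clock runs ahead (+) or behind (-) the collector."""
    return reported_ts - received_ts

def correct_timestamp(flow_ts, skew_seconds):
    """Shift a flow timestamp back into the collector's timeframe."""
    return flow_ts - skew_seconds

# An exporter whose clock runs 600 seconds fast makes its traffic appear
# "in the future"; subtracting the estimated skew repairs the record.
skew = estimate_skew(reported_ts=1_000_600, received_ts=1_000_000)
print(correct_timestamp(1_000_600, skew))  # 1000000
```

A production collector would smooth the skew estimate over many records per exporter, since individual packets also carry network transit delay.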
Save workspaces to HTML to share and store key parts of sensitive security investigations
Improved multi-Dashboard interface
Improved response time on Quickview, Policy, and Alert pages for a better end-user experience
Rejection of obviously-bad flow records brings better data and more accuracy. One less concern for the operator.
Fixed issue affecting total sources count in some DDoS alerts
Fixed memory leak observed by some users on the Dashboard
Fixed line duplication in alert panel
Fixed server issue when exporting syslog to an unresolvable address
Hackers are generating unprecedented power in every sense of the phrase. The power for censorship. The power for destruction. The power for terrorism. And each of these is a direct result of the power they harness from computers and connected devices.
As DDoS threats consume the minds of information security technologists, discussion most frequently centers on mitigating the common attacks – DNS reflection, NTP amplification, UPnP attacks. What isn’t immediately considered, however, is the much larger tectonic shift driving these attacks. We discuss three factors.
In recent months, we’ve witnessed a surge in attacks generating enough bandwidth to immobilize major institutions. Dyn, Krebs-on-security, and the Olympic games have been targets for disruption – with many of these services crumbling under the onslaught of the DDoS attacks.
Increased security awareness has made it harder to collect the big networks of zombies needed to perpetrate these attacks. Attackers are flexing their creative muscles, and veering away from computers to launch their attacks. Rather, hackers are now hitching a ride on the coattails of the IoT to generate the intensity needed to slow servers to a halt. Smart technology meets smarter hackers.
Society is increasing connectivity using arbitrary IoT gadgets, and in turn, making sizable attacks (of a Terabit per second or more) much more feasible – a yin and yang of innovation gone wrong if you will. The acquisition of data for cyber terrorism can now be found in anything (well, almost). CCTVs for example, played an integral role in DDoS threats as of late. The ability to stream large amounts of data for an extended period of time, virtually undetected, allowed the attack to circumvent security protocols and fester from within.
As firewall technology struggles to adapt, everyday devices are vulnerable: gaming systems, pocket devices, even your smart fridge can now help take down captains of industry and cause big headaches for cyber security professionals.
It is clear that the zombies needed for the botnets behind these large attacks are fairly accessible, yet they still require some serious hacker skill to compromise. But what happens when DDoS is up for grabs? Booters, found with a simple Google search, pose a major threat to big commerce and industry. Want to be a cyber-criminal? No problem. There’s an app for that! Well, maybe not an app per se, but there’s certainly no shortage of services online that will help you take down an arch nemesis or some unsuspecting website for money, no mad hacker skills required.
For a small fee, rentable DDoS services, with your choice of size and timeframe, can be yours for use in an attack. And just like that, you’re a cyber criminal. Coordinated zombie attacks are no longer exclusively for the tech savvy. With step by step instructions, right down to proverbial best practices, online Booters will have you slowing traffic and disrupting websites in no time. Sure, there are risks involved for the would-be-criminal, but it poses an even greater risk to companies at the receiving end of those wielding this power with little to no altruism.
The Internet is complex and very resilient. But where there is structure, there is weakness. How long before a well-orchestrated attack impacts not just one service, but all of us?
DDoS threats have become so destructive that targeting a specific service now typically has repercussions for adjacent or related services. Call them “innocent bystanders”. As powerful, lengthy attacks become more prevalent, technologists are noticing the collateral damage of systems housed next to the bullseye, or relying on the bullseye.
Renowned security technologist Bruce Schneier authored the blog post “Someone is Learning How to Take Down the Internet,” where he discusses profiling attacks and probing – “extensively testing the core defensive capabilities of the companies that provide critical Internet services.” In essence, he alludes to a suspected mapping by an educated cyber criminal — one who is obtaining a dangerously solid understanding of what they are up against to create something extremely powerful.
Another troubling fact is that an entire arsenal of defensive cyber security technology slowly becomes irrelevant as new attack innovation is introduced. Your best defense is adaptability. And to be able to effectively adapt, you must understand the threat you are up against.
Cyber security professionals are on the front lines of the DDoS threat mitigation battle. The ability to be proactive, as well as reactive, is a company’s best defense to withstand attacks and avoid service outages. Visibility is your best defense, and FlowTraq is a cyber security technologist’s primary weapon of choice.
FlowTraq is a DDoS Mitigation Management tool that automatically responds to DDoS attacks in seconds. Thanks to integration with dozens of scrubbing and mitigation vendors, FlowTraq is able to automatically pick the best mitigation approach for each attack, maximizing mitigation effectiveness, and minimizing your cost-to-mitigation.
Hackers like to collect vast armies of zombies, called “botnets,” which they use to launch DDoS attacks. Get 10 tips you can use today to prevent and eradicate zombies on your network and prevent DDoS attacks.
Information security professionals need to protect against many different kinds of risks. But the insider threat may be the most challenging. Because insider threats come from within the organization and are by definition trusted, they can be particularly difficult to detect and block.
An insider threat could be an employee, but it could also come from a partner, contractor, or other trusted third party that has access to non-publically available information or systems. Edward Snowden is the most high-profile example of a trusted insider, in this case a sub-contractor, exploiting his privileged access to steal millions of documents. He was an “intentional” insider threat, which is an insider who deliberately takes advantage of their access. Intentional insider threats could be disgruntled employees who want to hurt the organization, or they could be coerced, for example via blackmail, by outsiders looking to gain access.
But an insider threat could also come from an untrusted individual who gains access using false or stolen credentials so they appear to be a legitimate (i.e., trusted) user. These outsiders are usually able to get access thanks to “unintentional” insider threats – careless insiders who either are phished into unwittingly handing over their credentials or who launch malware on their trusted endpoint systems that enables outsiders to get in. This is what happened with the Office of Personnel Management (OPM) data breach; according to investigators, hackers likely stole credentials to get access to the network and then planted malware to create a backdoor to exfiltrate data.
In its 2015 Insider Threat Report, Vormetric found that a staggering 93% of U.S. respondents feel their organization is vulnerable to an insider attack. And when asked about who posed the biggest internal threat to corporate data, over half (55%) said privileged users. And Infosecurity Magazine reported that insiders were responsible for 43% of data breaches – and that the split between intentional and accidental leakage was 50/50. Clearly, this is a big problem that’s keeping infosec professionals up at night.
The implications of insider threats are significant. In its Managing Insider Threats report, PwC states that 39% of organizations say that insider crimes are more costly or damaging than incidents perpetrated by external adversaries. And the Vormetric survey shows that the average breach detection time isn’t measured in minutes, hours, or even days, but in months.
The first, and most obvious, reason that insider threats are so difficult to detect is that, by definition, the activity looks like it’s coming from (and may, in fact, be coming from) legitimate users. But that’s not the only reason. Many security precautions focus on the perimeter and don’t have visibility into what’s going on within the network. There are a number of user-focused security tools, but these tend to concentrate on endpoints despite the fact that corporate servers and databases are at the highest risk of insider attack.
Not only are insider threats more challenging to detect than other information security attacks, they also have a different mitigation profile. While all network security and data breaches have implications for the organization as a whole, mitigation of insider threats requires the active participation of a number of functions in addition to IT and information security, including corporate security, human resources, legal, audit, etc.
A report issued by the Intelligence and National Security Alliance (INSA) Cyber Insider Threat Subcouncil – an organization that engages with the intelligence community, the DOD, CIOs, and representatives from private sector companies to examine best practices around insider threats – advises that, “a robust insider threat program integrates and analyzes technical and nontechnical indicators to provide a holistic view of an organization’s insider threat risk from individuals identified as potential threats.”
Following are some of the technical solutions organizations have available to them to detect and/or prevent insider threats.
Access control mechanisms – Access control is a broad term that encompasses a wide range of technologies and approaches to ensure that data and systems are only accessed by authorized users. This includes authorization (the process of determining what users can access, what they can do with it, and when access is allowed) and authentication (the process of making users prove they are who they claim to be). Access controls can restrict access to digital systems (for example, with passwords) or physical locations (such as with keycards). Access controls are an important security layer in every organization, but they don’t protect against unauthorized users with stolen credentials.
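The authentication/authorization split described above can be illustrated with a minimal sketch. All names here are hypothetical, and a real system would use a proper password-hashing KDF (bcrypt, scrypt, or Argon2) rather than bare SHA-256, which appears here only to keep the example self-contained:

```python
# Minimal sketch separating authentication ("who are you?") from
# authorization ("what may you do?"). Illustrative only -- use a real
# password-hashing KDF in production, not bare SHA-256.
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}   # user -> password hash
PERMISSIONS = {"alice": {"read"}}                           # user -> allowed actions

def authenticate(user, password):
    """Verify identity; compare hashes in constant time."""
    stored = USERS.get(user)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and hmac.compare_digest(stored, supplied)

def authorize(user, action):
    """Check whether an authenticated user may perform an action."""
    return action in PERMISSIONS.get(user, set())

print(authenticate("alice", "s3cret"))  # True
print(authorize("alice", "read"))       # True
print(authorize("alice", "delete"))     # False -- authenticated != authorized
```

Note the article's caveat in action: if an attacker steals alice's password, `authenticate` happily returns True, which is exactly why access controls alone don't stop credentialed insiders.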
Encryption – Encryption doesn’t prevent data from being stolen, but it can prevent attackers from using that data, which decreases its value to would-be perpetrators. There are different types of data encryption and it’s important to understand the difference. You can encrypt data-at-rest, but be sure that the encryption keys are not stored on the same server as the data, and you can also encrypt data-in-motion.
Data loss prevention (DLP) systems – DLP systems are designed to detect potential data exfiltration, but they are limited in their effectiveness. While they are good at identifying sensitive structured data – such as social security or credit card numbers – leaving the organization, they can’t recognize many other types of data exfiltration, such as confidential intellectual property or other insider information that could have serious business, regulatory, ethical or legal repercussions.
User training – Because humans are the weakest link in the insider threat attack vector, training is an important tool, especially to help prevent unintentional insider threats. By educating users to recognize phishing attempts, malware, and other techniques hackers use to make insiders their unwitting accomplices, you can lessen the risk. Of course, people being people, you can never completely eliminate the threat of careless or unaware insider threats.
Monitoring of data-in-motion – Data can’t be used by attackers unless it leaves the network. If all other protections fail, monitoring data-in-motion can help identify a data leak in progress so you can stop it quickly, before significant damage is done. There are many approaches for monitoring data-in-motion, but the most comprehensive – and effective – is network flow monitoring. Network flow data – NetFlow, Jflow, sFlow, Cflow and IPFIX – contains valuable information about traffic traversing the network, such as IP addresses, port and protocol, exporting device, timestamps, VLAN and TCP flags, etc. Solutions that analyze this data can learn and understand the changing patterns of behavior inside your network so when any system, mobile device, or server starts behaving outside the normally expected patterns – such as hosts receiving data outside of normal thresholds or sending files at unusual times – you can quickly shut them down.
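A simplified version of this kind of data-in-motion check can be sketched as follows. The flow records and threshold are illustrative inventions, not fields from any specific NetFlow/IPFIX library: sum outbound bytes per internal host and flag hosts exceeding a baseline-derived threshold.

```python
# Hypothetical sketch of flagging hosts sending unusual outbound volumes,
# the kind of data-in-motion check a flow monitor performs. Field names
# and addresses are illustrative only.
from collections import defaultdict

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9",  "bytes": 2_000},
    {"src": "10.0.0.7", "dst": "198.51.100.4", "bytes": 50_000_000},
    {"src": "10.0.0.5", "dst": "203.0.113.10", "bytes": 3_000},
]

OUTBOUND_BYTES_THRESHOLD = 10_000_000  # tuned from your traffic baseline

# Aggregate outbound bytes per internal source host.
bytes_out = defaultdict(int)
for flow in flows:
    bytes_out[flow["src"]] += flow["bytes"]

suspects = [host for host, total in bytes_out.items()
            if total > OUTBOUND_BYTES_THRESHOLD]
print(suspects)  # ['10.0.0.7'] -- the host pushing 50 MB out
```

A real monitor would also weigh the destination, port, protocol, and time of day from the flow records, since volume alone misses low-and-slow exfiltration.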
Unfortunately, there’s no way to completely prevent network security breaches, including from insider threats. So in addition to employing a range of prevention and detection tools and approaches such as those discussed above, you also need to ensure that you have comprehensive forensic capabilities so that if you do suffer an attack, you can quickly evaluate the extent of the compromise. You need to be able to answer key questions: What data left the network? When did it leave? Where did it go?
Full-fidelity network flow analysis tools – in addition to helping you detect a data breach in progress – can also be invaluable in your post-breach investigations. They can provide a complete forensic history of the data-in-motion, showing you when data traveled and where it traveled to and from. Because flow data is compact but provides a great amount of detail, it’s a powerful tool for quickly answering the key questions about what happened during an insider attack.
The hard truth is that organizations can’t afford to trust their “trusted” insiders. The insider threat risk is too big and the stakes are simply too high. Every organization has sensitive and confidential data it needs to protect, which means that every organization needs to have a security infrastructure in place to protect against insider threats, detect when a breach has occurred, and investigate to determine what happened, when, and for how long.
FlowTraq monitors network flow traffic in real time and immediately detects unusual behavior and deviations from “normal” patterns that indicate insider threats and other unwanted behavior on the network so you can act fast. FlowTraq detects and alerts on insider threats, so you can begin mitigation immediately. In addition to helping you defend against insider threats and data exfiltration, FlowTraq can detect a wide range of other undesirable behavior in the network, including DDoS and brute-force attacks, scanning, and more.