Archives

Category Archive for ‘Traffic Accounting’

3 Key Differences Between NetFlow and Deep Packet Inspection (DPI) Packet Capture Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled the ongoing debate around which network analysis and monitoring tools best serve the needs of the modern engineer – placing Packet Capture and NetFlow Analysis for Network Detection and Response (NDR) at center stage of the conversation.

Granted, when analyzing unencrypted traffic, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments. But as an engineer, I tend to focus on solutions that give me the insights I need without too heavy a cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of the highly dense networks we find today, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, also known as Deep Packet Inspection (DPI), was once rich in network metrics but has faltered as encryption has spread, and its segment-based approach makes it expensive to deploy and maintain. It requires sniffing devices and agents throughout the network, which invariably demand a huge amount of maintenance during their lifespan.

In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with DPI can quickly become unfeasible. NetFlow, in contrast, enjoys wide vendor support across virtually the entire networking landscape, making almost every switch, router or firewall, VMware and VeloCloud deployment, and GCP, Azure or AWS cloud a NetFlow / IPFIX / sFlow / IxFlow “ready” device. Devices’ built-in readiness to capture and export data-rich metrics makes them easy for engineers to deploy and utilize. Also, thanks to this popularity, CySight’s NetFlow analyzer provides varying feature sets with enriched vendor-specific flow fields, allowing security operations center (SOC) and network operations center (NOC) teams to take full advantage of data-rich packet flows.

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, the ability of NetFlow, IPFIX, sFlow and IxFlow to provide WAN-wide metrics in near real-time makes flow data a suitable troubleshooting companion for engineers. Add to this enriched context, and you enable a very complete qualification of impact from a standard traffic analysis perspective as well as endpoint threat views, machine learning and AI diagnostics.

The latest flow methods extend this wealth of information by collecting via a template-based scheme, striking the balance between detail and high-level insight without placing too much demand on networking hardware – something that can’t be said for Deep Packet Inspection. NetFlow’s constant evolution alongside the networking landscape has seen it used as a complement to solutions such as Cisco’s NBAR, and packet broker vendors such as Keysight (Ixia), Gigamon, nProbe, NetQuest, Niagara Networks and CGS Tower Networks have recognized that all they need do is export flexible, enriched flow fields to reveal details at the packet level.

NetFlow places your environment in greater context

Context is a chief area where granular NetFlow beats out Packet Capture, since it allows engineers to quickly locate root causes relating to cyber security, threat hunting and performance by providing a more situational view of the environment, its data flows, bottleneck-prone segments, application behavior, device sessions and so on.

One could argue that Deep Packet Inspection (DPI) is able to provide much of this information too, but with networks today more than 98% encrypted, even using certificates won’t give engineers the broader context around the information it presents. This hamstrings IT teams in detecting anomalies that could be ascribed to any number of factors, such as cyber threats, untimely system-wide application or operating system updates, or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Deep Packet Inspection obsolete?

Neither Deep Packet Inspection (DPI) nor legacy NetFlow analyzers can scale in retention, so when comparing those two genres of solutions, the only win a low-end NetFlow analyzer has over a DPI solution is that DPI is segment-based, whereas a flow solution is inherently agentless.

Using NetFlow to identify an attack profile or illicit traffic can only be attained when flow retention is deep (granular). Done well, however, NetFlow strikes that balance between detail and context, giving SOCs and NOCs intelligent insights that reveal the broader factors influencing your network’s ability to perform.

Gartner’s assertion that a balance of 80% NetFlow monitoring coupled with 20% Packet Capture is the perfect combination for performance monitoring no longer holds given the rise of encryption, but it does correctly attest to NetFlow’s growing prominence as the monitoring tool of choice. As NetFlow and its various iterations – sFlow, IPFIX, IxFlow, flow logs and other flow protocols – continue to expand the breadth of context they provide network engineers, that margin is set to move further in NetFlow’s favor over time.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows-per-second a flawed way to measure a NetFlow collector’s capability?

Flows-per-second is often considered the primary yardstick to measure the capability of a NetFlow analyzer’s flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means to provide network professionals the data to make sense of the traffic on their network without having to resort to expensive per-segment packet-sniffing tools.

A flow record contains the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap or other network gateway. A typical flow record will contain at minimum: Source IP, Destination IP, Source Port, Destination Port, Protocol, ToS, Ingress Interface and Egress Interface. Flow records are exported to a flow collector, where they are ingested and information oriented to the engineer’s purposes is displayed.
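As a concrete illustration, here is a minimal sketch of such a record as a data structure (in Python; the field names are illustrative, not any specific export format):

from dataclasses import dataclass

@dataclass
class FlowRecord:
    # The minimal tuple described above; real exports add counters and timers.
    src_ip: str       # Source IP
    dst_ip: str       # Destination IP
    src_port: int     # Source port
    dst_port: int     # Destination port
    protocol: int     # IP protocol number (6 = TCP, 17 = UDP)
    tos: int          # Type of Service byte
    ingress_if: int   # Ingress interface index
    egress_if: int    # Egress interface index
    bytes: int = 0    # Octets counted for the flow
    packets: int = 0  # Packets counted for the flow

# Example: a single DNS lookup observed through a router
flow = FlowRecord("192.0.2.10", "198.51.100.53", 53124, 53, 17, 0, 2, 3, 146, 2)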

Measurement

Measurement has always been how the IT industry expresses power and competency. However, the formula used to reflect power and ability changes when a technology design undergoes a paradigm shift.

For example, when expressing how fast a computer is, we used to measure CPU clock speed, believing that the higher the clock speed, the more powerful the computer. When multi-core chips were introduced, clock speeds dropped, yet CPUs in fact became more powerful. The primary clock-speed measurement became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading: it poorly reflects the actual power and capability of a flow collector to capture and process flow data, and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure, and a product’s scalability is difficult to quantify. Various factors can dramatically impact the ability to collect flows and to retain sufficient flows to perform higher-end diagnostics.

It’s important to look not just at flows-per-second but at the granularity retained per minute (the flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; scalability when impacted by high flow variance, sudden bursts, and the number of devices and interfaces; the speed of reporting over time; the ability to retain short-term and historical collections; and the confluence of these factors as it pertains to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine as appropriate granularity is needed to deliver the visibility required to perform Anomaly Detection, Network Forensics, Root Cause Analysis, Billing substantiation, Peering Analysis and Data retention compliance.

The higher the flows-per-second and the flow-variance the more challenging it becomes to achieve a high flow-retention-rate to archive and retain flow records in a data warehouse.

A vendor’s capability statement might reflect a high flows-per-second consumption ability, but many flow software tools have retention rate limitations by design.

It can mean that, irrespective of achieving a high flow collection rate, the NetFlow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the Top 10 bandwidth abusers. NetFlow products of this kind can be easily identified because they tend to offer benefits oriented primarily to identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course a very important benefit of a NetFlow analyzer. However, it is of marginal benefit today, when a large amount of the abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar of many NetFlow analysis products. Many abuses like DDoS, P2P, botnets and hacker or insider data exfiltration continue to occur and can, at minimum, impact networking equipment and user experience. The inability to quantify and understand small flows creates great risk, leaving organizations exposed.

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product’s ability to collect and retain the critical information required in today’s world, where copious data has created severe network blind spots.

To qualify whether a tool is really suitable for the purpose, you need to know more about the flows-per-second collection formula being quoted by the vendor, and some deeper investigation should be carried out to qualify the claims.


With this in mind here are 3 key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

  • Qualify whether the flows-per-second rate provided is a burst rate or a sustained rate.
  • Ask how the collection and retention rates might be affected if the flows have high flow variance (e.g. a DDoS attack).
  • Ask how collection, archiving and reporting are impacted when flow variance is increased by adding many devices, interfaces and distinct IPv4/IPv6 conversations, and test what degradation in speed you can expect after it has been recording for some time.
  • Ask how the collection and retention rates might change if you add additional fields or measurements to the flow template (e.g. MPLS, MAC address, URL, latency).

  2. How many flow records can be retained per minute?

  • Ask how the actual number of records inserted into the data warehouse per minute can be verified for short-term and historical collection (a minimal verification sketch follows this list).
  • Ask what happens to the flows that were not retained.
  • Ask what the flow retention logic is (e.g. Top Bytes, First N).

  3. What information granularity is retained, both short-term and historically?

  • Does the data’s time granularity degrade as the data ages (e.g. 1 day of data retained per minute, 2 days retained per hour, 5 days retained per quarter)?
  • Can you control the granularity, and if so, for how long?
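With regard to question 2, here is a minimal sketch of how such a verification could be run, assuming received flows and retained records are available as dicts carrying a "minute" timestamp key (the field name and data shapes are illustrative, not any vendor’s API):

from collections import Counter

def retention_report(received_flows, retained_records):
    # Compare flows received per minute against records actually written,
    # exposing silent truncation (e.g. a collector that only keeps Top-N).
    received = Counter(f["minute"] for f in received_flows)    # what exporters sent
    retained = Counter(r["minute"] for r in retained_records)  # what the warehouse kept
    for minute in sorted(received):
        kept = retained.get(minute, 0)
        pct = 100.0 * kept / received[minute]
        print(f"{minute}: received={received[minute]} retained={kept} ({pct:.1f}%)")

A collector that advertises a high flows-per-second rate but retains only a few hundred records per minute will show up immediately in a report like this.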


Remember – Rate of collection does not translate to information retention.

Do you know what’s really stored in the software’s database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that information retention granularity that provides a flow product’s benefits.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

NetFlow for Usage-Based Billing and Peering Analysis

Usage-based billing refers to methods of calculating and passing back the costs of running a network to the consumers of the data that flows through it. Both Internet Service Providers (ISPs) and corporations have a need for usage-based billing, with different billing models.

NetFlow is the ideal technology for usage-based billing because it allows for the capture of all transactional information pertaining to usage, and some smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.

Advances in telecommunication technology have enabled ISPs to offer more convenient, streamlined billing options to customers based on bandwidth usage.

One billing model, used most commonly by ISPs in the USA, is known as the 95th percentile. Traffic is measured at a five-minute granularity, typically over the course of a month, and the ISP disregards the highest 5% of samples in order to establish the bill amount. This is an advantage to data consumers who have bursts of traffic, because they’re not financially penalized for exceeding a traffic threshold for brief periods of time.
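As a worked sketch of the calculation (assuming a list of five-minute Mbps samples and a flat per-Mbps price; real billing adds contract specifics):

import math

def percentile_95_bill(samples_mbps, rate_per_mbps):
    # Sort the 5-minute samples, disregard the highest 5%, and bill
    # for the highest remaining sample.
    ordered = sorted(samples_mbps)
    idx = math.ceil(0.95 * len(ordered)) - 1  # index of the 95th percentile sample
    billable_mbps = ordered[idx]
    return billable_mbps, billable_mbps * rate_per_mbps

# A 30-day month yields 30 * 24 * 12 = 8640 samples, so the 432 highest
# samples (5%) are free – short bursts do not inflate the bill.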

The disadvantage of the 95th percentile model is that it’s not a sustainable business model as data continues to become a utility like electricity.

A second approach is a utility-style metered billing model that involves retaining a tally of all bytes consumed by a customer, with some knowledge of the data path to allow for premium or free traffic plans.

Metered Internet usage is used in countries like Australia and, most recently, Canada, which have nationally moved away from the 95th percentile model. This approach is also very popular in corporations whose business units share common network infrastructure and are unwilling to accept a “per user” cost, preferring a real consumption-based cost.

Benefits of usage-based billing are:

  • Improved transparency about the cost of services;
  • Costs feedback to the originator;
  • Raised cost sensitivity;
  • Good basis for active cost management;
  • The basis for Internal and external benchmarking;
  • Clear substantiation to increase bandwidth costs;
  • Shared infrastructure costs can also be based on consumption;
  • Network performance improvements.

For corporations, usage-based billing enables the IT department to become a shared service and to be viewed as a profit center rather than a cost center. It can come to be seen as a benefit and a catalyst for business growth rather than a necessary but expensive line item in the budget.

For ISPs in the USA, there is no doubt that a utility-based cost-per-byte model will remain contentious as video and TV over Internet usage increases. In other regions, new business models that package video into “free zone” services have become popular, shifting the cost of premium content provision onto the content provider and making utility billing viable in the USA.

NetFlow tools can include methods for building billing reports and offer a variety of usage-based billing model calculations.

Some NetFlow tools even include an API that allows the chart of accounts to be retained and driven from traditional accounting systems, using the NetFlow system to focus on the tallying. Grouping algorithms should be flexible within the solution to allow grouping by different variables such as interfaces, applications, Quality of Service (QoS), MAC addresses, MPLS, and IP groups. For ISPs and large corporations, Autonomous System Numbers (ASNs) also allow for analysis of data paths, enabling sensible negotiations with peering partners and content partners.
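A sketch of the grouping idea (the flow keys such as "bytes" and "dst_asn" are assumptions for illustration, not any particular product’s schema):

from collections import defaultdict

def usage_by_group(flows, group_key):
    # Tally bytes per billing group; group_key maps a flow to a label
    # such as a customer, an application, a QoS class or a peer ASN.
    totals = defaultdict(int)
    for f in flows:
        totals[group_key(f)] += f["bytes"]
    return dict(totals)

# e.g. bytes per destination ASN, to inform peering negotiations:
# peering = usage_by_group(flows, lambda f: f["dst_asn"])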

Look out for more discussion on peering in an upcoming blog…

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

How to Improve Cyber Security with Advanced Netflow Network Forensics

Most organizations today deploy network security tools that are built to perform limited prevention – traditionally “blocking and tackling” at the edge of a network using a firewall or by installing security software on every system.

This is only one third of a security solution, and has become the least effective measure.

The growing complexity of IT infrastructure is the major challenge faced by existing network security tools. The major forces impacting them are the rising sophistication of cybercrime, growing compliance and regulatory mandates, expanding virtualization of servers and the constant need for visibility, compounded by ever-increasing data volumes. Larger networks involve enormous amounts of data, into which incident teams must have a high degree of visibility for analysis and reporting purposes.

An organization’s network and security teams are faced with increasing complexities, including network convergence, increased data and flow volumes, intensifying security threats, government compliance issues, rising costs and network performance demands.

With network visibility and traceability also top priorities, companies must look to security network forensics to gain insight and uncover issues. The speed with which an organization can identify, diagnose, analyze, and respond to an incident will limit the damage and lower the cost of recovery.

Analysts are better positioned to mitigate risk to the network and its data through security-focused network forensics applied at a granular level. Only with sufficient granularity, historic visibility and tools that are able to machine-learn from network Big Data can the risk of an anomaly be properly diagnosed and mitigated.

Doing so helps staff identify breaches that occur in real time, as well as insider threats and data leaks that take place over a prolonged period. Insider threats are among the most difficult to detect and are missed by most security tools.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices like firewalls and intrusion detection systems. However, they quickly discover the limitations: these devices are not designed to record and report on every transaction, and their lack of deep visibility, scalability and historic data retention makes old-fashioned network forensic reporting expensive and impractical.

NetFlow analytics software enables IT departments to accurately audit network data and host-level activity. It enhances network security and performance, making it easy to identify suspicious user behaviors and protect your entire infrastructure.

A well-designed NetFlow forensic tool should include powerful features that can allow for:

  • Micro-level data recording to assist in identification of real-time breaches and data leaks;
  • Event notifications and alerts for network administrators when irregular traffic movements are detected;
  • Tools that highlight trends and baselines, so IT staff can provision services accordingly;
  • Tools that learn normal behavior, so Network Security staff can quickly detect and mitigate threats;
  • Capture highly granular traffic over time to enable deep visibility across the entire network infrastructure;
  • 24/7 automation and flexible reporting processes to deliver usable business intelligence and security forensics, particularly for those analytics that can take a long time to produce.

Forensic analysts require both high-level and detailed visibility through aggregation, division and drilldown algorithms such as those below (a minimal example of the first follows the list):

  • Deviation / Outlier analysis
  • Bi-directional analysis
  • Cross section analysis
  • Top X/Y analysis
  • Dissemination analysis
  • Custom Group analysis
  • Baselining analysis
  • Percentile analysis
  • QoS analysis
  • Packet Size analysis
  • Count analysis
  • Latency and RTT analysis
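As a minimal illustration of deviation/outlier analysis, here is a sketch over per-host byte totals (a real forensic tool would use far richer, time-aware baselines):

import statistics

def outliers(host_bytes, z_threshold=3.0):
    # Flag hosts whose traffic volume deviates sharply from the population.
    # host_bytes: {host: total bytes over the analysis window}
    values = list(host_bytes.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # guard against zero deviation
    return {host: round((b - mean) / stdev, 1)
            for host, b in host_bytes.items()
            if abs(b - mean) / stdev >= z_threshold}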

Further, when integrated with a visual analytics process, these analyses enable additional insights for the forensic professional when examining subsets of the flow data surrounding an event.

In some ways it needs to act as a log analyzer, security information and event management (SIEM) and a network behavior anomaly and threat detector all rolled into one.

The ultimate goal is to deploy a multi-faceted flow-analytics solution that can complement your business by providing extreme visibility and eliminating network blind spots, both in your physical infrastructure and in the cloud, automatically detecting and diagnosing anomalous traffic across your entire network and improving your mean time to detect and repair.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or even aware of.  And why should they be?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered.  So what are the benefits?

1.  Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if it gets overloaded.  Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they update as users save them.  This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2.  Network speed – This is one of the most important and most easily quantified aspects of managing NetFlow.  It affects every user on the network constantly, and anything that slows down users means either more work hours or delays.  Obviously, neither of these is a good problem to have.  Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3.  Scalability – Almost every business wants to grow, and nowhere is that more true than the tech sector.  As the business grows, the network will have to grow with it to support more employees and clients.  By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed.  As performance degrades, it’s very easy to set thresholds that show when the network needs to be upgraded or enlarged.

4.  Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect.  An unsecured network is worse than a useless network, and data breaches can ruin a company.  So how does this play into performance management?

By monitoring NetFlow performance, it’s easy to see where the most resources are being used.  Many security attacks drain resources, so if there are resource spikes in unusual areas it can point to a security flaw.  With proper software, these issues can be not only monitored, but also recorded and corrected.

5.  Usability – Unfortunately, not all employees have a working knowledge of how networks operate.  In fact, as many in IT support will attest, most employees aren’t tech savvy.  However, most employees will need to use the network as part of their daily work.  This conflict is why usability is so important.  The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly.  Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be?  Are you able to monitor the network’s performance and flow, or do network forensics to determine where issues are?  Don’t try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

5 Benefits of NetFlow Performance Monitoring

NetFlow can provide the visibility that reduces the risks to business continuity associated with poor performance, downtime and security threats. As organizations continue to rely more and more on computing power to run their business, transparency of business applications across the network and interactions with external systems and users is critical and NetFlow monitoring simplifies detection, diagnosis, and resolution of network issues.

Traditional SNMP tools are too limited in their collection capability; the data extracted is very coarse and does not provide sufficient visibility. Packet capture tools can extract high granularity from network traffic, but they are dedicated to a port mirror or a per-segment capture and are built for specialized short-term captures rather than long historical analyses. Likewise, packet-based analytics tools traditionally do not provide the ability to perform forensic analysis of various sub-sections or aggregated views, and therefore also suffer from a lack of visibility and context.

By analyzing NetFlow data, a network engineer can discover the root cause of congestion and find out which users, applications and protocols are causing interfaces to exceed utilization. They can determine the Type or Class of Service (QoS) and group traffic with application mapping such as Network Based Application Recognition (NBAR) to identify the type of traffic – such as when peer-to-peer traffic tries to hide itself by using web services.

NetFlow allows granular and accurate traffic measurements as well as high-level aggregated traffic collection that can assist in identifying excessive bandwidth utilization or unexpected application traffic. NetFlow enables networks to perform IP traffic flow analyses without deploying external probes, making traffic analytics cheap to deploy even on large network environments.

For business managers, a good performance management solution might be a cost that hasn’t been considered, so it helps to understand what it buys. What, then, are the benefits of NetFlow performance monitoring?

Avoiding Downtime

Service outages can cause simple frustrations, such as issues saving open files to a suddenly inaccessible server, or delays to business processes that impact revenue. Downtime that affects customers can result in negative customer experiences and lost revenue.

Network Speed

Slow network links cause severe user frustration that impacts productivity and causes delays.

Scalability

As the business grows, the network needs to grow with it to support more computing processes, employees and clients. As performance degrades, it is easy to set thresholds that show when, where and why the network is exceeding its capability. With historical performance data for the network, it is very easy to see when or where it is being stretched too thin or overwhelmed, and to substantiate upgrades.

Security

Security analytics is arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. By monitoring NetFlow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw. With advanced NetFlow diagnostics software, these issues can be not only monitored, but also recorded and corrected.

Peering

NetFlow analytics allows organizations to embrace progressive IT trends: analyzing peering paths or virtualized paths such as Multiprotocol Label Switching (MPLS) and Virtual Local Area Networks (VLANs), and making use of IPv6, next hops, BGP and MAC addresses, or linking internal IPs to Network Address Translated (NAT) IP addresses.

MPLS creates new challenges for network performance monitoring and security, as communication can occur directly without passing through an Intrusion Detection System (IDS). Organizations have two options to address the issue: implement a sensor or probe at each site, which is costly and fraught with management hassles; or turn on NetFlow at each MPLS site and send MPLS flows to a NetFlow collector, which is a very cost-effective option because it makes use of existing network infrastructure.

With NetFlow, network performance issues are easily resolved because engineers are given in-depth visibility that enables troubleshooting – all the way down the network to a specific user. The solution starts by presenting the interfaces being used.

An IT engineer can click on the interfaces to determine the root cause of a usage spike, as well as the actual conversation and host pairs that are dominating bandwidth and the associated user information. Then, all it takes is a quick phone call to the end-user, instructing them to shut down resource-hogging activities, as they’re impairing network performance.

IT departments are tasked with responding to end-user complaints, which often relate to network sluggishness. However, many times an end-user’s experience has more to do with an application or server failing to respond within the expected time period, which can point to a latency, Quality of Service (QoS) or server issue. Traffic latency can be analyzed if supported in the NetFlow version. Statistics such as Round Trip Time (RTT) and Server Response Time (SRT) can be examined to identify whether a problem is due to the network or to an application or service experiencing slow response times.

Baselines of RTT and SRT can be learned to determine whether response time is consistent or spiking over a time period. By observing latency over time, alarms can be generated when latency increases above the normal threshold.
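A minimal sketch of that baselining idea, using an exponentially weighted moving average with a variance-based margin (one reasonable approach among several, not a description of any specific product):

class LatencyBaseline:
    # Learn a running RTT baseline and alarm when latency exceeds it.
    def __init__(self, alpha=0.1, margin=2.0):
        self.alpha, self.margin = alpha, margin  # smoothing factor, alarm width
        self.mean, self.var = None, 0.0

    def observe(self, rtt_ms):
        if self.mean is None:  # first sample seeds the baseline
            self.mean = rtt_ms
            return False
        deviation = rtt_ms - self.mean
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        # Alarm when the sample sits more than `margin` deviations above baseline
        return rtt_ms > self.mean + self.margin * (self.var ** 0.5)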

TCP Congestion flags can also serve as an early warning system where Latency is unavailable.

NetFlow application performance monitoring provides the context around an event or anomaly within a host to quickly identify the root cause of issues and improve Mean Time To Know (MTTK) and Mean Time To Repair (MTTR).

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using that packet as the standard for the rest of the flow. It has two variants which are designed to allow for more flexibility when it comes to implementing NetFlow on a network.

NetFlow was originally developed by Cisco in the mid-1990s as a packet switching technology for Cisco routers and implemented in IOS 11.x.

The concept was that instead of having to inspect each packet in a “flow”, the device need only inspect the first packet and create a “NetFlow switching record” (alternatively named a “route cache record”).

After that record was created, further packets in the same flow would not need to be inspected; they could just be forwarded based on the determination made for the first packet. While this idea was forward-thinking, it had many drawbacks which made it unsuitable for larger internet backbone routers.
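Conceptually, the flow cache worked something like this sketch (route_lookup is a hypothetical stand-in for the device’s full forwarding decision):

flow_cache = {}

def route_lookup(pkt):
    # Hypothetical stand-in for the full route/policy lookup on packet one
    return "forward"

def handle_packet(pkt):
    # Only the first packet of a flow is fully inspected; later packets
    # in the same 5-tuple hit the cached decision.
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])
    entry = flow_cache.get(key)
    if entry is None:
        entry = flow_cache[key] = {"decision": route_lookup(pkt), "packets": 0, "bytes": 0}
    entry["packets"] += 1            # these per-flow counters are what is
    entry["bytes"] += pkt["length"]  # later exported as flow records
    return entry["decision"]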

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about what IP addresses or application ports were “inside” the traffic was to deploy packet sniffing systems which would sit inline (or connected to SPAN/Mirror) ports and “sniff” the traffic.  This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
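To illustrate the export/collection pipeline, here is a minimal sketch of a NetFlow v5 collector. Version 5 uses a fixed record layout (a 24-byte header followed by 48-byte records), which keeps the parser short; v9 and IPFIX are template-based and need a fuller implementation:

import socket, struct

HEADER = struct.Struct("!HHIIIIBBH")                # 24-byte v5 header
RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # 48-byte v5 flow record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # a commonly used NetFlow export port

while True:
    data, exporter = sock.recvfrom(65535)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue  # v9/IPFIX would require template handling
    for i in range(count):
        r = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src, dst = socket.inet_ntoa(r[0]), socket.inet_ntoa(r[1])
        # r[9]/r[10] = ports, r[13] = protocol, r[6] = byte count
        print(f"{src}:{r[9]} -> {dst}:{r[10]} proto={r[13]} bytes={r[6]}")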

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Seven Reasons To Analyze Network Traffic With NetFlow

NetFlow allows you to keep an eye on traffic and transactions that occur on your network. It can detect unusual traffic, a request to a malicious destination or the download of a large file. NetFlow analysis helps you see what users are doing, gives you an idea of how your bandwidth is used, and can help you improve your network, besides protecting you from a number of attacks.

There are many reasons to analyze network traffic with NetFlow, including making your system more efficient as well as keeping it safe. Here are some of the reasons behind many organizations’ adoption of NetFlow analysis:

  • Analyze all your network traffic. NetFlow allows you to keep track of all the connections occurring on your network, including the ones hidden by a rootkit. You can review all the ports and external hosts an IP address connected to within a specific period of time. You can also collect data to get an overview of how your network is used.


  • Track bandwidth use. You can use NetFlow to track bandwidth use and see reports on average usage. This can help you determine when spikes are likely to occur so that you can plan accordingly. Tracking bandwidth allows you to better understand traffic patterns, and this information can be used to identify any unusual patterns. You can also easily identify unusual surges caused by a user downloading a large file or by a DDoS attack.


  • Keep your network safe from DDoS attacks. These attacks target your network by overloading your servers with more traffic than they can handle. NetFlow can detect this type of unusual surge in traffic, as well as identify the botnet controlling the attack and the infected computers following the botnet’s orders and sending traffic to your network. You can block the botnet and the network of infected computers to stop the attack in progress and prevent future attacks (a minimal detection sketch follows this list).


  • Protect your network from malware. Even the safest network can still be exposed to malware via users connecting from home or bringing their mobile devices to work. A bot present on a home computer or a smartphone could access your network, but NetFlow will detect this type of abnormal traffic and, with auto-mitigation tools, automatically block it.
  • Optimize your cloud. By tracking bandwidth use, NetFlow can show you which applications slow down your cloud and give you an overview of how your cloud is used. You can also track performance to optimize your cloud and make sure your cloud service provider is offering the solution they advertised.
  • Monitor users. Everyone brings their own smartphone to work nowadays and might use it for purposes other than work. Company data may be accessible by insiders who have legitimate access but an inappropriate agenda, downloading and sharing sensitive data with outside sources. You can keep track of how much bandwidth is used for data leakage or personal activities, such as using Facebook during work hours.
  • Data Retention Compliance. NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.
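As a minimal illustration of the DDoS point above, one common heuristic is to count distinct sources per destination within a time window (field names and the threshold are illustrative):

from collections import defaultdict

def ddos_suspects(flows, min_sources=1000):
    # Flag destinations hit by an unusually large number of distinct
    # sources in the window – a typical volumetric DDoS signature.
    sources = defaultdict(set)
    for f in flows:
        sources[f["dst_ip"]].add(f["src_ip"])
    return {dst: len(srcs) for dst, srcs in sources.items() if len(srcs) >= min_sources}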

NetFlow is an easy way to monitor your network and provides you with several advantages, including making your network safer and collecting the data you need to optimize it. Having access to a comprehensive overview of your network from a single pane of glass makes monitoring your network easy and enables you to check what is going on with your network with a simple glance.

CySight takes the extra step to make life far easier for the network and security professional, with smart alerts, actionable network intelligence, scalability, and automated diagnostics and mitigation for a complete technology package.

CySight can provide you with the right tools to analyze traffic, monitor your network, protect it and optimize it. Contact us to learn more about NetFlow and how you can get the most out of this amazing tool.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

3 Ways Anomaly Detection Enhances Network Monitoring

With the increasing abstraction of IT services beyond the traditional server room, computing environments have evolved to be more efficient and also far more complex. Virtualization, mobile device technology, hosted infrastructure, Internet ubiquity and a host of other technologies are redefining the IT landscape.

From a cybersecurity standpoint, the question is how best to manage the growing complexity of environments and the changes in network behavior that come with every introduction of new technology.

In this blog, we’ll take a look at how anomaly detection-based systems are adding an invaluable weapon to Security Analysts’ arsenal in the battle against known – and unknown – security risks that threaten the stability of today’s complex enterprise environments.

Put your network traffic behavior into perspective

By continually analyzing traffic patterns at various intersections and time frames, performance and security baselines can be established, against which potential malicious activity is monitored and managed. But with large swathes of data traversing the average enterprise environment at any given moment, detecting abnormal network behavior can be difficult.

Through filtering techniques and algorithms based on live and historical data analysis, anomaly detection systems are capable of detecting even the most subtly crafted malicious software that may pose as normal network behavior. Also, anomaly-based systems employ machine-learning capabilities to learn about new traffic as it is introduced and provide greater context to how data traverses the wire, thus increasing its ability to identify security threats as they are introduced.

NetFlow is a popular tool for collecting network traffic to build accurate performance and cybersecurity baselines, against which normal network activity patterns can be distinguished from potentially alarming network behavior.
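One simple, robust way to score new traffic against such a baseline is with median and median-absolute-deviation statistics, which resist distortion by past spikes (a sketch, not any product’s algorithm):

import statistics

def anomaly_score(history, current):
    # Score a new per-interval measurement against the learned history;
    # scores beyond roughly 3 warrant investigation.
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1.0
    return (current - med) / (1.4826 * mad)  # ~= a z-score under normality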

Anomaly detection places Security Analysts on the front foot

An anomaly is defined as an action or event that is outside of the norm. But when a definition of what is normal is absent, loopholes can easily be exploited. This is often the case with signature-based detection systems that rely on a database of pre-determined virus signatures that are based on known threats. In the event of a new and yet unknown security threat, signature-based systems are only as effective as their ability to respond to, analyze and neutralize such new threats.

Since signatures do work well against known attacks, they are by no means useless in defending your network. But signature-based systems lack the flexibility of anomaly-based systems in that they are incapable of detecting new threats. This is one of the reasons signature-based systems are typically complemented by some iteration of a flow-based anomaly detection system.

Anomaly based systems are designed to grow alongside your network

The chief strength of anomaly detection systems is that they allow Network Operation Centers (NOCs) to adapt their security apparatus to the demands of the day. With threats growing in number and sophistication, detection systems that can discover, learn about and provide preventative methodologies are the ideal tools with which to combat the cybersecurity threats of tomorrow. NetFlow anomaly detection with automated diagnostics does exactly this by applying machine learning techniques to network threat detection, automating much of the detection aspect of security management while allowing Security Analysts to focus on prevention in their ongoing endeavors to secure their information and technology investments.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Two Ways Networks Are Transformed By NetFlow

According to an article on techtarget.com, “Your routers and switches can yield a mother lode of information about your network – if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs manufactured by your network is a lot like mining for gold: punching random holes to look for a few nuggets of information isn’t very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by a NetFlow traffic reporting protocol yields specific information, and you can easily sort, view and analyze it into what you want to use or need.

In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these “badly behaving” networks by investigating the data being generated by the routers, switches and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest internet threat, those things often distract IT pros from the real, every-day threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of many different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols to respond to unpredicted failures and misconfigurations is a workable solution, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices monitoring functions and gathering data, retrieving and utilizing NetFlow information makes tracing and repairing misconfigurations possible, easier and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow, and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, “what’s happening right now” view that other systems cannot. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provides rapid detection of breaches that other technologies often miss.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

How Traffic Accounting Keeps You One Step Ahead Of The Competition

IT has steadily evolved from a service and operational delivery mechanism to a strategic business investment. Suffice it to say that the business world and technology have become so intertwined that it’s unsurprising many leading companies within their respective industries attribute their success largely to their adoptive stance toward innovation.

Network Managers know that much of their company’s ability to outmaneuver the competition depends to a large extent on IT Ops’ ability to deliver world-class services. This brings traffic accounting into the conversation, since a realistic and measured view of your current and future traffic flows is central to building an environment in which all the facets involved in its growth, stability and performance are continually addressed.

In this blog, we’ll take a look at how traffic accounting places your network operations center (NOC) team on the front-foot in their objective to optimize the flow of your business’ most precious cargo – its data.

All roads lead to performance baselining 

Performance baselines lay the foundation for network-wide traffic accounting against predetermined environment thresholds. They also aid IT Ops teams in planning for network growth and expansion undertakings. Baseline information typically contains statistics on network utilization, traffic components, conversation and address statistics, packet information and key device metrics.

It serves as your network’s barometer, informing you when anomalies such as excessive bandwidth consumption and other causes of bottlenecks occur. Root causes of performance issues can easily creep into an environment unnoticed – for example, a recent update to a business-critical application that causes significant spikes in network utilization. Armed with a comprehensive set of baseline statistics and data that allow Network Performance and Security Specialists to measure, compare and analyze network metrics, root causes such as these can be identified with elevated efficiency.
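A baseline can be as simple as average utilization per hour of day, against which new samples are compared (an illustrative sketch; production baselining would also track variance and seasonality):

from collections import defaultdict

def hourly_baseline(samples):
    # samples: iterable of (hour_of_day, utilization_pct) observations
    buckets = defaultdict(list)
    for hour, util in samples:
        buckets[hour].append(util)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

def is_anomalous(baseline, hour, util, tolerance=1.5):
    # e.g. flag a post-update spike 50% above that hour's learned norm
    return util > baseline.get(hour, 0.0) * tolerance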

In broader applications, baselining gives Network Engineers a high-level view of their environments, thereby allowing them to configure Quality of Service (QoS) parameters, plan for upgrades and expansions, detect and monitor trends and peering analysis and a bevy of other functions.

Traffic accounting brings your future network into focus

With new-generation technologies such as the cloud, resource virtualization, “as-a-service” platforms and mobility revolutionizing the networks of yesteryear, capacity planning has taken on a new level of significance. Network monitoring systems (NMS) need to meet the demands of the new, complex, hybrid systems that are the order of the day. Thankfully, technologies such as NetFlow have evolved steadily over the years to address the monitoring demands of modern networks. NetFlow accounting is a reliable way to peer through the wire and gain deeper insight into the traffic that traverses your environment. Many Network Engineers and Security Specialists will agree that their understanding of their environments hinges on the level of insight they glean from their monitoring solutions.

This makes NetFlow an ideal traffic accounting medium, since it easily collects and exports data from virtually any connected device for analysis by a collector such as CySight. The technology’s standing in the industry has made it the “go-to” solution for curating detailed, insightful and actionable metrics that move IT organizations from a reactive to a proactive stance toward network optimization.

Traffic accounting’s influence on business productivity and performance

As organizations become increasingly technology-centric in their business strategies, their reliance on networks that consistently perform at peak will increase accordingly. This places new pressures on Network Performance and Security Teams  to conduct iterative performance and capacity testing to contextualize their environment’s ability to perform when it matters most. NetFlow’s ability to provide contextual insights based on live and historic data means Network Operation Centers (NOCs)  are able to react to immediate performance hindrances and also predict with a fair level of accuracy what the challenges of tomorrow may hold. And this is worth gold in the context of the ever-changing and expanding networking landscape.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Integrated Cyber Network Intelligence: Your Network has been infiltrated. How do you know where and what else is impacted?

Why would you need Granular Network Intelligence?

“Advanced targeted attacks are set to render prevention-centric security strategies obsolete and that information must become the focal point for our information security strategies.” (Gartner)

In this webinar we take a look at the internal and external threat networks pervasive in today’s enterprise and explore why organizations need granular network intelligence.

Webinar Transcription:

I’m one of the senior engineers here with CySight. I’ll be taking you through the webinar today. It should take about 30 to 40 minutes, I would say, and then we will get to some questions towards the end. So let’s get started.

So the first big question here is, “Why would you need something like this? Why would you need Granular Network Intelligence?” And the answer, if not obvious already, is that, really, in today’s connected world, every incident response includes a communications component. What we mean by that is in a managed environment, whether it’s traditional network management or security management, anytime that there’s an alert or some sort of incident that needs to be responded to, a part of that response is always going to be communications, who’s talking to who, what did they do, how much bandwidth did they use, who did they talk to?

And in a security particular environment, we need to be looking at things like whether external threats or internal threats, was there a data breach, can I look at the historical behavior or patterns, can I put this traffic into context as per the sort of baseline of that traffic? So that insight into how systems have communicated is critical.

Just some background industry information. According to Gartner, targeted attacks are set to render prevention-centric security strategies obsolete by 2020. Basically, what that means is there’s going to be a shift. They believe there’s going to be a shift to information- and end-user-centric security focused on an infrastructure’s end-points and away from the blocking and tackling of firewalls. They believe there’ll be three big trends. The first is continuous compromise, meaning an increase in the level of advanced, targeted attacks. It’s not going to stop. You’re never going to feel safe that someone won’t be potentially trying to attack you.

And most of those attacks will become financially motivated attacks, attempts to steal information and attempts to gather credit card data, if you have that, intellectual property, ransomware-type attacks. So this is not necessarily, “Hey, I’m just going to try and bring down your website or something,” in a traditional world where maybe people are playing around a little bit. This is more organized attacks specifically designed to either elicit a ransom or a reward or just steal information that could be turned into money out in a black market and it’s going to be more and more difficult for IT to have control over those end-user’s devices.

Again, very few organizations just have people sitting at their desks with desktop computers anymore. Everybody’s got laptops. Everybody’s got a phone or other tablet that’s moving around. People work from home. They work from the road. They’re connecting in to network resources from anywhere in the world at any time and it becomes more and more challenging for IT to sort of control those pathways of communications. So if you can’t control it, then you have to certainly be able to monitor it and react to it and the reaction is really in three major ways; determining the origin of the attack, the nature of the attack, and the damage incurred.

So we’re certainly assuming that there are going to be attacks, and we need to know where they’re coming from, what they’re trying to do, and whether they have been able to get there. You know, have we caught it in time, or has something already been infected, or has information been taken away from the network? That really leads us into this little graphic that we have about not being in denial. Understanding that, unfortunately, many people, in terms of their real visibility into the network, are somewhere in the blind or limited-type area. They don’t know what they don’t know, they think they should know but they don’t know, and so on.

But where they really need to be is at, “There’s nothing they don’t know.” And they need tools to be able to move them from wherever they are into this upper left-hand quadrant and certainly, that’s what our product is designed to do. So just kind of looking at the entire landscape of information flow from outside and inside and really understanding that there are new kinds of attacks, crawlers, botnets, ransomware, ToR, DoS and DDoS attacks that have been around for a while.

Your network may be used to download or host illicit material, leak intellectual property, be part of an attack, you know, something that’s command and controlled from somewhere else and your internal assets have become zombies and are being controlled by outside. There are lots of different threats. They’re all coming at you from all over the place. They’re all trying to get inside your network to do bad things and those attacks or that communication needs to be tracked.

Gartner also believes that 60% of enterprise security budgets will be allocated for rapid detection and response by 2020, up from less than 10% just a few years ago. What they believe is that too much of the spending has gone into prevention and not enough into monitoring and response. Prevention is the traditional firewalling, intrusion detection or intrusion prevention, things like that, which certainly are important. I'm not saying those things aren't useful or needed. But what we believe, and what other industry analysts certainly believe, is that that's not enough. There needs to be more than the simple "put up a wall around it and no one will be able to get in" kind of approach. If that were the case, there would be no incidents anywhere, because everybody's got a firewall: large companies, small companies. Everybody's got that today, and yet you don't go more than a couple of days without hearing about new hacks, new incidents.

Here in the United States, we just came through an election where they're still talking about people from other countries hacking into one party or another's servers to try and change the election results. On the enterprise side, there are lots and lots of businesses. Yahoo, in the last couple of months, certainly had a major attack that they had to come clean about, and of course both of those organizations, certainly Yahoo, have those standard intrusion prevention and firewall-type systems in place, but obviously they weren't enough.

So when you are breached, you need to be able to look and see what happened: "What can I still identify, what can I still control, and how do I get visibility into what happened?" For us, the information about the communication is the most important focal point for a security strategy, and we can look at a few different ways to do that without a signature-based mechanism. There are ways to look at normal traffic and very rapidly identify deviation from normal traffic. There are ways to find outliers and repeat offenders. There are ways to find nefarious traffic by correlating real-time threat feeds with current flows. We're going to be talking about all of these today, so that a security team can identify what was targeted, what was potentially compromised, and what information may have left the building, so to speak.

There’s a lot of challenges faced by existing firewalls, SIEM, and loosely-coupled toolsets. The level of sophistication, it’s going up and up again. It’s becoming more organized. It’s an international crime syndicate with very, very intelligent people using these tactics to try and gain money. As we’ve talked about, blocking attack, laying end-point solutions are just not enough anymore and of course, there’s a huge cost in trying to deploy, trying to maintain multiple solutions.

So having tools that aren't incredibly expensive but still give you valuable information can really be the best way to go. If you look at what we're calling sensors (packet capture, DPI-type systems), they certainly can do quite a lot, but they're incredibly expensive to deploy across a large organization. If you're trying to do packet capture everywhere, the cost is prohibitive. You can get a lot of detail, but unless you've got an unlimited budget, and very few people do, putting those sensors everywhere becomes a really difficult proposition.

But that doesn’t mean NetFlow can’t still use that kind of information. What we have found and what’s really been a major trend over the last couple of years is that existing vendors, on their devices, Check Point, Cisco, Palo Alto, packet brokers like Ixia, or all of the different people that you see up here, and more and more all the time, are actually adding that DPI information into their flow data. So it’s not separate from flow data. It’s these devices that have the packets going through them that can look at them all the way to layer seven and then include that information in the NetFlow export out to a product like ours that can collect it and display that.

So you can look into the payload and classify traffic according to payload content, identifying what's really running on port 80 or what have you. You can connect the dots between inside and outside when there's NAT, and read the URLs to quickly analyze where people are going and what those destinations are being used for. You can get specialized information like MAC addresses; from a firewall, denial information or AAA information; from a wireless LAN controller, SSID information; and other kinds of data that can be very useful to track down who was talking to whom.

So different types of systems are adding different kinds of information to the exports, but together they effectively give you the same capability as if you had sniffing or packet capture products all over the place. You can do it right in the devices, right from the manufacturer, send it through NetFlow to us, and still get that quality information without having to spend so much money to do it.
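
To make that concrete, here's a minimal Python sketch of the idea; the field names are hypothetical stand-ins for the kind of layer-7 attributes a vendor might embed in a flow record, not our actual export schema.

    flows = [
        {"src": "10.0.0.5", "dst": "93.184.216.34", "port": 80,
         "app": "http", "url": "example.com/", "bytes": 48210},
        {"src": "10.0.0.9", "dst": "198.51.100.7", "port": 80,
         "app": "bittorrent", "url": None, "bytes": 9410000},
    ]

    # Port 80 alone says "web"; the embedded layer-7 'app' field says otherwise.
    for f in flows:
        if f["port"] == 80 and f["app"] != "http":
            print(f"{f['src']} -> {f['dst']}: port 80 but classified as {f['app']}")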

The SANS organization, if you're not familiar with them, is a great organization that provides a lot of good information, whitepapers, and things like that. They have very often said that NetFlow might be the single most valuable source of evidence in network investigations of all sorts: security investigations, performance investigations, whatever it may be.

NetFlow data can give you very high-value intelligence about the communications; the key is in understanding how to get it and how to use it. Another benefit of using NetFlow over packet capture is the lack of huge storage requirements. Compared to traditional packet capture, NetFlow is much skinnier, and you can store information much longer-term than you could if you had to keep all of the packets. The cost, we've talked about.

And there are some interesting things like legal issues that are mitigated. If you are capturing all packets, you may run into compliance issues for things like PCI or HIPAA, and certain countries and jurisdictions around the world have very strict regulations about capturing and keeping end-user data. With NetFlow, you don't have that problem. It's metadata. Even with the new enriched fields that we talked about a couple of slides ago, it's still metadata, data about the data, not the actual end information. So even without that content, NetFlow still provides an excellent means of guiding investigations, especially in an attack scenario.

So here, if you bundle everything that we've talked about so far into one view and relate it to what we do here at CySight, you would see it on this screen. There are the end users: people, content, and things, the Internet of Things. You've got data coming from security cameras, Internet-connected vehicles, refrigerators; it could be just about anything, environmental-type information. It's all producing data. That data is traversing the network through multiple different types of platforms: routers, switches, servers, wireless LAN controllers, cloud-based systems and so forth, all of which can provide correlation of the information and data. We call that the correlation API.

We then take that data into CySight. We combine it with outside big data (we're going to talk about that in a minute), so not only the data of the connections but actual third-party information that we have related to known bad actors in the world. We can then use that information to provide you, the user, multiple benefits, whether it's anomaly detection, threat intelligence, security performance, or network accounting: all of the standard things that you would do with NetFlow data.

And then lastly, we integrate that data out to other third-party systems, whether it's your managed service provider or security service provider. It could be upstream event collectors, trappers, log systems, or SOAPA ecosystems, whether on-premise, in the cloud, or hybrid cloud. All of that is available via our product. So it starts at the traffic level, goes through everything, provides the data inside our product, and integrates out to third-party systems.

So let’s actually look into this a little more deeply. So the threat intelligence information is one of the two major components of our cyber security areas. One, the way this works is that threat data is derived from a large number of sources. So we maintain a list, effectively, a database of known bad IP addresses, known bad actors in the world. We collect that data through honeypots, and threat feeds, and crowd sources, and active crawlers, and our own internal user cyber feedback from our customers and all of that information combined allows us to maintain a very robust list of known bads, basically. Then we can combine that cyber intelligence data with the connection data, the flow data, the session data, inside and outside of your network, you know, the communications that you’re having, and compare the two.

So we have the big data threats. We can process that data along with what's happening locally in your network to provide extreme visibility: to find who's talking to whom, and what conversations your users are having with bad actors, whether that's ransomware, botnets, ToR, hacking, or malware. We then provide that information to you directly in the product. So we're constantly monitoring for that communication, and then we can help you identify it and remediate it as soon as possible.
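
As a rough illustration of that comparison, here's a minimal Python sketch; the IPs and field names are made up, and a real collector does this continuously, at scale, against live feeds.

    # Hypothetical known-bad list distilled from feeds, honeypots and crawlers.
    threat_ips = {"203.0.113.66", "198.51.100.23"}

    flows = [
        {"src": "10.0.0.5", "dst": "203.0.113.66", "bytes": 322000000},
        {"src": "10.0.0.7", "dst": "192.0.2.10", "bytes": 1200},
    ]

    # Flag any conversation where either endpoint appears in the threat list.
    for f in flows:
        matched = {f["src"], f["dst"]} & threat_ips
        if matched:
            print(f"ALERT: {f['src']} <-> {f['dst']} matches threat IP(s) {matched}")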

As we zoom in here a little bit, you can see that that threat information can be seen in summary or in detail. We have it categorized by threat level, type, severity, country of origin, affected IPs, and threat IPs. As anyone who's used our product in the past knows, we always provide an extreme amount of flexibility to slice and dice the data and give you a view into it in whatever way is best consumed by you. So you can look at things by type, by affected IP, by threat IP, by threat level, or whatever it may be, and no matter where you start, you can always drill in, filter, and re-display things in a different view.

Here’s an example of identifying some threat. These are ransomware threats, known ransomware IPs out there. I can very easily just right-click on that and say, “Show me the affected IP.” So I see that there’s ransomware. Who’s affected by that? Who is actually talking to that? And it’s going to drill right down into that affected IP or maybe multiple affected IPs that are known to be talking to those ransomware systems outside. You could see when it happened. You can see how much traffic.

In this example, our top affected IP has a tremendous amount of data, 307 megs over that time period, much more than the next ones below it, so that's clearly one that needs to be identified and responded to very quickly. It can be useful to look at it this way, to see, "Is this one system that's been infiltrated, or is it now starting to spread? Are there multiple systems? Where is it starting? Where is it going, and how can I stem that tide?" It's very easy to get that kind of information.

Here’s another example showing all ransomware attack, traffic, traversing a large ISP over a day. So whether you’re an end-user or certainly a service provider, we have many, many service provider customers that use this to monitor their customer’s traffic and so this could be something that you look at to say “Across all of my ISP, where is that ransomware traffic going? Maybe it’s not affecting me but it’s affecting one of my customers.” Then we can be able to drill into that and to alert and alarm on that, potentially block that right away as extra help to my customers.

Ransomware is certainly one of the scariest major threats out there now. It's happening every day. There are reports of police stations having to pay ransom to get their data back, hospitals having to pay ransom to get their data back. It's kind of interesting that, to our knowledge, there has never been a case where the ransomers, the bad guys out there, haven't actually released the information back and supplied the decryption key once paid. Because they want the money, and they want people to know, "Hey, if you pay us, we will give you your data back," which is really frightening, actually. It's happening all the time and needs to be monitored very carefully. This is certainly one of the major threats that exist today.

But there are other threats as well: peer-to-peer traffic, ToR traffic, things like that. Here's an example of looking at a single affected IP that is talking to multiple different threat IPs known to have been hosting illicit content over this time period. You can see that, clearly, it's doing something. If there is one host talking to one outside illicit threat IP, okay, maybe that's a coincidence and not an indication of something crazy going on. But when you can see that, in this case, one internal IP is talking to 89 known bad threat IPs that have been known to host illicit traffic, that's not a coincidence anymore. We know something's happening here. We can see when it happened. We know they're doing something. Let's go investigate. So that's just another way of giving you that first step to identify what's happening and when it's happening.
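
A minimal sketch of that fan-out logic, assuming flows have already been reduced to (internal IP, threat IP) pairs; the alert threshold here is an illustrative assumption, not a product default.

    from collections import defaultdict

    # (internal IP, known-bad IP) contact pairs, already joined against the feed.
    contacts = [
        ("10.0.0.21", "203.0.113.5"),
        ("10.0.0.21", "203.0.113.9"),
        ("10.0.0.33", "198.51.100.2"),
    ]

    fanout = defaultdict(set)
    for internal, threat in contacts:
        fanout[internal].add(threat)

    # One threat contact might be coincidence; dozens are not.
    for host, threats in sorted(fanout.items(), key=lambda kv: -len(kv[1])):
        if len(threats) > 1:  # tunable threshold, assumed for illustration
            print(f"{host} talked to {len(threats)} distinct threat IPs")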

You know, sometimes illicit traffic may just look like some obscured peer-to-peer content, but Auditor, our product, allows you to see it as full forensic evidence. You can see what countries it's talking to, what kind of traffic it is, and what threat level it is. It really gives you full, detailed data about what's happening.

Here’s another example of a ToR threat. So people who are trying to use ToR to anonymize their data or get around any kind of traffic analysis-type system will use ToR to try and obfuscate that data. But we have, as part of our threat data, a list of ToR exits and relays and proxies, and we can look at that and tell you, again, who’s sending data into this sort of the ToR world out there, which may be an indication of ransomware and other malware because they often use ToR to try and anonymize that data. But it, also, could be somebody inside the organization that’s trying to do something they shouldn’t be doing, get data out which could be very nefarious. You never want to think the worst of people but it does happen. It happens every day out there. So again, that’s another way that we can give you some information about threats.

We can also help you visualize the threats. Sometimes it's easier to understand by looking at a nice graphical depiction. We can show you where the traffic is moving and the volume of traffic, and how it's hopping around, in this case through a ToR endpoint. ToR is weird: the whole point of ToR is that it's very difficult to find an endpoint from another single endpoint. But visualizing it all together actually allows you to get a handle on where that traffic may be going.

Really large service providers, certainly the people who are interested in tracking this stuff down, need a product that can scale. We've got a very strong story about massive scalability. We can use a hierarchical system, add additional collectors, and do a lot of different things to handle a huge volume of traffic, even for Tier 1-type service providers, and still provide all of the data and detail that we've shown so far.

Here are a couple of other examples, we have a number of them, of different ways that you can look at the traffic and slice and dice it. This one shows top conversations. Looking for a spike in traffic, we can see there was a big spike here, suddenly: almost 200 gig in one hour. That's very unusual and can be identified very quickly, and then you can ask, "Okay, what were you doing during that time period? How could it possibly be that that much information was being sent out the door in such a short period of time?"
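
Conceptually, spotting that kind of spike is simple arithmetic. Here's a minimal sketch comparing the latest hourly total against the recent average; the 10x threshold and the numbers are illustrative only.

    # Per-hour byte totals in GB; the last hour is wildly above the baseline.
    hourly_gb = [3.1, 2.8, 3.4, 2.9, 3.0, 198.6]

    baseline = sum(hourly_gb[:-1]) / len(hourly_gb[:-1])
    latest = hourly_gb[-1]
    if latest > 10 * baseline:  # 10x is an assumed alerting threshold
        print(f"Spike: {latest:.1f} GB this hour vs ~{baseline:.1f} GB typical")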

We also have port usage. We can look at individual ports that are known threats over whatever time period you're interested in. Here we can see this is port 80 traffic, but it's actually connecting to known ToR exits, so it's not just web surfing. You can visualize changes over time, see how things are increasing, and identify who is doing that to you.

Here’s another example of botnet forensics. Understanding a conversation to a known botnet command and control server and so many times, those come through, initially, as a phishing email. So they’ll just send millions of spam emails out there hoping for somebody to click on it. When they do click on it, it downloads the command and control software and then away it goes. So you can actually kind of see the low-level continual spam happening, and then all of a sudden, when there’s a spike, you actually get that botnet information, the command and control information that starts up and from there all kinds of bad things can happen.

So identifying impacted systems that have more than one infection is a great way to prioritize who you should be looking at. We can give you that data. I can see this IP has all kinds of different threats that it's been communicating with; that is certainly a system you want to take a look at very quickly.

I talked about visualization some already. Here are a few more examples of visualizations in the product. Many of our customers use this as the first way they look at the data, and then drill into the actual numbers after the visualization, because you can see from a high level where things are going and then say, "Okay, let me check that out."

Another thing that we do as part of our cyber bundle, if you will, is anomaly detection and what we call “Two-phased Anomaly Detection.” Most of what I’ve talked about so far has been related to threat detection, matching up those known bads to conversations or communications into and out of your network. But there are other ways to try and identify security problems as well. One of those is anomaly detection.

So anomaly detection is our product's ability to baseline traffic in your network, across lots of different metrics: counts, flows, packets, bytes, bits per second, TCP flags, and so forth, all happening all the time. We're baselining all the time, hour over hour, day over day, and week over week, to understand what is normal, and then using our sophisticated behavior-based anomaly detection, our machine learning ability, to identify when things are outside the norm.

So phase one is the baseline, so that we know what is normal and can alert or identify when something is outside the norm. Phase two is running a diagnostic process on those events: understanding what the event was, when it happened, what kind of traffic was involved, what IPs and ports were involved, what interfaces the traffic went through, and what it possibly portends; was it a DDoS-type attack, was it a port sweep or crawler-type attack, what was it? The result of that is our alert diagnostic screen, like you can see in the background.
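
Here's a minimal sketch of the phase-one arithmetic, baselining a single metric per hour of day and flagging values several standard deviations outside the norm. The real system baselines many metrics with machine learning and feeds each event into the phase-two diagnostics; this only shows the core idea.

    import statistics

    # Phase one: per-hour-of-day baselines of one metric (flows per second).
    history = {9: [410, 395, 430, 405], 10: [520, 505, 540, 530]}

    def is_anomalous(hour, value, k=3.0):
        samples = history[hour]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        return abs(value - mu) > k * sigma  # outside k standard deviations

    print(is_anomalous(9, 412))   # False: within the hour-9 norm
    print(is_anomalous(9, 1600))  # True: phase two would diagnose this event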

So it qualifies the cause and impact for each offending behavior. It gives you the KPI information and generates a ticket. It allows you to integrate with third-party trap receivers, so we can send our alerts and diagnostic information out as an SNMP trap to another system, and everything can be rolled up into a manager-of-managers-type system if you wish. You can also intelligently whitelist traffic that isn't really offensive but that we may have identified as an anomaly; of course, you want to reduce the number of false positives out there, and we can help you do that.

So to summarize (I think we're just about at the end of the presentation now): what can CySight do in cyber intelligence? It really comes down to forensics, anomaly detection, and threat intelligence. We can record and analyze network data on a very granular level, even in extremely complex, large, and challenging environments. We can evaluate what is normal versus abnormal. We can continually monitor and benchmark your network and assets. We can intelligently baseline your network to detect activity that deviates from those baselines. We can continuously monitor for communication with IPs of poor reputation and remediate it ASAP to reduce the probability of infection. And we can help you store and compile that flow information to use as evidence in the future.

You’re going to end up with, then, extreme visibility into what’s happening. You’re going to have three-phase detection. You have full alerting and reporting. So any time any of these things do happen, you can get an alert. That alert can be an email. It can be a trap out to another system as I mentioned earlier. Things can be scheduled. They’re running in the background 24/7 keeping our software’s eyes on your network all the time and then give you that forensics drill-down capability to quickly identify what’s happened, what’s been impacted, and how you can stop its spread.

The last thing we want to say is that everything we've shown today is the result of a large development effort over a number of years. We've been in business for over 10 years, delivering NetFlow-based Predictive AI Baselining analytics. We've made a very heavy development push into security over the last few years, and we are constantly innovating, constantly improving, constantly listening to what our customers want and need and building that into future releases of the product.

So if you are an existing customer listening to this, we'd love to hear your feedback on what we can do better. If you are potentially a new customer on this webinar, we'd love your ideas on whether what you've seen fits what you need, or whether there are other things you would like to see in the product. We really do listen to our customers quite extensively, and because of that, we have a great reputation with them.

We have a list of customers up here, with some great quotes from them. We really do play across the entire enterprise and across service providers, and we love our customers. We think they know that, and that's why they continue to stay with us year after year and continue to work with us to make the product even better.

So we want to thank everybody for joining the webinar today. We'll end on this note: we believe that our products offer the most cost-effective approach to detect threats and quantify network traffic ubiquitously, across everything that you might need in the security and cyber network intelligence arena. If you have any interest in talking to us, seeing a live demo of the product, or getting a 30-day evaluation, we're very happy to talk to you. Just contact us.

If you’ve got a salesperson and you want to get threat intelligence, we’re happy to enable it on your existing platform. If you are new to us, hit our website, please, at cysight.ai. Fill out the form for a trial, and somebody will get to you immediately and we’ll get you up in the system and running very, very quickly and see if we can help you identify any of these security threats that you may have. So with that, we appreciate your time and look forward to seeing you at our webinar in the future. Bye.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Benefits of a NetFlow Performance Deployment in Complex Environments

Since no two environments are identical and no network remains stagnant in Network Monitoring today, the only thing we can expect is the unexpected!

The network has become a living dynamic and complex environment that requires a flexible approach to monitor and analyze. Network and Security teams are under pressure to go beyond simple monitoring techniques to quickly identify the root causes of issues, de-risk hidden threats and to monitor network-connected things.

A solution’s flexibility refers to not only its interface but also the overall design.

From a user interface perspective, flexibility refers to the ability to perform analysis on any combination of data fields with multiple options to view, sort, cut and count the analysis.

From a deployment perspective, flexibility means options for deployment on Linux or Windows environments and the ability to digest all traffic or scale collection with tuning techniques that don’t fully obfuscate the data.

Acquiring flexible tools is a superb investment as they enrich and facilitate local knowledge retention. They enable multiple network-centric teams to benefit from a shared toolset, and the business begins to leverage the power of big data Predictive AI Baselining analytics that, over time, grows and extends beyond the tool's original requirements as new information becomes visible.

What makes a Network Management System (NMS) truly scalable is its ability to analyze all the far reaches of the enterprise using a single interface, with all layers of complexity in the data abstracted away.

NetFlow, sFlow, IPFIX and their variants are all about abstracting routers, switches, firewalls or taps from multiple vendors into a single searchable network intelligence.

It is critical to ensure that abstraction layers are independently scalable to enable efficient collection and be sufficiently flexible to enable multiple deployment architectures to provide low-impact, cost-effective solutions that are simple to deploy and manage.

To simplify deployment and management, it has to work out of the box and be self-configuring and self-healing. Many flow monitoring systems require a lot of time to configure or maintain, making them expensive to deploy and hard to use.

A flow-based NMS needs to meet various alerting, Predictive AI Baselining analytics, and architectural deployment demands. It needs to adapt to rapid change, pressure on enterprise infrastructure and possess the agility needed to adapt at short notice.

Agility in provisioning services, rectifying issues, customizing and delivering alerts and reports and facilitating template creation, early threat detection and effective risk mitigation, all assist in propelling the business forward and are the hallmarks of a flexible network management methodology.

Here are some examples that require a flexible approach to network monitoring:

  • DDoS attack behavior changes randomly
  • Analyze Interface usage by Device by Datacenter by Region
  • A new unknown social networking application suddenly becomes popular
  • Compliance drives need to discover Insider threats and data leakages occurring under the radar
  • Companies grow and move offices and functions
  • Laws change requiring data retention suitable for legal compliance
  • New processes create new unplanned pressures
  • New applications cause unexpected data surges
  • A vetted application creates unanticipated denials of service
  • Systems and services become infected with new kinds of malicious agents
  • Virtualization demands abruptly increase
  • Services and resources require a bit tax or 95th percentile billing model (see the sketch after this list)
  • Analyzing flexible NetFlow fields supported by different device vendors such as IPv6, MPLS, MAC, BGP, VPN, NAT paths, DNS, URL, Latency etc.
  • Internet of Things (IoT) become part of the network ecosystem and require ongoing visibility to manage
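
As a side note on the billing item above, here is a minimal Python sketch of the 95th percentile ("burstable") billing model, assuming 5-minute usage samples; the sample values are illustrative only.

    # A month of 5-minute usage samples in Mbps (shortened for illustration).
    samples_mbps = [12, 15, 11, 14, 90, 13, 16, 12, 11, 15,
                    14, 13, 220, 12, 16, 15, 13, 14, 12, 11]

    ranked = sorted(samples_mbps)
    cut = int(len(ranked) * 0.95)   # keep the lowest 95% of samples
    billable = ranked[cut - 1]      # highest rate after dropping the top 5%
    print(f"Billable rate: {billable} Mbps")  # the 220 Mbps burst is ignored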

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Benefits of Network Security Forensics

The networks that your business operates on are often open and complex.

Your IT department is responsible for mitigating network risks, managing performance and auditing data to ensure functionality.

Using NetFlow forensics can help your IT team maintain the competitiveness and reliability of the systems required to run your business.

In IT, network security forensics involves the monitoring and analysis of your network’s traffic to gather information, obtain legal evidence and detect network intrusions.

These activities help your company perform the following actions.

  • Adjust to increased data and NetFlow volumes
  • Identify heightened security vulnerabilities and threats
  • Align with corporate and legislative compliance requirements
  • Contain network costs
  • Analyze network performance demands
  • Recommend budget-friendly implementations and system upgrades

NetFlow forensics helps your company maintain accountability and trace usage; these functions become increasingly difficult as your network becomes more intricate.

The more systems your network relies on, the more difficult this process becomes.

While your company likely has standard security measures in place, e.g. firewalls, intrusion detection systems and sniffers, these lack the capability to record all network activity.

Tracking all your network activity in real-time at granular levels is critical to the success of your organization.

Until recently, the ability to perform this type of network forensics has been limited due to a lack of scalability.

Now, there are web-based solutions that can collect and store this data to assist your IT department with this daunting task.

Solution capabilities include:

  • Record NetFlow data at a micro level
  • Discover security breaches and alert system administrators in real-time
  • Identify trends and establish performance baselines
  • React to irregular traffic movements and applications
  • Better provisioning of network services

The ability to capture all of this activity will empower your IT department to provide more thorough analysis and take faster action to resolve system issues.

But, before your company can realize the full value of NetFlow forensics, your team needs to have a clear understanding of how to use this intelligence to take full advantage of these detailed investigative activities.

Gathering the data through automation is a relatively simple process once the required automation tools have been implemented.

Understanding how to organize these massive amounts of data into clear, concise and actionable findings is an additional skill set that must be developed within your IT team.

Having a team member, whether internal or via a third-party vendor, that can aggregate your findings and create visual representations that can be understood by non-technical team members is a necessary part of NetFlow forensics. It is important to stress the necessity of visualization; this technique makes it much easier to articulate the importance of findings.

In order to accurately and succinctly visualize security issues, your IT staff must have a deep understanding of the standard protocols of your network. Without this level of understanding, the ability to analyze and investigate security issues is limited, if not impossible.

Utilizing software to support the audit functions required to perform NetFlow forensics will help your company support the IT staff in gathering and tracking these standard protocols.

Being able to identify, track and monitor the protocols in an automated manner will enhance your staff’s ability to understand and assess the impact of these protocols on network performance and security. It will also allow you to quickly assess the impact of changes driven by real-time monitoring of your network processes.

Sound like a daunting task?

It doesn’t have to be. Choose a partner to support your efforts and help you build the right NetFlow forensics configuration to support your business.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

The Internet of Things (IoT) – pushing network monitoring to its limits

In the age of the Internet of Things (IoT), billions of connected devices – estimated at 20 billion by the year 2020 – are set to permeate virtually every aspect of daily life and industry. Sensors that track human movement in times of natural disasters, kitchen appliances reminding us to top up on food supplies and even military implementations such as situational awareness in wartime are just a few examples of IoT in action.

Exciting as these times may be, they also highlight a new set of risk factors for Security Specialists who need to answer the call for more vigorous, robust and proactive security solutions. Considerations around security monitoring and management are set to expand far beyond today’s norms as entry points, data volumes and connected hardware multiply at increasing rates in the age of hyper-interconnectedness.

Security monitoring will need to take a more preemptive stance in the age of IoT

With next-generation smart products being used in industries such as utilities, manufacturing, transportation, insurance, and logistics, networks will become exposed to new security vulnerabilities as IoT and enterprises intersect. Smart devices connected to the enterprise can easily act as a bridge to the network, potentially exposing organizations’ information assets. Apply this scenario to a world where virtually every device can communicate with the network from practically anywhere, and the need for more forward-thinking security monitoring becomes apparent. Device-to-device communications will need stronger encryption and ways for network teams to monitor and understand communications, behavior and data patterns. With more “unmanned” computers, appliances and devices coming on-line, understanding new network anomalies will be a challenge.

Networks will become far more heterogeneous

Embedded firmware, operating systems, shorter life-cycles, expanding capabilities and security considerations unique to IoT devices will make networks far more complex and expansive than they are today. IoT will hasten more heterogeneous environments, which security teams must be prepared for. The device influx will also drive IPv6 adoption and introduce new protocols. According to Technology.org, "Enterprises will have to look for solutions capable of guarding data gateways in IoT devices using tailored protocol filters and policy capabilities. Besides, regular security updates and patches will become integral to product lifecycle to eliminate every possibility of a compromise." This will increase reliance on technologies such as granular NetFlow collection that provides forensics and anomaly detection, offering enterprises trusted security solutions that are both easily deployed and capable of evolving organically alongside new technologies as they are introduced to environments.

IoT will introduce new types of data into the enterprise

Traffic signal systems, power stations, water sanitation plants and other services vital to society are all incorporating IoT to some degree. Device security in a physical and non-physical context will be important as enterprises need to look at ways of preventing unauthorized entry into the network. Gartner asserts that, “IoT objects possess the ability to change the state of the environment around them, or even their own state (for example, by raising the temperature of a room automatically once a sensor has determined it is too cold, or by adjusting the flow of fluids to a patient in a hospital bed based on information about the patient’s medical records)”.

Considering the risk to human life inherent in hacks of systems of this nature, the level of monitoring and surveillance for compliance becomes more pertinent each day as these kinds of threats begin to occur. This will place a high demand on end-point security solutions to be both timely and accurate in their correlation of network data, giving Security Teams the granularity needed to provide context around current and evolving risks.

The now infamous Chrysler hack is a primary example of the potentialities of IoT-based breaches and the threats they pose to human safety.

The role of Netflow in forearming the enterprise in the age of IoT

Monitoring systems will be required to identify, categorize and alert Network Operations Centers (NOCs) on a plethora of new datasets, demanding big data capabilities from their network monitoring solutions. NetFlow, used correctly, offers an opportunity to provide enterprises with substantial intelligence and an early-warning mechanism that assists them in managing the steady move toward IoT and taking a forearmed stance in security operations. NetFlow's ability to match the scale at which the enterprise grows means NOCs can neutralize the threat of being overwhelmed by a deluge of devices generating volumes of data that require around-the-clock monitoring.

They can achieve deep visibility, central to security in an IoT world, with a NetFlow monitoring, reporting and analysis tool that provides the ability to perform deep security forensics, intelligent baselining, anomaly detection, diagnostics and endpoint threat detection. NetFlow end-point solutions speak to the changing needs of large environments by reducing Mean Time to Know (MTTK), which in turn shrinks Mean Time to Repair and Resolve (MTTR).

For more information on how CySight is helping organizations build comprehensive network security, performance and management solutions, contact us, or download a free copy of our guide on 8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health