
Category Archive for ‘Peering Analytics’

3 Key Differences Between NetFlow and Deep Packet Inspection (DPI) Packet Capture Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled an ongoing debate over which network analysis and monitoring tools best serve the modern engineer, placing Packet Capture and NetFlow analysis for network detection and response (NDR) at center stage of the conversation.

Granted, when analyzing unencrypted traffic, both can be extremely valuable tools for maintaining and optimizing complex environments. As an engineer, however, I tend to focus on solutions that give me the insights I need without too great a cost in resources, while complementing my team's ability to maintain and optimize the environments we support.

With this in mind, let's take a look at how NetFlow, in the context of today's highly dense networks, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, also known as Deep Packet Inspection (DPI), was once rich in network metrics but has faltered in the face of encryption, and its segment-based approach makes it expensive to deploy and maintain. It requires sniffing devices and agents throughout the network, which invariably demand a huge amount of maintenance during their lifespan.

In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with DPI can quickly become unfeasible. NetFlow, by contrast, enjoys wide vendor support across virtually the entire networking landscape, making almost every switch, router or firewall, as well as VMware, VeloCloud, GCP, Azure and AWS cloud environments, a NetFlow/IPFIX/sFlow/ixFlow "ready" device. This built-in readiness to capture and export data-rich metrics makes it easy for engineers to deploy and utilize. Thanks to that popularity, CySight's NetFlow analyzer provides varying feature sets with enriched vendor-specific flow fields, enabling security operations center (SOC) and network operations center (NOC) teams to take full advantage of data-rich packet flows.

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, the ability of NetFlow, IPFIX, sFlow and ixFlow to provide WAN-wide metrics in near real-time makes them suitable troubleshooting companions for engineers. Enriched context, in turn, enables a very complete qualification of impact, both from a standard traffic analysis perspective and through endpoint threat views, machine learning and AI diagnostics.

The latest flow methods extend this wealth of information by collecting via a template-based scheme, striking a balance between detail and high-level insight without placing too much demand on networking hardware, something that cannot be said for Deep Packet Inspection. NetFlow's constant evolution alongside the networking landscape sees it used as a complement to solutions such as Cisco's NBAR, and packet broker vendors such as Keysight (Ixia), Gigamon, nProbe, NetQuest, Niagara Networks and CGS Tower Networks have recognized that they need only export flexible, enriched flow fields to reveal details at the packet level.

NetFlow places your environment in greater context

Context is a chief area where granular NetFlow beats Packet Capture: it allows engineers to quickly locate root causes in cybersecurity, threat hunting and performance investigations by providing a more situational view of the environment, its data flows, bottleneck-prone segments, application behavior, device sessions and so on.

One could argue that Deep Packet Inspection (DPI) is able to provide much of this information too, but with networks today more than 98% encrypted, even using certificates won't give engineers the broader context around the information it presents. This hamstrings IT teams in detecting anomalies that could be ascribed to any number of factors, such as cyber threats, untimely system-wide application or operating system updates, or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Deep Packet Inspection obsolete?

Neither Deep Packet Inspection (DPI) nor legacy NetFlow analyzers can scale in retention, so when comparing the two genres of solutions, the only win a low-end NetFlow analyzer has over a DPI solution is that DPI is segment-based, whereas a flow solution is inherently agentless.

Using NetFlow to identify an attack profile or illicit traffic, however, can only be attained when flow retention is deep (granular). Done well, NetFlow strikes that balance between detail and context and gives SOCs and NOCs intelligent insights that reveal the broader factors influencing your network's ability to perform.

Gartner's assertion that a balance of 80% NetFlow monitoring coupled with 20% Packet Capture is the perfect combination for performance monitoring no longer holds in the face of rising encryption, but it is correct in attesting to NetFlow's growing prominence as the monitoring tool of choice. As NetFlow and its various iterations, such as sFlow, IPFIX, ixFlow, flow logs and other flow protocols, continue to expand the breadth of context they provide network engineers, that margin is set to shift further in NetFlow's favor over time.


Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows-per-second a flawed way to measure a NetFlow collector's capability?

Flows-per-second is often considered the primary yardstick for measuring a NetFlow analyzer's flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means of giving network professionals the data to make sense of the traffic on their network without having to resort to expensive per-segment packet sniffing tools.

A flow record contains, at minimum, the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap or other network gateway: Source IP, Destination IP, Source Port, Destination Port, Protocol, ToS, Ingress Interface and Egress Interface. Flow records are exported to a flow collector, where they are ingested and the information is presented oriented to the engineer's purposes.
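
As a rough illustration, the sketch below models such a record in Python. The field names are illustrative rather than any vendor's export schema; the packet and byte counters are the accumulating measurements a flow cache maintains per record:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Minimal flow record: the key fields most flow exporters emit."""
    src_ip: str        # Source IP address
    dst_ip: str        # Destination IP address
    src_port: int      # Source port
    dst_port: int      # Destination port
    protocol: int      # IP protocol number (6 = TCP, 17 = UDP)
    tos: int           # Type of Service byte
    ingress_if: int    # SNMP index of the input interface
    egress_if: int     # SNMP index of the output interface
    packets: int = 0   # Packets counted for this flow
    octets: int = 0    # Bytes counted for this flow
```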

Measurement

Measurement has always been how the IT industry expresses power and competency. However, the formula used to express power and ability changes when a technology's design undergoes a paradigm shift.

For example, we used to measure how fast a computer was by its CPU clock speed, believing that the higher the clock speed, the more powerful the computer. When multi-core chips were introduced, clock speeds dropped, yet CPUs in fact became more powerful. The primary clock-speed indicator became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading: it does not accurately reflect the real power and capability of a flow collector to capture and process flow data, and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure, and harder still to use to quantify a product's scalability. Various factors can dramatically impact the ability to collect flows and to retain enough of them to perform higher-end diagnostics.

It's important to look not just at flows-per-second but at the granularity retained per minute (the flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; scalability under high flow variance, sudden bursts, and growing numbers of devices and interfaces; the speed of reporting over time; the ability to retain both short-term and historical collections; and the confluence of these factors as they pertain to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine, as appropriate granularity is needed to deliver the visibility required for anomaly detection, network forensics, root cause analysis, billing substantiation, peering analysis and data retention compliance.

The higher the flows-per-second rate and the flow variance, the more challenging it becomes to achieve a high flow retention rate when archiving flow records into a data warehouse.

A vendor's capability statement might reflect a high flows-per-second consumption ability, but many flow software tools have retention-rate limitations by design.

This can mean that, irrespective of achieving a high flow collection rate, a NetFlow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the top 10 bandwidth abusers. NetFlow products of this kind are easy to identify because they tend to offer benefits oriented primarily toward identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course a very important benefit of a NetFlow analyzer. However, it is of marginal benefit today, when a large amount of abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar of many NetFlow analysis products. Abuses like DDoS, P2P, botnets, and hacker or insider data exfiltration continue to occur, and can at minimum impact networking equipment and user experience. The inability to quantify and understand small flows creates great risk and leaves organizations exposed.

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product's ability to collect and retain the critical information required in today's world, where copious data has created severe network blind spots.

To qualify whether a tool is really suitable for the purpose, you need to know more about the flows-per-second collection formula the vendor is quoting, and some deeper investigation should be carried out to qualify the claims.


With this in mind, here are three key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

    • Qualify whether the flows-per-second rate provided is a burst rate or a sustained rate.
    • Ask how the collection and retention rates might be affected if the flows have high flow variance (e.g. during a DDoS attack).
    • Ask how collection, archiving and reporting are impacted when flow variance increases as many devices, interfaces and distinct IPv4/IPv6 conversations are added, and test what degradation in speed you can expect after the system has been recording for some time.
    • Ask how the collection and retention rates might change when additional fields or measurements are added to the flow template (e.g. MPLS, MAC address, URL, latency).

  2. How many flow records can be retained per minute?

    • Ask how the actual number of records inserted into the data warehouse per minute can be verified, for both short-term and historical collection.
    • Ask what happens to the flows that were not retained.
    • Ask what the flow retention logic is (e.g. Top Bytes, First N).

  3. What information granularity is retained, both short-term and historically?

    • Does the data's time granularity degrade as it ages (e.g. 1 day retained per minute, 2 days per hour, 5 days per quarter)?
    • Can you control the granularity, and if so, for how long?


Remember – Rate of collection does not translate to information retention.

Do you know what's really stored in the software's database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that retention granularity that provides a flow product's benefits.
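
To make the distinction concrete, here is a minimal sketch (the numbers and the "Top Bytes" logic are illustrative assumptions, not any particular product's design) of a collector that happily ingests hundreds of thousands of flows per minute yet archives only the top 500 by volume. The many small flows where DDoS, botnet and exfiltration activity hides are exactly what gets discarded:

```python
import heapq
import random

def archive_top_n(flows, n=500):
    """'Top Bytes' retention: keep only the n largest flows from the
    minute's collection buffer and silently discard the rest."""
    return heapq.nlargest(n, flows, key=lambda f: f["octets"])

# One simulated minute: 50 elephant flows plus a flood of tiny flows
# (the typical shape of a DDoS, a scan or slow data exfiltration).
flows = [{"octets": random.randint(10**6, 10**8)} for _ in range(50)]
flows += [{"octets": random.randint(40, 400)} for _ in range(600_000)]

retained = archive_top_n(flows)
print(f"collected this minute: {len(flows):,}")     # 600,050
print(f"retained this minute:  {len(retained):,}")  # 500
print(f"small flows retained:  {sum(f['octets'] < 1000 for f in retained)}")  # 0
```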


NetFlow for Usage-Based Billing and Peering Analysis

Usage-based billing refers to methods of calculating and passing back the costs of running a network to the consumers of the data that flows through it. Both Internet Service Providers (ISPs) and corporations need usage-based billing, albeit with different billing models.

NetFlow is the ideal technology for usage-based billing because it allows for the capture of all transactional information pertaining to usage, and some smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.

Advances in telecommunication technology have enabled ISPs to offer more convenient, streamlined billing options to customers based on bandwidth usage.

One billing model, used most commonly by ISPs in the USA, is known as the 95th percentile. Traffic is measured at a five-minute granularity, typically over the course of a month, and the ISP disregards the highest 5% of samples when establishing the bill amount. This is an advantage to data consumers who have bursts of traffic, because they are not financially penalized for exceeding a traffic threshold for brief periods of time.
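
A minimal sketch of the calculation, using one common convention (sort the month's five-minute samples and bill on the sample sitting at the 95th percentile, so the top 5% of bursts are ignored):

```python
def percentile_95(samples_mbps):
    """Billable rate under the 95th percentile model: sort the
    5-minute usage samples, discard the top 5%, and bill on the
    highest remaining sample."""
    ordered = sorted(samples_mbps)
    index = int(len(ordered) * 0.95) - 1   # last sample inside the 95%
    return ordered[index]

# A 30-day month yields 30 * 24 * 12 = 8,640 five-minute samples,
# so the 432 highest samples (5%) are effectively free of charge.
```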

The disadvantage of the 95th percentile model is that it is not a sustainable business model as data increasingly becomes a utility like electricity.

A second approach is a utility-based, metered billing model that involves keeping a tally of all bytes consumed by a customer, with some knowledge of the data path to allow for premium or free traffic plans.
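
A sketch of that tallying logic follows. The free-zone range and flow fields are hypothetical placeholders; a production system would classify the data path far more richly (peering links, on-net caches, content partners):

```python
import ipaddress
from collections import defaultdict

# Hypothetical unmetered ("free zone") destination ranges.
FREE_ZONES = [ipaddress.ip_network("203.0.113.0/24")]

def is_free(dst_ip: str) -> bool:
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in FREE_ZONES)

def meter(flows):
    """Tally billable bytes per customer, exempting free-zone traffic."""
    usage = defaultdict(int)
    for flow in flows:
        if not is_free(flow["dst_ip"]):
            usage[flow["customer"]] += flow["octets"]
    return usage
```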

Metered Internet usage is found in countries like Australia and, most recently, Canada, which have nationally moved away from the 95th percentile model. The approach is also very popular in corporations whose business units share common network infrastructure and are unwilling to accept a "per user" cost, preferring a real consumption-based cost.

Benefits of usage-based billing are:

  • Improved transparency about the cost of services;
  • Costs feedback to the originator;
  • Raised cost sensitivity;
  • Good basis for active cost management;
  • The basis for Internal and external benchmarking;
  • Clear substantiation for increases in bandwidth costs;
  • Shared infrastructure costs can also be based on consumption;
  • Network performance improvements.

For corporations, usage-based billing enables the IT department to become a shared service and to be viewed as a profit center rather than a cost center: a benefit and a catalyst for business growth rather than a necessary but expensive line item in the budget.

For ISPs in the USA, there is no doubt that a utility-based cost-per-byte model will remain contentious as video and TV over the Internet grow. In other regions, new business models that package video into "free zone" services have become popular, meaning the cost of premium content provision falls onto the content provider, which could make utility billing viable in the USA as well.

NetFlow tools can include methods for building billing reports and offer a variety of usage-based billing model calculations.

Some NetFlow tools even include an API that allows the chart of accounts to be retained and driven from traditional accounting systems, leaving the NetFlow system to focus on the tallying. Grouping algorithms should be flexible enough to allow grouping by different variables such as interfaces, applications, Quality of Service (QoS), MAC addresses, MPLS and IP groups. For ISPs and large corporations, Autonomous System Numbers (ASNs) also allow for analysis of data paths, enabling sensible negotiations with peering partners and content partners.

Look out for more discussion on peering in an upcoming blog…


5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something every business is actively doing, or even aware of. And why should they be?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network will help get business managers on board, especially when a good performance management solution might be a cost that hadn't been considered. So what are the benefits?

1.  Avoiding downtime – Downtime across an entire network is rare, but downtime in small areas of the network is possible if they become overloaded. Downtime of any kind is just not something a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updating as users save the files.  This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2.  Network speed – This is one of the most important and most easily quantified aspects of managing network flow. It affects every user on the network constantly, and anything that slows users down means either more work hours or delays. Obviously, neither is a good problem to have. Whether it's uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3.  Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, it's very easy to set thresholds that show when the network needs to be upgraded or enlarged.

4.  Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect.  An unsecured network is worse than a useless network, and data breaches can ruin a company.  So how does this play into performance management?

By monitoring NetFlow performance, it's easy to see where the most resources are being used. Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw. With proper software, these issues can not only be monitored, but also recorded and corrected.

5.  Usability – Unfortunately, not all employees have a working knowledge of how networks operate.  In fact, as many in IT support will attest, most employees aren’t tech savvy.  However, most employees will need to use the network as part of their daily work.  This conflict is why usability is so important.  The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly.  Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network's performance and flow, or perform network forensics to determine where issues are? Don't try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.


What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using that packet as the standard for the rest of the flow. It has two variants which are designed to allow for more flexibility when it comes to implementing NetFlow on a network.

NetFlow was originally developed by Cisco in the mid-1990s as a packet switching technology for Cisco routers, implemented in IOS 11.x.

The concept was that instead of having to inspect each packet in a "flow", the device need only inspect the first packet and create a "NetFlow switching record" (also called a "route cache record").

Once that record was created, further packets in the same flow would not need to be inspected; they could simply be forwarded based on the determination made for the first packet. While this idea was forward-thinking, it had many drawbacks that made it unsuitable for larger Internet backbone routers.
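
The core of the idea is a cache keyed on the packet's flow tuple. A minimal sketch, with a plain dictionary standing in for the router's route cache:

```python
flow_cache = {}

def handle_packet(pkt):
    """First packet of a flow triggers the expensive inspection and
    forwarding decision; subsequent packets hit the cache and only
    update the flow's counters."""
    key = (pkt["src_ip"], pkt["dst_ip"],
           pkt["src_port"], pkt["dst_port"], pkt["protocol"])
    entry = flow_cache.get(key)
    if entry is None:
        # The full lookup happens exactly once per flow, here.
        entry = flow_cache[key] = {"packets": 0, "octets": 0}
    entry["packets"] += 1
    entry["octets"] += pkt["length"]
```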

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about what IP addresses or application ports were "inside" the traffic was to deploy packet sniffing systems which would sit inline (or connected to SPAN/mirror ports) and "sniff" the traffic. This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
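
As a minimal sketch of that pipeline, the snippet below receives NetFlow v5 (whose layout is fixed: a 24-byte header followed by 48-byte records) on UDP port 2055, a conventional collector port. A real collector must also handle template-based v9/IPFIX, sequence gaps and malformed datagrams:

```python
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")             # 24-byte NetFlow v5 header
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # 48-byte NetFlow v5 record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))

while True:
    data, exporter = sock.recvfrom(65535)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue                 # v9/IPFIX are template-based; not handled here
    for i in range(count):
        r = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src = socket.inet_ntoa(struct.pack("!I", r[0]))
        dst = socket.inet_ntoa(struct.pack("!I", r[1]))
        # r[5] = packets, r[6] = octets, r[9]/r[10] = src/dst port
        print(f"{src}:{r[9]} -> {dst}:{r[10]}  {r[6]} bytes")
```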

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.


Seven Reasons To Analyze Network Traffic With NetFlow

NetFlow allows you to keep an eye on the traffic and transactions that occur on your network. It can detect unusual traffic, a request to a malicious destination, or the download of a large file. NetFlow analysis helps you see what users are doing, gives you an idea of how your bandwidth is used, and can help you improve your network as well as protect you from a number of attacks.

There are many reasons to analyze network traffic with NetFlow, including making your system more efficient as well as keeping it safe. Here are some of the reasons behind many organizations' adoption of NetFlow analysis:

  • Analyze your entire network. NetFlow allows you to keep track of all the connections occurring on your network, including those hidden by a rootkit. You can review all the ports and external hosts an IP address connected to within a specific period of time, and collect data to get an overview of how your network is used.
  • Track bandwidth use. You can use NetFlow to track bandwidth use and see reports on average usage over time. This can help you determine when spikes are likely to occur so that you can plan accordingly. Tracking bandwidth allows you to better understand traffic patterns, and that information can be used to identify unusual patterns, such as a surge caused by a user downloading a large file or by a DDoS attack.
  • Keep your network safe from DDoS attacks. These attacks target your network by overloading your servers with more traffic than they can handle. NetFlow can detect this type of unusual surge in traffic, as well as identify the botnet controlling the attack and the infected computers following the botnet's orders and sending traffic to your network. Besides stopping an attack in progress, you can block the botnet and its network of infected computers to prevent future attacks (a minimal surge-detection sketch follows this list).
  • Protect your network from malware. Even the safest network can still be exposed to malware via users connecting from home or bringing their mobile devices to work. A bot present on a home computer or smartphone could access your network, but NetFlow will detect this type of abnormal traffic, and auto-mitigation tools can automatically block it.
  • Optimize your cloud. By tracking bandwidth use, NetFlow can show you which applications slow down your cloud and give you an overview of how your cloud is used. You can also track performance to optimize your cloud and make sure your cloud service provider is delivering the solution they advertised.
  • Monitor users. Everyone brings their own smartphone to work nowadays and might use it for purposes other than work. Company data may also be accessible to insiders who have legitimate access but an inappropriate agenda, downloading and sharing sensitive data with outside sources. You can keep track of how much bandwidth is used for data leakage or personal activities, such as using Facebook during work hours.
  • Data retention compliance. NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can help businesses and service providers achieve and maintain data retention compliance for a wide range of government and industry regulations.
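
As promised above, here is a minimal sketch of the surge-detection idea behind the DDoS bullet: compare each interval's byte total against a short moving baseline and flag anything several times above it. Real products use far richer behavioral models; the window length and factor here are arbitrary assumptions:

```python
from collections import deque

window = deque(maxlen=12)   # the last hour of 5-minute byte totals

def check_surge(interval_bytes, factor=3.0):
    """Flag an interval whose volume exceeds the recent average by
    `factor`; flagged samples are kept out of the baseline window."""
    if len(window) == window.maxlen:
        average = sum(window) / len(window)
        if interval_bytes > factor * average:
            return True
    window.append(interval_bytes)
    return False
```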

NetFlow is an easy way to monitor your network and provides you with several advantages, including making your network safer and collecting the data you need to optimize it. Having access to a comprehensive overview of your network from a single pane of glass makes monitoring your network easy and enables you to check what is going on with your network with a simple glance.

CySight takes the extra step to make life far easier for network and security professionals, with smart alerts, actionable network intelligence, scalability, and automated diagnostics and mitigation in a complete technology package.

CySight can provide you with the right tools to analyze traffic, monitor your network, protect it and optimize it. Contact us to learn more about NetFlow and how you can get the most out of this amazing tool.


Two Ways Networks Are Transformed By NetFlow

According to an article on techtarget.com, "Your routers and switches can yield a mother lode of information about your network–if you know where to dig." The article goes on to say that excavating and searching through the endless traffic data and logs generated by your network is a lot like mining for gold: punching random holes to look for a few nuggets of information isn't very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by a NetFlow traffic reporting protocol yields specific information that you can easily sort, view and analyze into what you want or need.

In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these badly behaving networks by investigating the data being generated by the routers, switches and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks run the gamut from mismatched applications and hardware to wireless access points opened up to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest Internet threat, those things often distract IT pros from the real, everyday threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with lots of different devices and device types interconnected, adds to the challenge of operating a computer network. Even though developing protocols that respond to unpredicted failures and misconfigurations is a workable solution, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices monitoring functions and gathering data, retrieving and utilizing NetFlow information makes tracing and repairing misconfigurations possible, easier and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an "unblinking" eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow, and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, "what's happening right now" view that other systems cannot provide. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provides rapid detection of breaches that other technologies often miss.


How Traffic Accounting Keeps You One Step Ahead Of The Competition

IT has steadily evolved from a service and operational delivery mechanism to a strategic business investment. Suffice it to say that the business world and technology have become so intertwined that it’s unsurprising many leading companies within their respective industries attribute their success largely to their adoptive stance toward innovation.

Network Managers know that much of their company’s ability to outmaneuver the competition depends to a large extent on IT Ops’ ability to deliver world-class services. This brings traffic accounting into the conversation, since a realistic and measured view of your current and future traffic flows is central to building an environment in which all the facets involved in its growth, stability and performance are continually addressed.

In this blog, we'll take a look at how traffic accounting places your network operations center (NOC) team on the front foot in its objective to optimize the flow of your business's most precious cargo: its data.

All roads lead to performance baselining 

Performance baselines lay the foundation for network-wide traffic accounting against predetermined environment thresholds. They also aid IT Ops teams in planning for network growth and expansion undertakings. Baseline information typically contains statistics on network utilization, traffic components, conversation and address statistics, packet information and key device metrics.

It serves as your network's barometer, informing you when anomalies such as excessive bandwidth consumption and other causes of bottlenecks occur. Root causes of performance issues can easily creep into an environment unnoticed, such as a recent update to a business-critical application that causes significant spikes in network utilization. Armed with a comprehensive set of baseline statistics and data that allow network performance and security specialists to measure, compare and analyze network metrics, root causes such as these can be identified with elevated efficiency.
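
A minimal sketch of that baselining logic: learn a per-hour-of-day profile of utilization, then flag samples that stray several standard deviations from it. The threshold and the 30-sample minimum are illustrative assumptions, not any product's algorithm:

```python
import statistics
from collections import defaultdict

history = defaultdict(list)   # hour of day (0-23) -> past utilization samples (%)

def record_sample(hour, utilization):
    history[hour].append(utilization)

def is_anomalous(hour, utilization, sigmas=3.0):
    """Flag a sample deviating from that hour's baseline by more
    than `sigmas` standard deviations."""
    samples = history[hour]
    if len(samples) < 30:     # too little history to form a baseline
        return False
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples) or 1e-9   # floor a zero stdev
    return abs(utilization - mean) > sigmas * stdev
```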

In broader applications, baselining gives network engineers a high-level view of their environments, allowing them to configure Quality of Service (QoS) parameters, plan for upgrades and expansions, detect and monitor trends, perform peering analysis, and carry out a bevy of other functions.

Traffic accounting brings your future network into focus

With new-generation technologies such as the cloud, resource virtualization, as-a-service platforms and mobility revolutionizing the networks of yesteryear, capacity planning has taken on a new level of significance. Network monitoring systems (NMS) need to meet the demands of the complex, hybrid systems that are now the order of the day. Thankfully, technologies such as NetFlow have evolved steadily over the years to address the monitoring demands of modern networks. NetFlow accounting is a reliable way to peer through the wire and gain deeper insight into the traffic that traverses your environment. Many network engineers and security specialists will agree that their understanding of their environments hinges on the level of insight they glean from their monitoring solutions.

This makes NetFlow an ideal traffic accounting medium, since it easily collects and exports data from virtually any connected device for analysis by a collector such as CySight. The technology's standing in the industry has made it the "go-to" solution for curating detailed, insightful and actionable metrics that move IT organizations from a reactive to a proactive stance toward network optimization.

Traffic accounting’s influence on business productivity and performance

As organizations become increasingly technology-centric in their business strategies, their reliance on networks that consistently perform at peak will increase accordingly. This places new pressure on network performance and security teams to conduct iterative performance and capacity testing to contextualize their environment's ability to perform when it matters most. NetFlow's ability to provide contextual insights based on live and historical data means network operations centers (NOCs) are able to react to immediate performance hindrances and also predict, with a fair level of accuracy, what the challenges of tomorrow may hold. And that is worth gold in the context of an ever-changing and expanding networking landscape.


Benefits of a NetFlow Performance Deployment in Complex Environments

Since no two environments are identical and no network remains stagnant, the only thing we can expect in network monitoring today is the unexpected!

The network has become a living, dynamic and complex environment that requires a flexible approach to monitor and analyze. Network and security teams are under pressure to go beyond simple monitoring techniques to quickly identify root causes of issues, de-risk hidden threats and monitor network-connected things.

A solution’s flexibility refers to not only its interface but also the overall design.

From a user interface perspective, flexibility refers to the ability to perform analysis on any combination of data fields with multiple options to view, sort, cut and count the analysis.

From a deployment perspective, flexibility means options for deployment on Linux or Windows environments and the ability to digest all traffic or scale collection with tuning techniques that don’t fully obfuscate the data.

Acquiring flexible tools is a superb investment, as they enrich and facilitate local knowledge retention. They enable multiple network-centric teams to benefit from a shared toolset, and the business begins to leverage the power of big-data Predictive AI Baselining analytics that, over time, grows and extends beyond the tool's original requirements as new information becomes visible.

What makes a Network Management System (NMS) truly scalable is its ability to analyze all the far reaches of the enterprise using a single interface with all layers of complexity to the data abstracted.

NetFlow, sFlow, IPFIX and their variants are all about abstracting routers, switches, firewalls or taps from multiple vendors into a single searchable network intelligence.

It is critical to ensure that abstraction layers are independently scalable to enable efficient collection and be sufficiently flexible to enable multiple deployment architectures to provide low-impact, cost-effective solutions that are simple to deploy and manage.

To simplify deployment and management, a solution has to work out of the box and be self-configuring and self-healing. Many flow monitoring systems require a lot of time to configure and maintain, making them expensive to deploy and hard to use.

A flow-based NMS needs to meet various alerting, Predictive AI Baselining analytics, and architectural deployment demands. It needs to adapt to rapid change, pressure on enterprise infrastructure and possess the agility needed to adapt at short notice.

Agility in provisioning services, rectifying issues, customizing and delivering alerts and reports and facilitating template creation, early threat detection and effective risk mitigation, all assist in propelling the business forward and are the hallmarks of a flexible network management methodology.

Here are some examples that require a flexible approach to network monitoring:

  • DDoS attack behavior changes randomly
  • Analyze Interface usage by Device by Datacenter by Region
  • A new unknown social networking application suddenly becomes popular
  • Compliance drives the need to discover insider threats and data leakages occurring under the radar
  • Companies grow and move offices and functions
  • Laws change requiring data retention suitable for legal compliance
  • New processes create new unplanned pressures
  • New applications cause unexpected data surges
  • A vetted application creates unanticipated denials of service
  • Systems and services become infected with new kinds of malicious agents
  • Virtualization demands abruptly increase
  • Services and resources require a bit tax or 95th percentile billing model
  • Analyzing flexible NetFlow fields supported by different device vendors such as IPv6, MPLS, MAC, BGP, VPN, NAT paths, DNS, URL, Latency etc.
  • Internet of Things (IoT) become part of the network ecosystem and require ongoing visibility to manage
