Archives

Category Archive for ‘Network Monitoring’

3 Key Differences Between NetFlow and Deep Packet Inspection (DPI) Packet Capture Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled an ongoing debate about which network analysis and monitoring tools best serve the needs of the modern engineer, placing Packet Capture and NetFlow analysis for NDR at center stage of the conversation.

Granted, when analyzing unencrypted traffic, both can be extremely valuable tools in the ongoing effort to maintain and optimize complex environments. As an engineer, though, I tend to focus on solutions that give me the insights I need without too heavy a cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of today’s highly dense networks, delivers three key requirements that network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, also known as Deep Packet Inspection (DPI), was once rich in network metrics, but encryption and its segment-based approach have made it expensive to deploy and maintain. It requires sniffing devices and agents throughout the network, which invariably require a huge amount of maintenance during their lifespan.

In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with DPI can quickly become unfeasible. In the case of NetFlow, wide vendor support across virtually the entire networking landscape makes almost every switch, router or firewall, as well as VMware, VeloCloud, GCP, Azure and AWS cloud environments, a NetFlow / IPFIX / sFlow / ixFlow “ready” device. Devices’ built-in readiness to capture and export data-rich metrics makes it easy for engineers to deploy and utilize. Also, thanks to this popularity, CySight’s NetFlow analyzer provides varying feature sets with enriched vendor-specific flow fields, enabling security operations center (SOC) and network operations center (NOC) teams to gain full advantage of data-rich packet flows.
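As a hedged illustration of how little is needed on the collection side, the sketch below (Python, not CySight’s implementation) listens for flow export datagrams on UDP port 2055, a commonly used but not mandatory port, and reads the export version and count/length field from the packet header; the names used are illustrative assumptions.

import socket
import struct

# Flow exporters (NetFlow v5/v9, IPFIX) send UDP datagrams to a collector.
# Port 2055 is commonly used for NetFlow but is configurable on the exporter.
LISTEN_ADDR = ("0.0.0.0", 2055)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)

while True:
    data, (exporter_ip, _port) = sock.recvfrom(65535)
    if len(data) < 4:
        continue  # too short to be a flow export header
    # NetFlow v5/v9 headers start with version + record count; IPFIX starts with
    # version (10) + total message length. Both fit the same 2 x 16-bit layout.
    version, count_or_length = struct.unpack("!HH", data[:4])
    print(f"export from {exporter_ip}: version={version}, count/length={count_or_length}")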

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, the ability of NetFlow, IPFIX, sFlow and ixFlow to provide WAN-wide metrics in near real time makes flow data a suitable troubleshooting companion for engineers. Add to this enriched context, and it enables a very complete qualification of impact from a standard traffic analysis perspective as well as endpoint threat views, machine learning and AI diagnostics.

The latest flow methods extend this wealth of information via a template-based collection scheme, striking a balance between detail and high-level insight without placing too much demand on networking hardware – something that can’t be said for Deep Packet Inspection. NetFlow’s constant evolution alongside the networking landscape sees it used as a complement to solutions such as Cisco’s NBAR, and packet brokers such as Keysight (Ixia), Gigamon, nProbe, NetQuest, Niagara Networks and CGS Tower Networks have recognized that they need only export flexible, enriched flow fields to reveal details at the packet level.
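To make the template idea concrete, here is a simplified sketch of how a collector decodes records against a template the exporter has advertised. Real NetFlow v9/IPFIX encoding uses numeric field type IDs, FlowSets and padding that are omitted here; the field names and lengths below are assumptions for illustration.

import struct
from ipaddress import IPv4Address

# An advertised template: ordered (field_name, length_in_bytes) pairs.
TEMPLATE = [
    ("src_addr", 4), ("dst_addr", 4),
    ("src_port", 2), ("dst_port", 2),
    ("protocol", 1), ("in_bytes", 4), ("in_pkts", 4),
]

def decode_record(payload, template=TEMPLATE):
    """Decode one flow record according to the advertised template."""
    record, offset = {}, 0
    for name, length in template:
        value = int.from_bytes(payload[offset:offset + length], "big")
        if name.endswith("_addr"):
            value = str(IPv4Address(value))
        record[name] = value
        offset += length
    return record

# A hand-built record matching the template: one small DNS conversation.
raw = (IPv4Address("10.1.1.5").packed + IPv4Address("8.8.8.8").packed
       + struct.pack("!HHB", 51515, 53, 17) + struct.pack("!II", 120, 2))
print(decode_record(raw))

Because the exporter can change the template, the same collector can take on new enriched fields (MAC addresses, URLs, latency and so on) without a redesign.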

NetFlow places your environment in greater context

Context is a chief area where granular NetFlow beats out Packet Capture, since it allows engineers to quickly locate root causes relating to cyber security, threat hunting and performance by providing a more situational view of the environment: its data flows, bottleneck-prone segments, application behavior, device sessions and so on.

One could argue that Deep Packet Inspection (DPI) is able to provide much of this information too, but as networks today are over 98% encrypted, even using certificates won’t give engineers the broader context around the information DPI presents. This hamstrings IT teams in detecting anomalies that could be ascribed to a number of factors, such as cyber threats, untimely system-wide application or operating system updates, or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Deep Packet Inspection obsolete?

Neither Deep Packet Inspection (DPI) nor legacy NetFlow analyzers can scale in retention, so when comparing those two genres of solutions, the only win a low-end NetFlow analyzer has over a DPI solution is that DPI is segment-based, whereas a flow solution is inherently better because it is agentless.

Using NetFlow to identify an attack profile or illicit traffic, however, can only be attained when flow retention is deep (granular). Done well, NetFlow strikes that balance between detail and context and gives SOCs and NOCs intelligent insights that reveal the broader factors influencing your network’s ability to perform.

Gartner’s assertion that a balance of 80% NetFlow monitoring coupled with 20% Packet Capture is the perfect combination for performance monitoring no longer holds because of the rise of encryption, but it is correct in attesting to NetFlow’s growing prominence as the monitoring tool of choice. As NetFlow and its various iterations, such as sFlow, IPFIX, ixFlow, flow logs and other flow protocols, continue to expand the breadth of context they provide to network engineers, that margin is set to increase further in NetFlow’s favor over time.


Hunt SUNBURST and Trojans with Turbocharged Netflow.

US: December 13, 2020 was an eye-opener worldwide, as SolarWinds’ Orion software was found to have been hacked using a trojanized update known as the SUNBURST backdoor. The damage reached thousands of customers, many of them leaders in their markets, such as Intel, Microsoft, Lockheed, Visa, and several US government agencies. The extent of the damage has not been fully quantified, as more is still being learned; nevertheless, the fallout includes real-world harm.

The recent news of the SolarWinds Orion hack is very unfortunate. The hack has left governments and customers who used the SolarWinds Orion tools especially vulnerable, and the fallout will take many months to be fully recognized. This is a prime example of how a flow metadata tool’s inability to retain sufficient records produces ineffective intelligence, and how the resulting inability to reveal hidden issues and threats is now clearly impacting organizations’ and governments’ networks and connected assets.

Given what we already know and that more is still being learned, it makes good sense to investigate an alternative solution.

 
 

What Is the SUNBURST Trojan Attack?

SUNBURST, as named by FireEye, is malware that acts as a trojan horse, designed to look like a safe and trustworthy update for SolarWinds customers. To accomplish such infiltration of seemingly well-protected organizations, the hackers first had to infiltrate the SolarWinds infrastructure itself. Once SolarWinds was successfully hacked, the bad actors could rely on the trust between SolarWinds and the targeted organizations to carry out the attack. The malware, which looked like a routine update, was in fact creating a back door, compromising the SolarWinds Orion software and any customer who updated their systems.

How was SUNBURST detected?

Initially, the SUNBURST malware went completely undetected for some time. The attackers started installing remote access tool malware into the SolarWinds Orion software as far back as March 2020, essentially trojanizing it. On December 8, 2020, FireEye discovered that their own red-team tools had been stolen and began to investigate while reporting the event to the NSA. The NSA, itself a SolarWinds software user and responsible for US cybersecurity defense, was unaware of the hack at the time. A few days later, as the information became more public, other cybersecurity firms began reverse engineering and analyzing the hack.

IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most well-known tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention. This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence.

From a forensics perspective, you would appreciate that you can only analyze the data you retain, and with large and growing network and cloud data flows most tools (regardless of their marketing claims) actually cannot scale in retention and choose to drop records, keeping only what they believe is the salient data.

Imputed outcome data leads to misleading results and missing data causes high risk and loss!​

A simple way to think about this is if you could imagine trying to collect water from a blasting fire hose into a drinking cup. You just simply cannot collect very much!

Many engineers build scripts to try to attain the missing visibility and do a lot of heavy lifting, and then finally come to the realization that no matter how much lifting you do, if the data ain’t there you can’t analyze it.

We found that over 95% of network and cyber visibility tools retain as little as 2% to 5% of all information collected resulting in completely missed analytics, severely misleading analytics, and risk!

How does CySight hunt SUNBURST and other Malware?

It’s often necessary to look back through data we have already collected and analyze it with new knowledge as it comes to light.

For a recently discovered Ransomware or Trojan, such as SUNBURST, it is helpful to see if it’s been active in the past and when it started. Another example is being able to analyze all the related traffic and qualify how long a specific user or process has been exfiltrating an organization’s Intellectual Property and quantify the risk.

SUNBURST enabled the criminals to install a Remote Access Trojan (RAT). RATs, like most malware, are introduced as part of legitimate-looking files. Once enabled, they allow the hacker to view a screen or a terminal session, enabling them to look for sensitive data like customers’ credit cards, intellectual property, or sensitive company or government secrets.

Even though many antivirus products can identify many RAT signatures, the software and protocols used to view remotely and to exfiltrate files continues to evade many malware detection systems. We must therefore turn to traffic analytics and machine learning to identify traffic behaviors and data movements that are out of the ordinary.

Anonymity by Obscurity


In order to evade detection, hackers try to hide in plain sight and use protocols that are not usually blocked like DNS, HTTP, and Port 443 to exfiltrate your data.


Many methods are used to exfiltrate your data. An often-used method is to use p2p technologies to break files into small pieces and send the data out slowly, unnoticed by other monitoring systems. Thanks to CySight’s small-footprint Dropless Collection you can easily identify sharding, and our anomaly detection will identify the outlier traffic and quickly bring it to your attention. When used in conjunction with a packet broker partner such as Keysight, Gigamon, nProbe or another supported packet metadata exporter, CySight provides the extreme application intelligence to give you complete visibility to control the breach.
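For readers who want the underlying idea rather than the product behavior, a minimal sketch of sharding detection (not CySight’s algorithm) is to aggregate retained flows per source/destination pair over a long window and flag pairs that move a large total volume through many individually small flows; the thresholds and record shape below are assumptions.

from collections import defaultdict

MIN_FLOWS = 500               # many small transfers...
MAX_AVG_BYTES = 2_000         # ...each one individually small...
MIN_TOTAL_BYTES = 5_000_000   # ...adding up to a large volume leaving the network

def find_sharding_suspects(flows):
    """flows: iterable of (src_ip, dst_ip, bytes) tuples from retained records."""
    totals = defaultdict(lambda: [0, 0])  # (src, dst) -> [flow_count, byte_total]
    for src, dst, nbytes in flows:
        entry = totals[(src, dst)]
        entry[0] += 1
        entry[1] += nbytes
    suspects = []
    for (src, dst), (count, total) in totals.items():
        if count >= MIN_FLOWS and total >= MIN_TOTAL_BYTES and total / count <= MAX_AVG_BYTES:
            suspects.append((src, dst, count, total))
    return suspects

# 6,000 flows of ~1.2 KB each to one external host: small individually, large in total.
print(find_sharding_suspects([("10.0.0.8", "198.51.100.9", 1200)] * 6000))

Note that this only works if the small flows were retained in the first place, which is the point made above about Dropless Collection.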

Identifying exposure


In today’s connected world, every incident has a communications component.

You need to keep in mind that all malware needs to “call home”, and today this is going to be through onion-routed connections, encrypted VPNs, or via zombies seeded as botnets, making it difficult if not impossible to identify the hacking teams involved, which may be personally, commercially or politically motivated bad actors.

Multi-focal threat hunting

Threat hunting for SUNBURST or other Malware requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. The Hacker who has control of your system will likely install multiple back-doors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Sunburst or other Malware IP addresses or domains. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates and categorizes threats from around the globe or from partner threat feeds.

CySight identifies your systems conversing with bad actors and allows you to backtrack through historical data to see how long it’s been going on.


Using Big Data threat feeds collated from multiple sources, thousands of IPs of bad reputation are correlated in real time with your traffic, against threat data freshly derived from many enterprises and sources, to provide effective visibility of threats and attackers. These feeds include (a sketch of the matching step follows this list):

  • Cyber feedback

  • Global honeypots

  • Threat feeds

  • Crowd sources

  • Active crawlers

  • External 3rd Party
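As a rough sketch of the correlation step (CySight automates this; the feed format and flow schema here are assumptions for illustration), every retained flow’s endpoints are checked against a set of bad-reputation IPs built from feeds like those above:

def load_threat_feed(lines):
    """Build a lookup set from feed entries: one IP per line, '#' for comments."""
    return {line.strip() for line in lines if line.strip() and not line.startswith("#")}

def flag_bad_conversations(flows, bad_ips):
    """flows: iterable of dicts with 'src', 'dst' and 'bytes' keys (illustrative schema)."""
    for flow in flows:
        if flow["src"] in bad_ips or flow["dst"] in bad_ips:
            yield flow  # conversation with a known bad actor - worth backtracking

bad_ips = load_threat_feed(["# sample feed", "203.0.113.7", "198.51.100.99"])
flows = [
    {"src": "10.0.0.21", "dst": "203.0.113.7", "bytes": 4096},
    {"src": "10.0.0.30", "dst": "172.16.4.2", "bytes": 900},
]
for alert in flag_bad_conversations(flows, bad_ips):
    print("Bad-actor conversation:", alert)

The same matching can be replayed over historical retention to establish when a newly published indicator first appeared on your network.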

So how exactly do you go about turbocharging your Flow and Cloud metadata?

CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market. Lack of granular visibility is one of, if not the main, flaw in such products today: due to inefficient design they retain as little as 2% to 5% of all information collected, severely impacting visibility, and the missing and misleading analytics that result increase risk and cost organizations greatly.

CySight’s Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making performance analytics, anomaly detection, threat intelligence, forensics, compliance, zero trust and IP accounting and mitigation a breeze!

Cyberwar Defense using Predictive AI Baselining

The world is bracing for a worldwide cyberwar as a result of current political events. Cyberattacks can be carried out by governments and hackers in an effort to destabilize economies and undermine democracy. Long before launching cyberattacks, state-funded cyber warfare teams have been studying vulnerabilities for years.

An important transition has occurred, and it is the emergence of bad actors from unfriendly countries that must be taken seriously. The most heinous criminals in this new cyberwarfare campaign are no longer hiding. Experts now believe that a country could conduct more sophisticated cyberattacks on national and commercial networks. Many countries are capable of conducting cyberattacks against other countries, and all parties appear to be prepared for cyber clashes.

So, how would cyberwarfare play out, and how can organizations defend against it?

The first step is to presume that your network has been penetrated or will be compromised soon, and that several attack routes will be employed to disrupt business continuity or vital infrastructure.

Denial-of-service (DoS/DDoS) attacks are capable of spreading widespread panic by overloading network infrastructures and network assets, rendering them inoperable, whether they are servers, communication lines, or other critical technologies in a region.

In 2021, ransomware became the most popular criminal tactic, but in 2022 national cyber warfare teams are keen to use it for first strikes, propaganda and military fundraising. It is only a matter of time before it escalates. Ransomware tactics are used in politically motivated attacks to encrypt computers and render them inoperable. Despite using publicly accessible ransomware code, this is now considered weaponized malware because there is little to no possibility that a decryption key will be released. Ransomware assaults by financially motivated criminals have a different objective, which must be identified before they cause financial and social damage, as detailed in a recent RANSOMWARE PAPER.

To win the cyberwar against either cyber extortion or cyberwarfare attacks, you must first have a complete 360-degree view of your network, with deep transparency and intelligent context to detect dangers within your data.

Given what we already know and the fact that more is continually being discovered, it makes sense to evaluate our one-of-a-kind integrated Predictive AI Baselining and Cyber Detection solution.

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention.

This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence. From a forensics perspective, you would appreciate that you can only analyze the data you retain, and with large and growing network and cloud data flows most tools (regardless of their marketing claims) actually cannot scale in retention and choose to drop records in lieu of what they believe is salient data.

Imputed outcome data leads to misleading results and missing data causes high risk and loss!


So how exactly do you go about defending your organization’s network and connected assets?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques thereby solving the issues of archiving huge flow feeds in the smallest footprint and the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property.

Additionally, ransomware and other cyber-attacks continue to impact businesses, so you need both machine learning and endpoint threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple, yet it is potentially the most powerful tool for finding ransomware and other network and cloud issues. The footprints of all communications are carried in the flow data, and given the right tools you can retain all the evidence of an attack, infiltration or exfiltration.

However, not all flow analytic solutions are created equal, and due to an inability to scale in retention, the NetFlow ideal becomes unattainable. For a recently discovered ransomware or trojan, such as “Wannacry”, it is helpful to see whether it has been active in the past and when it started.

Another important aspect is having the context to be able to analyze all the related traffic to identify concurrent exfiltration of an organization’s Intellectual Property and to quantify and mediate the risk. Threat hunting for RANSOMWARE requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. The Hacker who has control of your system will likely install multiple back-doors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight Predictive AI Baselining analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight identifies your systems conversing with bad actors and allows you to backtrack through historical data to see how long it’s been going on.

Summary

IdeaData’s CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs.

CySight’s Predictive AI Baselining, Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making threat intelligence, anomaly detection, forensics, compliance, performance analytics and IP accounting a breeze!

Let us help you today. Please schedule a time to meet https://calendly.com/cysight/

Advanced Predictive AI leveraging Granular Flow-Based Network Analytics.

IT’S WHAT YOU DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS.

Existing network management and network security point solutions are facing a major challenge due to the increasing complexity of the IT infrastructure.

The main issue is a lack of visibility into all aspects of physical network and cloud network usage, as well as increasing compliance, service level management, regulatory mandates, a rising level of sophistication in cybercrime, and increasing server virtualization.

With appropriate visibility and context, a variety of network issues can be resolved and handled by understanding the causes of network slowdowns and outages, detecting cyber-attacks and risky traffic, determining the origin and nature, and assessing the impact.

It’s clear that in today’s work-at-home, cyberwar, ransomware world, having adequate network visibility in an organization is critical, but defining how much visibility is considered “right” visibility is becoming more difficult, and more often than not even well-seasoned professionals make incorrect assumptions about the visibility they think they have. These misperceptions and malformed assumptions are much more common than you would expect and you would be forgiven for thinking you have everything under control.

When it comes to resolving IT incidents and security risks and assessing the business impact, every minute counts. The primary goal of Predictive AI Baselining coupled with deep contextual Network Forensics is to improve the visibility of Network Traffic by removing network blindspots and identifying the sources and causes of high-impact traffic.

Inadequate solutions (even the most well-known) lull you into a false sense of comfort, but because they tend to retain only the top 2% or 5% of network communications, they frequently cause false positives and red herrings. Cyber threats can come from a variety of sources: new types of crawlers or botnets, infiltration and, ultimately, exfiltration that can destroy a business.

Networks are becoming more complex. Negligence and failure to update and patch security holes mean many inadvertent threats can open the door to malicious outsiders. Your network could be used to download or host illegal materials, or it could be used entirely or partially to launch an attack. Ransomware attacks are still on the rise, and new ways to infiltrate organizations are being discovered. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks continue unabated, posing a significant risk to your organization. Insider threats can also occur as a result of internal hacking or a breach of trust, and your intellectual property may be slowly leaked through negligence, hacking, or disgruntled employees.

Whether you are buying a phone, a laptop or a cyber security visibility solution, the same rule applies: marketers are out to get your hard-earned cash by flooding you with specifications and solutions whose abilities are radically overstated. Machine Learning (ML) and Artificial Intelligence (AI) are two of the most recent acronyms to join the list. The only thing you can know for sure, dear cyber and network professional reader, is that they hold a lot of promise.

One thing I can tell you from many years of experience in building flow analytics, threat intelligence, and cyber security detection solutions is that without adequate data your results become skewed and misleading. Machine Learning and AI enable high-speed detection and mitigation but without Granular Analytics (aka Big Data) you won’t know what you don’t know and neither will your AI!

In our current Covid world we have all come to appreciate, in some way, the importance of big data, ML and AI, and just how quickly they can help mitigate a global health crisis when properly applied. We only have to look back a few years, when drug companies didn’t have access to granular data, to see the severe impact that poor data had on people’s lives; Thalidomide is one example. In the same way, when cyber and network visibility solutions only surface-scrape the data, the information will be incorrect and misleading and could seriously impact your network and the livelihoods of the people you work for and with.

The Red Pill or The Blue Pill?

The concept of flow or packet-based analytics is straightforward, yet they have the potential to be the most powerful tools for detecting ransomware and other network and cloud-related concerns. All communications leave a trail in the flow data, and with the correct tools, you can recover all evidence of an assault, penetration, or exfiltration.

Not all analytic systems are made equal, and the flow/packet ideal becomes unattainable for other tools because of their inability to scale in retention. Even well-known tools have serious flaws and are limited in their ability to retain complete records, which is often overlooked. They do not effectively provide visibility into the blind spots they claim to cover.

As already pointed out, over 95% of network and deep packet inspection (DPI) solutions struggle to retain even 2% to 5% of all data captured in medium to large networks, resulting in completely missed diagnoses and significantly misleading analytics that lead to misdiagnosis and risk!

It is critical to have the context and visibility necessary to assess all relevant traffic to discover concurrent intellectual property exfiltration and to quantify and mitigate the risk. It’s essential to determine whether a newly found Trojan or Ransomware has been active in the past and when it entered and what systems are still at risk.

Threat hunting demands multi-focal analysis at a granular level that sampling and surface flow analytics methods simply cannot provide. It is ineffective to be alerted to a potential threat without the context and consequence. The hacker who has gained control of your system is likely to install many backdoors on various interconnected systems to re-enter when you are unaware. As ransomware progresses it will continue to exploit weaknesses in infrastructures.

Often those most vulnerable are those who believe they have the visibility to detect.

Network Matrix of Knowledge

Post-mortem analysis of incidents is required, as is the ability to analyze historical behaviors, investigate intrusion scenarios and potential data breaches, qualify internal threats from employee misuse, and quantify external threats from bad actors.

The ability to perform network forensics at a granular level enables an organization to discover issues and high-risk communications happening in real-time, or those that occur over a prolonged period such as data leaks. While standard security devices such as firewalls, intrusion detection systems, packet brokers or packet recorders may already be in place, they lack the ability to record and report on every network traffic transfer over the long term.

According to industry analysts, enterprise IT security necessitates a shift away from prevention-centric security strategies toward information- and end-user-centric strategies focused on an infrastructure’s endpoints, as advanced targeted attacks are poised to render prevention-centric strategies obsolete; and today, with cyberwar a reality, this will impact business and government alike.

As every incident response action in today’s connected world includes a communications component, using an integrated cyber and network intelligence approach provides a superior and cost-effective way to significantly reduce the Mean Time To Know (MTTK) for a wide range of network issues or risky traffic, reducing wasted effort and associated direct and indirect costs.

Understanding the Shift Towards Flow-Based Metadata for Network and Cloud Cyber-Intelligence

  • The IT infrastructure is continually growing in complexity.
  • Deploying packet capture across an organization is costly and prohibitive especially when distributed or per segment.
  • “Blocking & tackling” (Prevention) has become the least effective measure.
  • Advanced targeted attacks are rendering prevention‑centric security strategies obsolete.
  • There is a trend towards information and end-user centric security strategies focused on an infrastructure’s endpoints.
  • Without making use of collective sharing of threat and attacker intelligence you will not be able to defend your business.

So what now?

If prevention isn’t working, what can IT still do about it?

  • In most cases, information must become the focal point of our information security strategies. IT can no longer impose invasive controls on users’ devices or the services they utilize.

Is there a way for organizations to gain a clear picture of what transpired after a security breach?

  • Detailed monitoring and recording of interactions with content and systems. Predictive AI Baselining, Granular Forensics, Anomaly Detection and Threat Intelligence ability is needed to quickly identify what other users were targeted, what systems were potentially compromised and what information was exfiltrated.

How do you identify attacks without signature-based mechanisms?

  • Pervasive monitoring enables you to identify meaningful deviations from normal behavior to infer malicious intent. Nefarious traffic can be identified by correlating real-time threat feeds with current flows, and machine learning can be used to discover outliers and repeat offenders (see the sketch below for the basic idea).
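A minimal sketch of that baseline idea, using a simple statistical rule rather than any particular product’s model: track each host’s traffic volume per interval and flag intervals that sit far above the host’s own history. The window size, warm-up length and threshold are assumptions.

from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 60      # past intervals kept per host (e.g. 60 one-minute buckets)
THRESHOLD = 4.0  # flag when volume exceeds mean + 4 standard deviations

history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(host, bytes_this_interval):
    """Return True when this interval is anomalous versus the host's own baseline."""
    past = history[host]
    anomalous = False
    if len(past) >= 10:  # need some history before judging
        mu, sigma = mean(past), stdev(past)
        anomalous = bytes_this_interval > mu + THRESHOLD * max(sigma, 1.0)
    past.append(bytes_this_interval)
    return anomalous

# A host that normally moves about 1 MB per interval suddenly moves 50 MB.
for _ in range(30):
    observe("10.0.0.42", 1_000_000)
print(observe("10.0.0.42", 50_000_000))  # -> True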

Summing up

Network security and network monitoring have gone a long way and jumped through all kinds of hoops to reach the point they have today. Unfortunately, through the years, cyber marketing has surpassed cyber solutions and we now have misconceptions that can do considerable damage to an organization.

The biggest threat is always the one you cannot see and hits you the hardest once it has established itself slowly and comfortably in a network undetected. Complete visibility can only be accessed through 100% collection and retention of all data traversing a network, otherwise even a single blindspot will affect the entire organization as if it were never protected to begin with. Just like a single weak link in a chain, cyber criminals will find the perfect access point for penetration.

Inadequate solutions that only retain the top 2% or 5% of network communications frequently cause false positives and red herrings. You need to have 100% access to your comms data for Full Visibility, but how can you be sure that you will?

You need free access to Full Visibility to unlock all your data, and an intelligent Predictive AI technology that can autonomously and quickly identify what’s not normal at both the macro and micro level of your network, cloud, servers, IoT devices and other network-connected assets.

Get complete visibility with CySight now.

5 Ways Flow Based Network Monitoring Solutions Need to Scale

Partial Truth Only Results in Assumptions

A common gripe among network engineers is that their current network monitoring solution doesn’t provide the depth of information needed to quickly ascertain the true cause of a network issue. Imagine reading a book that is missing 4 out of every 6 words: understanding the context would be hopeless and the book would have next to no value. Many teams have already over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons, or by relying on disparate systems that often don’t interface well with each other. There is also an often-mistaken belief that the network monitoring solutions they have invested in will suddenly give them the depth they need to gain the visibility required to manage complex networks.

A best-value approach to NDR, NTA and general network monitoring is to use a flow-based analytics methodology such as NetFlow, sFlow or IPFIX.

The Misconception & What Really Matters

In this market, it’s common for the industry to express a flow software’s scaling capability in flows-per-second. Using flows-per-second as a guide to scalability is misleading: it is often used to hide a flow collector’s inability to archive flow data by overstating its collection capability, and it lets vendors present a larger number simply because the rate is quoted per second rather than per minute. It’s important to look not only at flows-per-second but to understand the picture created once all the elements are used together. Much like a painting of a detailed landscape, the finer the brush and the more colors used, the more complete and truly detailed the picture of the landscape will be.

Granularity is the prime factor to focus on, specifically the granularity retained per minute (the flow retention rate). Naturally, speed is a significant and critical factor as well. The speed and flexibility of alerting, reporting, forensic depth, and diagnostics all play a strategic role, but all will be hampered by scalability limitations. Observing behavior under high flow variance or sudden bursts, and considering the number of devices and interfaces, makes the absolute significance of scalability in producing actionable insights and analytics clear. Without it, the ability to retain short-term and historical collections, which provide vital trace-back information, would be nonexistent. To provide the visibility needed for the ever-growing number of tasks analysts and engineers deal with daily, and to resolve issues to completion, NDR, NTA and general Network Monitoring System (NMS) software must be able to scale at every level of consumption and retention.
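A quick back-of-the-envelope comparison makes the gap concrete (the figures are invented for illustration, not benchmarks of any product): a collector advertising a high flows-per-second ingest rate can still retain only a few hundred records per minute.

ingest_rate_fps = 40_000      # advertised ingestion, flows per second (illustrative)
retained_per_minute = 500     # records actually written to the data warehouse (illustrative)

flows_seen_per_minute = ingest_rate_fps * 60          # 2,400,000
retention_ratio = retained_per_minute / flows_seen_per_minute

print(f"Flows seen per minute:     {flows_seen_per_minute:,}")
print(f"Flows retained per minute: {retained_per_minute:,}")
print(f"Retention ratio:           {retention_ratio:.4%}")   # roughly 0.02%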

How Should Monitoring Solutions Scale?

Flow-Based Network Detection and Response (NDR) / Network Traffic Analysis (NTA) software needs to scale in its collection of data in five ways:

Ingestion Capability – Also referred to as Collection, this is the number of flows that can be consumed by a single collector. It is a feat most monitoring solutions can accomplish; unfortunately, it is also the one they pride themselves on. It is an important ability, but only the first of several crucial capabilities that determine the quality of insights and intelligence of a monitoring system. Ingestion is only the ability to take data in; it does not mean “retention”, and therefore does very little on its own.

Digestion Capability – Also referred to as Retention, this is the number of flow records that can be retained by a single collector, and it is the most overlooked and difficult step in the network monitoring world. Digestion / flow retention rates are particularly critical to quantify, as they dictate the level of granularity that allows a flow-based NMS to deliver the visibility required for quality Predictive AI Baselining, Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis, and Data Retention compliance. Without retaining data, you cannot inspect it beyond the surface level, losing the value of network or cloud visibility.

Multitasking Processes – This pertains to the multitasking strength of a solution: its ability to scale by spreading the load of collection processes across multiple CPUs on a single server. This seems like an obvious approach to collection, but many systems have taken a linear, serial approach to ingesting multiple streams of flow data, which prevents their technologies from scaling when new flow-generating devices, interfaces, or endpoints are added, forcing you to deploy multiple instances of the solution, which becomes ineffective and expensive.

Clustered Collection – This refers to the ability of a flow-based solution to run a single data warehouse that takes its input from a cluster of collectors acting as a single unit, as a means of load balancing. In a large environment you typically have very large equipment sending massive amounts of data to collectors. To handle all that data, you must distribute the load across a number of collectors in a cluster, spread over multiple machines that make sense of it, rather than a single machine that will be overloaded. This ability enables organizations to scale up in data use instead of dropping data as they attempt to collect it.

Hierarchical Correlation – The purpose of hierarchical correlation is to take information from multiple databases and aggregate it into a single “super SIEM”. With the need to consume and retain huge amounts of data comes the need to manage and oversee that data in an intelligent way. Hierarchical correlation is designed to enable parallel analytics across distributed data warehouses and aggregate their results. In network monitoring, being overwhelmed with data to the point where you cannot find what you need is about as useful as being given all the books in the world and asked a single question that is answered in only one of them.
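A rough sketch of the hierarchical idea (not CySight’s implementation): each collector answers the same aggregate question over its own retained data, and a coordinating layer merges the partial results so the analyst effectively queries one logical warehouse. The stub collectors and query shape below are assumptions.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def query_collector(collector):
    """Ask one collector for bytes per source IP. Here a 'collector' is a stub callable
    returning pre-aggregated results; in practice this would be a remote query."""
    return Counter(collector())

def global_top_talkers(collectors, n=10):
    merged = Counter()
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(query_collector, collectors):
            merged.update(partial)  # hierarchical roll-up of per-collector aggregates
    return merged.most_common(n)

# Stubs standing in for two distributed flow data warehouses.
site_a = lambda: {"10.0.0.5": 9_000_000, "10.0.0.9": 1_200_000}
site_b = lambda: {"10.0.0.5": 4_000_000, "192.168.1.3": 700_000}
print(global_top_talkers([site_a, site_b]))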

Network traffic visibility is considerably improved by reducing network blind spots and providing qualified sources and reasons for the communications that impair business continuity. The capacity to capture flow at a finer level allows for new Predictive AI Baselining and Machine Learning application analysis and risk mitigation.

There are many critical abilities that a network monitoring solution must offer its users, and all of them are affected by whether or not the solution can scale.

Visibility is a range, not a binary; the question is not whether you have visibility, but whether you have enough of it to achieve your goals and keep your organization productive and safe.

How to Use a Network Behavior Analysis Tool to Your Advantage


Cybersecurity threats can come in many forms. They can easily slip through your network’s defenses if you let your guard down, even for a second. Protect your business by leveraging network behavior analysis (NBA). Implementing behavioral analysis tools helps organizations detect and stop suspicious activities within their networks before they happen and limit the damage if they do happen.

According to Accenture, improving network security is the top priority for most companies in 2021. In fact, the majority of them have increased their spending on network security by more than 25% in recent months.

With that, here are some ways to use network behavior anomaly detection tools to your advantage.

1. Leverage artificial intelligence

Nowadays, you can easily leverage artificial intelligence (AI) and machine learning (ML) in your network monitoring. In fact, various software systems utilize  AI diagnostics to enhance the detection of any anomalies within your network. Through its dynamic machine learning, it can quickly learn how to differentiate between normal and suspicious activities.

AI-powered NBA software can continuously adapt to new threats and discover outliers without much interference from you. This way, it can provide early warning on potential cyberattacks before they can get serious. This can include DDoS, Advanced Persistent Threats, and Anomalous traffic.

Hence, you should consider AI diagnostics one of the key criteria when evaluating network behavior analysis tools.

2. Take advantage of its automation

One of the biggest benefits of a network anomaly detection program is helping you save time and labor in detecting and resolving network issues. It is constantly watching your network, collecting data, and analyzing activities within it. It will then notify you and your network administrators of any threats or anomalies within your network.

Moreover, it can automatically mitigate some security threats from rogue applications to prevent sudden downtimes. It can also eliminate blind spots within your network security, fortifying your defenses and visibility. As a result, you or your administrators can qualify and detect network traffic passively.

3. Utilize NBA data and analytics

As more businesses become data-driven, big data gains momentum. It can aid your marketing teams in designing better campaigns or your sales team in increasing your business’ revenues. And through network behavior analysis, you can deep-mine large volumes of data from day-to-day operations.

For security engineers, big data analytics has become an effective defense against network attacks and vulnerabilities. It can give them deeper visibility into increasingly complex and larger network systems. 

Today’s advanced analytics platforms are designed to handle and process larger volumes of data. Furthermore, these platforms can learn and evolve from such data, resulting in stronger network behavior analytics and local threat detection.

4. Optimize network anomaly detection

A common issue with network monitoring solutions is their tendency to overburden network and security managers with false-positive readings. This is due to the lack of in-depth information to confirm the actual cause of a network issue. Hence, it is important to consistently optimize your network behavior analysis tool.

One way to do this is to use a flow-based analytics methodology for your network monitoring. You can do so with software like CySight, which uses artificial intelligence to analyze, segment, and learn from granular telemetry from your network infrastructure flows in real-time. It also enables you to configure and fine-tune your network behavior analysis for more accurate and in-depth monitoring.

5. Integrate with other security solutions

Enhance your experience with your network behavior analytics tool by integrating it with your existing security solutions, such as intrusion prevention systems (IPS), firewalls, and more.

Through integrations, you can cross-analyze data between security tools for better visibility and more in-depth insights on your network safety. Having several security systems working together at once means one can detect or mitigate certain behaviors that are undetectable for the other. This also ensures you cover all the bases and leave no room for vulnerabilities in your network.

Improving network security

As your business strives towards total digital transformation, you need to start investing in your network security. Threats can come in many forms. And once it slips past your guard, it might just be too late.

Network behavior analysis can help fortify your network security. It constantly monitors your network and traffic and notifies you of any suspicious activities or changes. This way, you can immediately mitigate any potential issues before they can get out of hand. Check out CySight to know more about the benefits of network behavior analysis.

But, of course, a tool can only be as good as the people using it. Hence, you must make sure that you hire the right people for your network security team. Consider recruiting someone with an online software engineering masters to help you strengthen your network.


Ref: Accenture Report

Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows per second a flawed way to measure a netflow collector’s capability?

Flows-per-second is often considered the primary yardstick for measuring the capability of a NetFlow analyzer’s flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means to give network professionals the data to make sense of the traffic on their network without having to resort to expensive per-segment packet sniffing tools.

A flow record contains, at minimum, the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap or other network gateway: Source IP, Destination IP, Source Port, Destination Port, Protocol, ToS, Ingress Interface and Egress Interface. Flow records are exported to a flow collector, where they are ingested and information oriented to the engineer’s purposes is displayed.
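For readers who like to see the shape of the data, here is a minimal sketch of such a record as a data structure; the field names follow the list above rather than any vendor’s schema.

from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Minimal fields carried by a typical NetFlow/IPFIX flow record."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int         # IANA protocol number, e.g. 6 = TCP, 17 = UDP
    tos: int              # Type of Service / DSCP marking
    ingress_ifindex: int  # interface the flow entered on
    egress_ifindex: int   # interface the flow left on
    bytes: int = 0
    packets: int = 0

# One DNS lookup as a collector might store it.
example = FlowRecord("10.1.1.5", "8.8.8.8", 51515, 53, 17, 0, 2, 3, 120, 2)
print(example)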

Measurement

Measurement has always been how the IT industry expresses power and competency. However, a formula used to reflect power and ability changes when a technology design undergoes a paradigm shift.

For example, when expressing how fast a computer is we used to measure the CPU clock speed, believing that the higher the clock speed, the more powerful the computer. However, when multi-core chips were introduced, clock speeds dropped yet CPUs in fact became more powerful. The primary clock-speed measurement became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading as it incorrectly reflects the actual power and capability of a flow collector to capture and process flow data and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure, and using it to quantify a product’s scalability is harder still. Various factors can dramatically impact the ability to collect flows and to retain sufficient flows to perform higher-end diagnostics.

It’s important to look not just at flows-per-second but at the granularity retained per minute (the flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; scalability when impacted by high flow variance, sudden bursts, or a large number of devices and interfaces; the speed of reporting over time; the ability to retain short-term and historical collections; and the confluence of these factors as it pertains to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine as appropriate granularity is needed to deliver the visibility required to perform Anomaly Detection, Network Forensics, Root Cause Analysis, Billing substantiation, Peering Analysis and Data retention compliance.

The higher the flows-per-second and the flow-variance the more challenging it becomes to achieve a high flow-retention-rate to archive and retain flow records in a data warehouse.

A vendor’s capability statement might reflect a high flows-per-second consumption ability, but many flow software tools have retention rate limitations by design.

It can mean that, irrespective of achieving a high flow collection rate, the NetFlow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the Top 10 bandwidth abusers. NetFlow products of this kind can be easily identified because they tend to offer benefits oriented primarily to identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course a very important benefit of a netflow analyzer. However, it has a marginal benefit today where a large amount of the abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar screen of many netflow analysis products.  Many abuses like DDoS, p2p, botnets and hacker or insider data exfiltration continue to occur and can at minimum impact the networking equipment and user experience. Lack of ability to quantify and understand small flows creates great risk leaving organizations exposed.
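A small simulation shows why a top-bytes retention cap hides exactly this traffic (the numbers are invented for illustration): when only the largest records per minute are archived, a flood of small flows, the typical shape of DDoS, scanning, p2p or slow exfiltration, vanishes from the archive almost entirely.

import random

random.seed(1)

# One illustrative minute of traffic: plenty of large legitimate transfers plus a
# flood of small flows from many distinct sources (a DDoS-like pattern).
large_flows = [("10.0.0.5", random.randint(1_000_000, 50_000_000)) for _ in range(800)]
small_flows = [(f"203.0.113.{i % 250}", random.randint(60, 400)) for i in range(20_000)]
all_flows = large_flows + small_flows

RETAINED_PER_MINUTE = 500  # a top-bytes retention cap, as some products apply
retained = sorted(all_flows, key=lambda f: f[1], reverse=True)[:RETAINED_PER_MINUTE]

small_retained = sum(1 for _, nbytes in retained if nbytes <= 400)
print(f"Small flows seen:     {len(small_flows):,}")   # 20,000
print(f"Small flows retained: {small_retained}")        # 0 - the attack never reaches the archive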

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product’s ability to collect and retain the critical information required in today’s world, where copious data has created severe network blind spots.

To qualify if a tool is really suitable for the purpose, you need to know more about the flows-per-second collection formula being provided by the vendor and some deeper investigation should be carried out to qualify the claims.

 

With this in mind here are 3 key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

  • Qualify whether the flows-per-second rate provided is a burst rate or a sustained rate.
  • Ask how the collection and retention rates might be affected if the flows have high flow variance (e.g. a DDoS attack).
  • Ask how collection, archiving and reporting are impacted when flow variance is increased by adding many devices, interfaces and distinct IPv4/IPv6 conversations, and test what degradation in speed you can expect after it has been recording for some time.
  • Ask how the collection and retention rates might change when adding additional fields or measurements to the flow template (e.g. MPLS, MAC Address, URL, Latency).

  2. How many flow records can be retained per minute?

  • Ask how the actual number of records inserted into the data warehouse per minute can be verified for short-term and historical collection.
  • Ask what happens to the flows that were not retained.
  • Ask what the flow retention logic is (e.g. Top Bytes, First N).

  3. What information granularity is retained, both short-term and historically?

  • Does the data’s time granularity degrade as the data ages (e.g. 1 day of data retained per minute, 2 days retained per hour, 5 days retained per quarter)?
  • Can you control the granularity, and if so, for how long?

 

Remember – Rate of collection does not translate to information retention.

Do you know what’s really stored in the software’s database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that information retention granularity that provides a flow product’s benefits.


5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or even aware of.  And why should they?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered.  So what are the benefits?

1.  Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if it gets overloaded.  Downtime of any kind is just not something that business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updating as users save the files.  This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2.  Network speed – This is one of the most important and easily quantified aspects of managing netflow.  It will affect every user on the network constantly, and anything that slows down users means either more work hours or delays.  Obviously, neither of these is a good problem to have.  Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client; speed is of paramount importance.

3.  Scalability – Almost every business wants to grow, and nowhere is that more true than the tech sector.  As the business grows, the network will have to grow with it to support more employees and clients.  By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed.  As performance degrades, it’s very easy to set thresholds that show when the network needs to be upgraded or enlarged.

4.  Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect.  An unsecured network is worse than a useless network, and data breaches can ruin a company.  So how does this play into performance management?

By monitoring netflow performance, it’s easy to see where the most resources are being used.  Many security attacks drain resources, so if there are resource spikes in unusual areas it can point to a security flaw.  With proper software, these issues can be not only monitored, but also recorded and corrected.

5.  Usability – Unfortunately, not all employees have a working knowledge of how networks operate.  In fact, as many in IT support will attest, most employees aren’t tech savvy.  However, most employees will need to use the network as part of their daily work.  This conflict is why usability is so important.  The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly.  Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be?  Are you able to monitor the network’s performance and flow,  or do network forensics to determine where issues are?  Don’t try to tackle all of this on your own, contact us and let us help you support your business with the best network monitoring for your specific needs.


5 Benefits of NetFlow Performance Monitoring

NetFlow can provide the visibility that reduces the risks to business continuity associated with poor performance, downtime and security threats. As organizations continue to rely more and more on computing power to run their business, transparency of business applications across the network and interactions with external systems and users is critical and NetFlow monitoring simplifies detection, diagnosis, and resolution of network issues.

Traditional SNMP tools are too limited in their collection capability; the data extracted is very coarse and does not provide sufficient visibility. Packet capture tools can extract high granularity from network traffic, but they are tied to a port mirror or a per-segment capture and are built for specialized short-term captures rather than long historical analyses. Packet-based Predictive AI Baselining analytics tools also traditionally do not provide the ability to perform forensic analysis of various sub-sections or aggregated views, and therefore suffer from the same lack of visibility and context.

By analyzing NetFlow data, a network engineer can discover the root cause of congestion and find the users, applications and protocols that are causing interfaces to exceed utilization. They can determine the Type of Service (ToS) or Class of Service (QoS) and group traffic with application mapping such as Network Based Application Recognition (NBAR) to identify traffic types, for example when peer-to-peer traffic tries to hide itself behind web services.
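A simplified sketch of that style of analysis, assuming flow records shaped like the fields described earlier and an application label supplied by something like NBAR: group retained flows for the congested interface and rank the applications and hosts driving utilization.

from collections import Counter

def top_consumers(flows, ifindex, n=5):
    """flows: iterable of dicts with 'ingress_ifindex', 'src_ip', 'app' and 'bytes' keys."""
    by_app, by_host = Counter(), Counter()
    for f in flows:
        if f["ingress_ifindex"] != ifindex:
            continue
        by_app[f["app"]] += f["bytes"]
        by_host[f["src_ip"]] += f["bytes"]
    return by_app.most_common(n), by_host.most_common(n)

flows = [
    {"ingress_ifindex": 2, "src_ip": "10.1.1.5", "app": "p2p-over-http", "bytes": 800_000_000},
    {"ingress_ifindex": 2, "src_ip": "10.1.1.9", "app": "backup",        "bytes": 300_000_000},
    {"ingress_ifindex": 3, "src_ip": "10.1.2.2", "app": "web",           "bytes": 50_000_000},
]
apps, hosts = top_consumers(flows, ifindex=2)
print("Top applications on interface 2:", apps)
print("Top talkers on interface 2:     ", hosts)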

NetFlow allows granular and accurate traffic measurements as well as high-level aggregated traffic collection that can assist in identifying excessive bandwidth utilization or unexpected application traffic. NetFlow enables networks to perform IP traffic flow analyses without deploying external probes, making traffic analytics and Predictive AI Baselining cheap to deploy even in large network environments.

The benefits of a properly managed network may not have been fully considered by business managers.  So what are the benefits of NetFlow performance monitoring?

Avoiding Downtime

Service outages can cause simple frustrations, such as issues saving open files to a suddenly inaccessible server, or cause delays to business processes that impact revenue. Downtime that affects customers can result in negative customer experiences and lost revenue.

Network Speed

Slow network links cause severe user frustration that impacts productivity and causes delays.

Scalability

As the business grows, the network needs to grow with it to support more computing processes, employees and clients. As performance degrades, it is easy to set thresholds that show when, where and why the network is exceeding its capability. By having historical performance data for the network, it is very easy to see when or where it is being stretched too thin or overwhelmed, and to easily substantiate upgrades.

Security

Security Predictive AI Baselining analytics is arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. By monitoring NetFlow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so if there are resource spikes in unusual areas it can point to a security flaw. With advanced NetFlow diagnostics software, these issues can be not only monitored, but also recorded and corrected.

Peering

NetFlow Predictive AI Baselining analytics allows organizations to embrace progressive IT trends and analyze peering paths or virtualized paths such as Multiprotocol Label Switching (MPLS) and Virtual Local Area Networks (VLAN), make use of IPv6, Next Hops, BGP and MAC Addresses, or link internal IPs to Network Address Translated (NAT) IP addresses.

MPLS creates new challenges when it comes to network performance monitoring and security, as communication can occur directly without passing through an Intrusion Detection System (IDS). Organizations have two options to address the issue: implement a sensor or probe at each site, which is costly and fraught with management hassles; or turn on NetFlow at each MPLS site and send MPLS flows to a NetFlow collector, which is a very cost-effective option because it makes use of existing network infrastructure.

With NetFlow, network performance issues are easily resolved because engineers are given in-depth visibility that enables troubleshooting – all the way down the network to a specific user. The solution starts by presenting the interfaces being used.

An IT engineer can click on the interfaces to determine the root cause of a usage spike, as well as the actual conversation and host pairs that are dominating bandwidth and the associated user information. Then, all it takes is a quick phone call to the end-user, instructing them to shut down resource-hogging activities, as they’re impairing network performance.

IT departments are tasked with responding to end-user complaints, which often relate to network sluggishness. However, many times an end-user’s experience has more to do with an application or server failing to respond within the expected time period, which can point to a latency, Quality of Service (QoS) or server issue. Traffic latency can be analyzed if supported in the NetFlow version. Statistics such as Round Trip Time (RTT) and Server Response Time (SRT) can be diagnosed to identify whether a network problem is due to an application or service experiencing slow response times.

Baselines of RTT and SRT can be learned to determine whether response time is consistent or is spiking over a time period. By observing latency over time, alarms can be generated when it increases above the normal threshold.

TCP Congestion flags can also serve as an early warning system where Latency is unavailable.
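As a rough illustration of the latency baselining described above, the sketch below learns a mean and spread from historical RTT/SRT samples and raises an alarm on a large deviation, with a simple check of the exported TCP flags byte as a fallback signal. The thresholds and field handling are assumptions, not a description of any specific product's algorithm.

```python
import statistics

def latency_alarm(samples_ms, current_ms, sigma=3.0):
    """Flag a latency spike against a learned baseline.

    `samples_ms` is a history of RTT or SRT measurements taken from
    latency-capable flow exports; the 3-sigma threshold is illustrative.
    """
    baseline = statistics.mean(samples_ms)
    spread = statistics.pstdev(samples_ms) or 1.0   # avoid zero spread
    return current_ms > baseline + sigma * spread

def congestion_hint(tcp_flags):
    """Fallback early warning when latency fields are unavailable:
    look for the ECN-Echo (0x40) or CWR (0x80) bits in the cumulative
    TCP flags byte carried in many flow exports."""
    return bool(tcp_flags & 0xC0)
```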

NetFlow application performance monitoring provides the context around an event or anomaly within a host to quickly identify the root cause of issues and improve Mean Time To Know (MTTK) and Mean Time To Repair (MTTR).

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using that packet as the standard for the rest of the flow. It has two variants which are designed to allow for more flexibility when it comes to implementing NetFlow on a network.

NetFlow was originally developed by Cisco in the mid-1990s as a packet switching technology for Cisco routers and was implemented in IOS 11.x.

The concept was that instead of having to inspect each packet in a “flow”, the device need only inspect the first packet and create a “NetFlow switching record”, alternatively named a “route cache record”.

After that record was created, further packets in the same flow would not need to be inspected; they could just be forwarded based on the determination from the first packet. While this idea was forward-thinking, it had many drawbacks which made it unsuitable for larger internet backbone routers.
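A toy sketch of that flow-cache idea, assuming packets arrive as simple dictionaries: the first packet of a flow creates a cache entry keyed by its identifying fields, and later packets only update counters instead of being re-inspected.

```python
# Illustrative only: a minimal "route cache" keyed by the classic 5-tuple.
flow_cache = {}

def process_packet(pkt):
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
           pkt["dst_port"], pkt["protocol"])          # flow-identifying fields
    entry = flow_cache.get(key)
    if entry is None:
        # First packet of the flow: full inspection happens once, here.
        flow_cache[key] = {"packets": 1, "bytes": pkt["length"]}
    else:
        # Subsequent packets in the same flow: just account for them.
        entry["packets"] += 1
        entry["bytes"] += pkt["length"]
```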

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about what IP addresses or application ports were “inside” the traffic was to deploy packet sniffing systems which would sit inline (or connect to SPAN/mirror ports) and “sniff” the traffic.  This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
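For illustration, a bare-bones collector for the fixed-format NetFlow v5 export might look like the sketch below. It assumes export to UDP port 2055 and prints a few fields from each record; a real collector would also handle v9/IPFIX templates, sequence numbers and long-term storage.

```python
import socket
import struct

V5_HEADER = struct.Struct("!HHIIIIBBH")                 # 24-byte NetFlow v5 header
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")   # 48-byte v5 flow record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))        # a common, but configurable, export port

while True:
    data, exporter = sock.recvfrom(65535)
    version, count, *_ = V5_HEADER.unpack_from(data, 0)
    if version != 5:
        continue                    # v9/IPFIX would need template handling instead
    for i in range(count):
        rec = V5_RECORD.unpack_from(data, V5_HEADER.size + i * V5_RECORD.size)
        src, dst = socket.inet_ntoa(rec[0]), socket.inet_ntoa(rec[1])
        octets = rec[6]             # dOctets field of the v5 record
        print(exporter[0], src, dst, octets)   # hand off to storage/analysis here
```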

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Seven Reasons To Analyze Network Traffic With NetFlow

NetFlow allows you to keep an eye on traffic and transactions that occur on your network. NetFlow can detect unusual traffic, a request for a malicious destination or a download of a large file. NetFlow analysis helps you see what users are doing, gives you an idea of how your bandwidth is used and can help you improve your network, besides protecting you from a number of attacks.

There are many reasons to analyze network traffic with NetFlow, including making your system more efficient as well as keeping it safe. Here are some of the reasons behind many organizations’ adoption of NetFlow analysis:

  • Analyze all your network traffic. NetFlow allows you to keep track of all the connections occurring on your network, including the ones hidden by a rootkit. You can review all the ports and external hosts an IP address connected to within a specific period of time. You can also collect data to get an overview of how your network is used.
  • Track bandwidth use. You can use NetFlow to track bandwidth use and see reports on average usage. This can help you determine when spikes are likely to occur so that you can plan accordingly. Tracking bandwidth allows you to better understand traffic patterns, and this information can be used to identify any unusual traffic patterns. You can also easily identify unusual surges caused by a user downloading a large file or by a DDoS attack.
  • Keep your network safe from DDoS attacks. These attacks target your network by overloading your servers with more traffic than they can handle. NetFlow can detect this type of unusual surge in traffic as well as identify the botnet that is controlling the attack and the infected computers following the botnet’s orders and sending traffic to your network. You can easily block the botnet and the network of infected computers to prevent future attacks, besides stopping the attack in progress (a minimal detection sketch follows this list).
  • Protect your network from malware. Even the safest network can still be exposed to malware via users connecting from home or via people bringing their mobile devices to work. A bot present on a home computer or on a smartphone could access your network, but NetFlow will detect this type of abnormal traffic and, with auto-mitigation tools, automatically block it.
  • Optimize your cloud. By tracking bandwidth use, NetFlow can show you which applications slow down your cloud and give you an overview of how your cloud is used. You can also track performance to optimize your cloud and make sure your cloud service provider is offering a cloud solution that corresponds to what they advertised.
  • Monitor users. Everyone brings their own smartphone to work nowadays and might use it for purposes other than work. Company data may be accessible by insiders who have legitimate access but have an inappropriate agenda, downloading and sharing sensitive data with outside sources. You can keep track of how much bandwidth is used for data leakage or personal activities, such as using Facebook during work hours.
  • Data Retention Compliance. NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can help businesses and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.
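As a rough sketch of the DDoS surge detection mentioned in the list above, the following compares per-destination packet rates derived from flow records against a learned baseline. The field names and the multiplier are illustrative assumptions, not a specific product's detection logic.

```python
from collections import Counter

def surge_candidates(flow_records, baseline_pps, factor=10):
    """Flag destinations receiving an abnormal packet rate.

    `flow_records` is an iterable of dicts with illustrative keys
    'dst', 'packets' and 'duration_s'; `baseline_pps` maps each
    destination to its normal packets-per-second rate.
    """
    observed = Counter()
    for rec in flow_records:
        observed[rec["dst"]] += rec["packets"] / max(rec["duration_s"], 1)
    # A destination receiving many times its usual packet rate is a
    # candidate victim worth investigating (or auto-mitigating).
    return {dst: pps for dst, pps in observed.items()
            if pps > factor * baseline_pps.get(dst, 1.0)}
```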

NetFlow is an easy way to monitor your network and provides you with several advantages, including making your network safer and collecting the data you need to optimize it. Having access to a comprehensive overview of your network from a single pane of glass makes monitoring your network easy and enables you to check what is going on with your network with a simple glance.

CySight solutions take the extra step to make life far easier for the network and security professional with smart alerts, actionable network intelligence, scalability and automated diagnostics and mitigation for a complete technology package.

CySight can provide you with the right tools to analyze traffic, monitor your network, protect it and optimize it. Contact us to learn more about NetFlow and how you can get the most out of this amazing tool.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

3 Ways Anomaly Detection Enhances Network Monitoring

With the increasing abstraction of IT services beyond the traditional server room, computing environments have evolved to be more efficient and also far more complex. Virtualization, mobile device technology, hosted infrastructure, Internet ubiquity and a host of other technologies are redefining the IT landscape.

From a cybersecurity standpoint, the question is how best to manage the growing complexity of environments and the changes in network behavior that come with every introduction of new technology.

In this blog, we’ll take a look at how anomaly detection-based systems are adding an invaluable weapon to Security Analysts’ arsenal in the battle against known – and unknown – security risks that threaten the stability of today’s complex enterprise environments.

Put your network traffic behavior into perspective

By continually analyzing traffic patterns at various intersections and time frames, performance and security baselines can be established, against which potential malicious activity is monitored and managed. But with large swathes of data traversing the average enterprise environment at any given moment, detecting abnormal network behavior can be difficult.

Through filtering techniques and algorithms based on live and historical data analysis, anomaly detection systems are capable of detecting even the most subtly crafted malicious software that may pose as normal network behavior. Also, anomaly-based systems employ machine-learning capabilities to learn about new traffic as it is introduced and provide greater context to how data traverses the wire, thus increasing their ability to identify security threats as they are introduced.

NetFlow is a popular tool used in the collection of network traffic for building accurate performance and cybersecurity baselines with which to distinguish normal network activity patterns from potentially alarming network behavior.
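A minimal sketch of such a baseline, assuming traffic volumes have already been binned into a regular time series: a rolling window supplies the "normal" range, and a simple z-score test stands in for the machine-learning models a production anomaly-detection system would use.

```python
from collections import deque
import statistics

class TrafficBaseline:
    """Rolling baseline for one metric, e.g. bytes per minute on a link."""

    def __init__(self, window=1440):            # e.g. one day of 1-minute bins
        self.history = deque(maxlen=window)

    def is_anomalous(self, value, sigma=3.0):
        if len(self.history) < 30:              # not enough history yet
            self.history.append(value)
            return False
        mean = statistics.mean(self.history)
        spread = statistics.pstdev(self.history) or 1.0
        self.history.append(value)
        # Flag values far outside the learned range in either direction.
        return abs(value - mean) > sigma * spread
```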

Anomaly detection places Security Analysts on the front foot

An anomaly is defined as an action or event that is outside of the norm. But when a definition of what is normal is absent, loopholes can easily be exploited. This is often the case with signature-based detection systems that rely on a database of pre-determined virus signatures that are based on known threats. In the event of a new and yet unknown security threat, signature-based systems are only as effective as their ability to respond to, analyze and neutralize such new threats.

Since signatures do work well against known attacks, they are by no means useless in defending your network. But signature-based systems lack the flexibility of anomaly-based systems in the sense that they are incapable of detecting new threats. This is one of the reasons signature-based systems are typically complemented by some iteration of a flow-based anomaly detection system.

Anomaly based systems are designed to grow alongside your network

The chief strength of anomaly detection systems is that they allow Network Operation Centers (NOCs) to adapt their security apparatus to the demands of the day. With threats growing in number and sophistication, detection systems that can discover, learn about and provide preventative methodologies are the ideal tools with which to combat the cybersecurity threats of tomorrow. NetFlow anomaly detection with automated diagnostics does exactly this by applying machine learning techniques to network threat detection and, in so doing, automating much of the detection aspect of security management while allowing Security Analysts to focus on the prevention aspect in their ongoing endeavors to secure their information and technological investments.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Two Ways Networks Are Transformed By NetFlow

According to an article in techtarget.com, “Your routers and switches can yield a mother lode of information about your network–if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs generated by your network is a lot like mining for gold, and punching random holes to look for a few nuggets of information isn’t very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by a NetFlow traffic reporting protocol yields specific information that you can easily sort, view and analyze into what you want to use or need.

In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these “badly behaving” networks by investigating the data that is being generated by the routers, switches and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest internet threat, those things often distract IT pros from the real, everyday threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of lots of different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols to respond to unpredicted failures and misconfigurations is a workable solution, these out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices monitoring functions and gathering data, retrieving and utilizing the NetFlow information makes tracing and repairing the problem of misconfigurations possible, easier and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow, and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, “what’s happening right now” view that other systems cannot provide. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provides rapid detection of breaches that other technologies often miss.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

How Traffic Accounting Keeps You One Step Ahead Of The Competition

IT has steadily evolved from a service and operational delivery mechanism to a strategic business investment. Suffice it to say that the business world and technology have become so intertwined that it’s unsurprising many leading companies within their respective industries attribute their success largely to their adoptive stance toward innovation.

Network Managers know that much of their company’s ability to outmaneuver the competition depends to a large extent on IT Ops’ ability to deliver world-class services. This brings traffic accounting into the conversation, since a realistic and measured view of your current and future traffic flows is central to building an environment in which all the facets involved in its growth, stability and performance are continually addressed.

In this blog, we’ll take a look at how traffic accounting places your network operations center (NOC) team on the front-foot in their objective to optimize the flow of your business’ most precious cargo – its data.

All roads lead to performance baselining 

Performance baselines lay the foundation for network-wide traffic accounting against predetermined environment thresholds. They also aid IT Ops teams in planning for network growth and expansion undertakings. Baseline information typically contains statistics on network utilization, traffic components, conversation and address statistics, packet information and key device metrics.

It serves as your network’s barometer by informing you when anomalies such as excessive bandwidth consumption and other causes of bottlenecks occur. For example, root causes of performance issues can easily creep into an environment unnoticed, such as a recent update to a business-critical application that causes significant spikes in network utilization. Armed with a comprehensive set of baseline statistics and data that allows Network Performance and Security Specialists to measure, compare and analyze network metrics, root causes such as these can be identified with greater efficiency.
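As an illustration only, a baseline record of this kind might be modeled along the following lines; the field names mirror the categories listed above, and the margin check reflects the application-update example. All names and numbers are assumptions, not a product schema.

```python
from dataclasses import dataclass, field

@dataclass
class InterfaceBaseline:
    """One possible shape for a per-interface baseline record."""
    utilization_pct_p95: float                              # 95th-percentile link utilization
    top_applications: dict = field(default_factory=dict)    # app -> share of bytes
    top_conversations: list = field(default_factory=list)   # (src, dst, bytes) tuples
    avg_packet_size: float = 0.0
    device_cpu_pct: float = 0.0

def exceeds_baseline(current_util_pct: float, baseline: InterfaceBaseline,
                     margin: float = 1.2) -> bool:
    # A utilization reading well above the learned 95th percentile is the
    # kind of deviation worth investigating (e.g. the application-update case).
    return current_util_pct > baseline.utilization_pct_p95 * margin
```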

In broader applications, baselining gives Network Engineers a high-level view of their environments, thereby allowing them to configure Quality of Service (QoS) parameters, plan for upgrades and expansions, detect and monitor trends, perform peering analysis and carry out a bevy of other functions.

Traffic accounting brings your future network into focus

With new-generation technologies such as the cloud, resource virtualization, as-a-service platforms and mobility revolutionizing the networks of yesteryear, capacity planning has taken on a new level of significance. Network monitoring systems (NMS) need to meet the demands of the new, complex, hybrid systems that are the order of the day. Thankfully, technologies such as NetFlow have evolved steadily over the years to address the monitoring demands of modern networks. NetFlow accounting is a reliable way to peer through the wire and get a deeper insight into the traffic that traverses your environment. Many Network Engineers and Security Specialists will agree that their understanding of their environments hinges on the level of insight they glean from their monitoring solutions.

This makes NetFlow an ideal traffic accounting medium, since it easily collects and exports data from virtually any connected device for analysis by a collector such as CySight. The technology’s standing in the industry has made it the “go-to” solution for curating detailed, insightful and actionable metrics that move IT organizations from a reactive to a proactive stance towards network optimization.

Traffic accounting’s influence on business productivity and performance

As organizations become increasingly technology-centric in their business strategies, their reliance on networks that consistently perform at peak will increase accordingly. This places new pressures on Network Performance and Security Teams to conduct iterative performance and capacity testing to contextualize their environment’s ability to perform when it matters most. NetFlow’s ability to provide contextual insights based on live and historic data means Network Operation Centers (NOCs) are able to react to immediate performance hindrances and also predict, with a fair level of accuracy, what the challenges of tomorrow may hold. And this is worth gold in the context of the ever-changing and expanding networking landscape.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Benefits of a NetFlow Performance Deployment in Complex Environments

Since no two environments are identical and no network remains stagnant in Network Monitoring today, the only thing we can expect is the unexpected!

The network has become a living dynamic and complex environment that requires a flexible approach to monitor and analyze. Network and Security teams are under pressure to go beyond simple monitoring techniques to quickly identify the root causes of issues, de-risk hidden threats and to monitor network-connected things.

A solution’s flexibility refers to not only its interface but also the overall design.

From a user interface perspective, flexibility refers to the ability to perform analysis on any combination of data fields with multiple options to view, sort, cut and count the analysis.

From a deployment perspective, flexibility means options for deployment on Linux or Windows environments and the ability to digest all traffic or scale collection with tuning techniques that don’t fully obfuscate the data.

Acquiring flexible tools is a superb investment as they enrich and facilitate local knowledge retention. They enable multiple network-centric teams to benefit from a shared toolset, and the business begins to leverage the power of big data Predictive AI Baselining analytics that, over time, grows and extends beyond the tool’s original requirements as new information becomes visible.

What makes a Network Management System (NMS) truly scalable is its ability to analyze all the far reaches of the enterprise using a single interface with all layers of complexity to the data abstracted.

NetFlow, sFlow, IPFIX and their variants are all about abstracting routers, switches, firewalls or taps from multiple vendors into a single searchable network intelligence.

It is critical to ensure that abstraction layers are independently scalable to enable efficient collection and be sufficiently flexible to enable multiple deployment architectures to provide low-impact, cost-effective solutions that are simple to deploy and manage.

To simplify deployment and management it has to work out of the box and be self-configuring and self-healing. Many flow monitoring systems require a lot of time to configure or maintain, making them expensive to deploy and hard to use.

A flow-based NMS needs to meet various alerting, Predictive AI Baselining analytics, and architectural deployment demands. It needs to adapt to rapid change, pressure on enterprise infrastructure and possess the agility needed to adapt at short notice.

Agility in provisioning services, rectifying issues, customizing and delivering alerts and reports and facilitating template creation, early threat detection and effective risk mitigation, all assist in propelling the business forward and are the hallmarks of a flexible network management methodology.

Here are some examples that require a flexible approach to network monitoring:

  • DDoS attack behavior changes randomly
  • Analyze Interface usage by Device by Datacenter by Region
  • A new unknown social networking application suddenly becomes popular
  • Compliance drives need to discover Insider threats and data leakages occurring under the radar
  • Companies grow and move offices and functions
  • Laws change requiring data retention suitable for legal compliance
  • New processes create new unplanned pressures
  • New applications cause unexpected data surges
  • A vetted application creates unanticipated denials of service
  • Systems and services become infected with new kinds of malicious agents
  • Virtualization demands abruptly increase
  • Services and resources require a bit-tax or 95th percentile billing model (see the sketch after this list)
  • Analyzing flexible NetFlow fields supported by different device vendors such as IPv6, MPLS, MAC, BGP, VPN, NAT paths, DNS, URL, Latency etc.
  • Internet of Things (IoT) become part of the network ecosystem and require ongoing visibility to manage
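To illustrate the billing item above, here is a small sketch of the classic 95th-percentile calculation: samples are sorted, the top 5% are ignored, and the customer is billed at the highest remaining sample. The sample interval and rate are assumptions for illustration.

```python
def percentile_95_bill(samples_mbps, rate_per_mbps):
    """Classic 95th-percentile ("burstable") billing sketch.

    `samples_mbps` are periodic utilization samples (commonly 5-minute
    averages over a billing month); the top 5% of samples are discarded
    and the bill is based on the highest remaining sample.
    """
    ordered = sorted(samples_mbps)
    index = max(int(len(ordered) * 0.95) - 1, 0)
    billable_mbps = ordered[index]
    return billable_mbps, billable_mbps * rate_per_mbps

# Example: bill, cost = percentile_95_bill(monthly_samples, rate_per_mbps=2.50)
```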

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Turbocharged Ransomware Detection using NetFlow

Your network has already, or soon will, be infiltrated

To win the war on cyber extortion, you must first have visibility into your network, and it is imperative to have the intelligent context to be able to find threats inside your data.

Ransomware has become the most prevalent trojan, but other threats such as spyware, adware, scareware, other malware, worms, viruses, and phishing all play a role in delivering ransomware to your network, server, laptop, phone, or IoT device and can be damaging in their own right.

CySight hunts them all, but in this article, OUR FOCUS IS ON RANSOMWARE and how to try to identify it before it causes financial and social damages.

Given what we already know and that more is still being learned, it makes good sense to investigate our unique solution.

What Is the Impact of Ransomware?

It’s not just your home laptop at risk; entire enterprises can be, and are being, held to ransom – e.g. the NotPetya ransomware attack on Maersk required a full re-install of 4,000 servers, which the company announced resulted in a loss of $300 million.

The spread and popularity of ransomware, with damages up from $11.5B in 2019 to $20B in 2020 and still rising, is outgrowing legacy solutions that cannot identify zero-day infiltration, at-risk interconnected systems, and related data exfiltration.

Attacks are set to have huge growth in 2021 and beyond!

The Ransomware Protection Racket?

Ransomware can be like the old analog-world protection racket:

  • You pay once, but they’ll come back later asking for more.
  • You might pay but never get your data back.
  • They could give you the decryption key to get your data back, but sell the key to other hackers along with your corporate secrets.
  • They understand the value of reputation and wanting to keep breaches private.
  • They’ll go after especially important and critical infrastructure and services.
  • It is not just the enterprise that is at risk, but the interconnected components like ISPs and BYO personal devices.
  • The bigger you are the more the hackers think they can get and will try to!

A single infection could cost an organization thousands of dollars (or millions!).

Evolved “double extortion”

It is important to take cognizance of the rise of double extortion attacks as criminals have come to realize that encrypting your files and stipulating a ransom to get back access to your data may be mitigated by backup strategies.

Decryption keys are good for business!

Like any good Protection Racket, Ransomware criminals understand that in order to make money they need to establish a certain decorum. By ensuring a customer can get a key to decrypt after paying the ransom they build a level of “trust” that it just takes money to get your files back. Pricing is usually set at a level where the Ransomer feels they can extract payment at the level the ransomed can afford. This allows continuity and repeat business.

When not paid, they impact reputation.

Hackers have also become experts in the art of doxing, which means gathering sensitive information about a target company or person and threatening to make it public if their terms are not met.

This has been a strategy for some time, but it is becoming more prevalent for an attacker to exfiltrate a copy of the data as well as encrypting it. In that way the attacker prevents access to your data and can also leverage your sensitive information, going well beyond the simple lock-and-key protection racket and taking extortionware to a whole new level that can create years of ongoing demands.

Infrastructure, key servers, critical services.

As ransomware progresses it will continue to exploit weaknesses in infrastructures. Often those most vulnerable are those who believe they have the visibility to detect it.

Ransomware is a long game often requiring other trojans or delivery methods to slowly infiltrate corporations of all types. They sleep, waiting for the right time to be activated when you least expect it.

ISP / Corporate / Industrial / SMB / Person

There are literally hundreds of ransomware variants targeted at both huge and sensitive corporate or government infrastructures that activate and encrypt on botnet instruction, or when a set of circumstances activates the algorithms. They make use of payment gateways that are almost impossible to break and track.

It’s not all doom and gloom if you catch it early, so it makes good sense to investigate our unique solution.

Threat Hunting (Ransomware)

The Postmortem Snowball Impacts!

When hunting for Ransomware there is often a snowball-like effect in terms of effort and impact.

You might start looking to answer questions like where the ransomware came from, who did it, when it happened, whether there is a patch to protect you in future, and so on.

But you need to know more detail than that to judge your response:

  • The nature and classification of the threats are vital to know. E.g. is it scareware with no real impact? Are they just trying to sell lousy protection software? Or is it real criminal intent and your data will be gone?
  • How serious is the damage we are talking about, and how widely has the problem spread?
  • What’s the cost to operations?
  • If you don’t remediate or pay, what is the less quantifiable but very important reputation impact to your business?
  • After the fact, what employee re-training is needed?

  • What is the mindset of your Security organization?
    • Do they have all the traditional enterprise security measures in place and are ‘Certain’ they know everything? (This is the worst-case scenario.)
    • Or are they aware of their ‘Limited’ ability to find ransomware, but don’t have the time or tools to deal with it.
    • Do they rely on backups, updates, and patching, which are also good practices but insufficient?
    • What if the hackers encrypt your backup drives?
    • Is the organization deluded? Are they ‘aware’, or do they understand that they are ‘blind’?

Answering all these questions takes more and more time and costly manpower, especially if you lack the tools to effectively undertake such threat hunting.

IN THE CURRENT INFECTIOUS CLIMATE WE’VE ALL BECOME SO SENSITIZED TO THE FACT THAT THE TINY RANSOMWARE AND TROJANS THAT WE DON’T SEE CAN POSE THE BIGGEST THREATS AND INVISIBLE DANGERS!

Deep insight into the granular nature of how systems, people, processes, applications, and things have communicated and are communicating is critical. In our attempts to discover hidden threats we need to deploy granular tools to collect, monitor, and make known the invisible dangers that can have real-world impacts.

It is often overlooked, but it is not a secret, that even well-known tools have serious shortcomings and are limited in their ability to retain complete records. They don’t really end up providing visibility into the blind spots they alluded to. In fact, we found that in medium to large networking environments over 95% of network and cyber visibility tools struggle to retain as little as 2% to 5% of all information collected, and this results in completely missed diagnostics and severely misleading analytics that cause misdiagnosis and risk!

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention. This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence.

From a forensics perspective, you would appreciate that you can only analyze the data you retain, and with large and growing network and cloud data flows most tools (regardless of their marketing claims) actually cannot scale in retention and choose to drop records, keeping only what they believe is salient data.

Imputed outcome data leads to misleading results, and missing data causes high risk and loss!

Big Data is heavy to store and lift.

We have seen many engineers try to build scripts to attain the missing visibility and do a lot of heavy lifting, only to finally come to the realization that no matter how much lifting you do, if the data is not retained then you simply cannot analyze it.

Don’t get me wrong, we love the multitude of point solutions in our market that each tries to address a specific need – and there are a lot of them. DDoS detectors, End-Point threat discovery, Performance management, Email phishing detectors, Deep Packet Inspectors, and more.

DPI is a great concept, but it is well known that Deep Packet Inspection (DPI) solutions struggle to maintain both a heavy traffic load and information extraction. They force customers to choose one or the other.

Each of these tools in their own right has value but they are difficult and expensive to integrate, maintain and train.

The data sources are often the same, so using the right tool and an integrated approach for flow data allows SecOps and NetOps to reduce the cost overheads of maintaining multiple products and multiplies the value of each component.

Smart analysts are realizing that combining Network and Cyber Intelligence using Flow management today with the capability to access long-term granular intelligence is a seriously powerful enabler and a real game-changer when detecting Ransomware and finding exfiltration and related at-risk systems.

So how exactly do you go about turbocharging your Flow and Cloud metadata?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques thereby solving the issues of archiving huge flow feeds in the smallest footprint and the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property. Additionally, ransomware and other cyber-attacks continue to impact businesses. So you need both machine learning and end-point threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple yet potentially the most powerful tool to find ransomware and other network and cloud issues. All the footprints of all communications are carried in the flow data, and given the right tools you could retain all the evidence of an attack, infiltration or exfiltration.

However, not all flow analytic solutions are created equal, and due to the inability to scale in retention the NetFlow ideal becomes unattainable. For a recently discovered ransomware or trojan, such as “WannaCry”, it is helpful to see if it has been active in the past and when it started.

Another important aspect is having the context to be able to analyze all the related traffic to identify concurrent exfiltration of an organization’s Intellectual Property and to quantify and mediate the risk. Threat hunting for RANSOMWARE requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. The Hacker who has control of your system will likely install multiple back-doors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight Identifies your systems conversing with Bad Actors and allows you to backtrack through historical data to see how long it’s been going on.
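Conceptually, that kind of backtracking boils down to cross-referencing retained flow records against a threat-intelligence list, as in the hedged sketch below. The `flow_store.query()` interface and the record field names are hypothetical stand-ins for whatever long-term flow retention system is in use, not a CySight API.

```python
from datetime import datetime, timedelta

def flows_to_bad_actors(flow_store, bad_actor_ips, lookback_days=90):
    """Cross-reference retained flow records against a threat-intelligence
    IP set and report how long each suspicious conversation has been seen.

    `flow_store.query()` and the record keys ('src', 'dst', 'start') are
    hypothetical stand-ins for a long-term flow retention interface.
    """
    since = datetime.utcnow() - timedelta(days=lookback_days)
    hits = {}
    for rec in flow_store.query(start=since):
        for peer in (rec["src"], rec["dst"]):
            if peer in bad_actor_ips:
                first, last = hits.get(peer, (rec["start"], rec["start"]))
                hits[peer] = (min(first, rec["start"]), max(last, rec["start"]))
    return hits  # bad-actor IP -> (first seen, last seen)
```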

Summary

CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs. CySight’s Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network, making performance analytics, anomaly detection, threat intelligence, forensics, compliance, and IP accounting a breeze!

Let us help you today.

The Internet of Things (IoT) – pushing network monitoring to its limits

In the age of the Internet of Things (IoT), billions of connected devices – estimated at 20 billion by the year 2020 – are set to permeate virtually every aspect of daily life and industry. Sensors that track human movement in times of natural disasters, kitchen appliances reminding us to top up on food supplies and even military implementations such as situational awareness in wartime are just a few examples of IoT in action.

Exciting as these times may be, they also highlight a new set of risk factors for Security Specialists who need to answer the call for more vigorous, robust and proactive security solutions. Considerations around security monitoring and management are set to expand far beyond today’s norms as entry points, data volumes and connected hardware multiply at increasing rates in the age of hyper-interconnectedness.

Security monitoring will need to take a more preemptive stance in the age of IoT

With next-generation smart products being used in industries such as utilities, manufacturing, transportation, insurance, and logistics, networks will become exposed to new security vulnerabilities as IoT and enterprises intersect. Smart devices connected to the enterprise can easily act as a bridge to the network, potentially exposing organizations’ information assets. Apply this scenario to a world where virtually every device can communicate with the network from practically anywhere, and the need for more forward-thinking security monitoring becomes apparent. Device-to-device communications will need stronger encryption and ways for network teams to monitor and understand communications, behavior and data patterns. With more “unmanned” computers, appliances and devices coming on-line, understanding new network anomalies will be a challenge.

Networks will become far more heterogeneous

Embedded firmware, operating systems, shorter life-cycles, expanding capabilities and security considerations unique to IoT devices will make networks far more complex and expansive than they are today. IoT will hasten more heterogeneous environments, which security teams must be prepared for. The device influx will also drive IPv6 adoption and introduce new protocols. According to Technology.org, “Enterprises will have to look for solutions capable of guarding data gateways in IoT devices using tailored protocol filters and policy capabilities. Besides, regular security updates and patches will become integral to product lifecycle to eliminate every possibility of a compromise.” This will increase reliance on technologies such as granular NetFlow collection that provide forensics and anomaly detection, offering enterprises trusted security solutions that are both easily deployed and capable of evolving organically alongside new technologies as they are introduced to environments.

IoT will introduce new types of data into the enterprise

Traffic signal systems, power stations, water sanitation plants and other services vital to society are all incorporating IoT to some degree. Device security in a physical and non-physical context will be important as enterprises need to look at ways of preventing unauthorized entry into the network. Gartner asserts that, “IoT objects possess the ability to change the state of the environment around them, or even their own state (for example, by raising the temperature of a room automatically once a sensor has determined it is too cold, or by adjusting the flow of fluids to a patient in a hospital bed based on information about the patient’s medical records)”.

Considering the risk to human life inherent in hacks into systems of this nature, the level of monitoring and surveillance for compliance is becoming more pertinent each day as these kinds of threats are starting to occur. This will place a high demand on end-point security solutions to be both timely and accurate in their correlation of network data, to give Security Teams the needed granularity to provide context around current and evolving risks.

The now infamous Chrysler hack is a primary example of the potentialities of IoT-based breaches and the threats they pose to human safety.

The role of Netflow in forearming the enterprise in the age of IoT

Monitoring systems will be required to identify, categorize and alert Network Operations Centers (NOCs) on a plethora of new datasets, demanding big data capabilities from their network monitoring solutions. NetFlow, if used correctly, can offer an opportunity to provide enterprises with substantial intelligence and an early warning mechanism to assist them in managing the steady move toward IoT and to take a forearmed stance in security operations. NetFlow’s ability to match the scale at which the enterprise will grow means NOCs will neutralize the threat of being overwhelmed in a deluge of devices that generate volumes of data requiring around-the-clock monitoring.

They can achieve deep visibility – central to security in an IoT world – with a NetFlow monitoring, reporting and analysis tool that provides the ability to perform deep security forensics and intelligent baselining, anomaly detection, diagnostics and endpoint threat detection. NetFlow end-point solutions speak to the changing needs of large environments by reducing Mean Time to Know (MTTK), which in turn shrinks Mean Time to Repair and Resolve (MTTR).

For more information on how CySight is helping organizations build comprehensive network security, performance and management solutions, contact us, or download a free copy of our guide on 8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health.

 8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health