Archives

Category Archive for ‘Cyber Security’

3 Key Differences Between NetFlow and Deep Packet Inspection (DPI) Packet Capture Monitoring

The increasing density, complexity, and expanse of modern networking environments have fueled an ongoing debate over which network analysis and monitoring tools best serve the modern engineer, placing Packet Capture and NetFlow analysis for NDR at center stage of the conversation.

Granted, when analyzing unencrypted traffic, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments. As an engineer, however, I tend to focus on solutions that give me the insights I need without placing too heavy a cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of today’s highly dense networks, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, also known as Deep Packet Inspection (DPI), was once rich in network metrics but has largely been defeated by encryption, and its segment-based approach makes it expensive to deploy and maintain. It requires sniffing devices and agents throughout the network, which invariably demand a great deal of maintenance over their lifespan.

In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with DPI can quickly become unfeasible. NetFlow, by contrast, enjoys wide vendor support across virtually the entire networking landscape, making almost every switch, router, firewall, or cloud platform (VMware, VMware VeloCloud, AWS, Azure, GCP) a NetFlow / IPFIX / sFlow / ixFlow “ready” device. This built-in readiness to capture and export data-rich metrics makes it easy for engineers to deploy and utilize. And thanks to NetFlow’s popularity, CySight’s NetFlow analyzer provides varying feature sets with enriched, vendor-specific flow fields so that security operations center (SOC) and network operations center (NOC) teams can take full advantage of data-rich packet flows.
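To illustrate just how lightweight flow export is to consume, here is a minimal sketch of a NetFlow v5 datagram parser using only the Python standard library. This is a toy example, not CySight’s collector: real collectors also handle IPFIX/sFlow templates, sequence tracking, and sampling rates.

```python
import struct
from ipaddress import IPv4Address

V5_HEADER = struct.Struct("!HHIIIIBBH")                # 24-byte export header
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # 48-byte flow record

def parse_netflow_v5(datagram: bytes):
    """Decode a NetFlow v5 export datagram into a list of flow dicts."""
    version, count, *_ = V5_HEADER.unpack_from(datagram, 0)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    flows, offset = [], V5_HEADER.size
    for _ in range(count):
        (src, dst, _nexthop, _in_if, _out_if, pkts, octets, _first, _last,
         sport, dport, _pad, tcp_flags, proto,
         *_rest) = V5_RECORD.unpack_from(datagram, offset)
        flows.append({
            "src": str(IPv4Address(src)), "dst": str(IPv4Address(dst)),
            "sport": sport, "dport": dport, "proto": proto,
            "packets": pkts, "bytes": octets,
        })
        offset += V5_RECORD.size
    return flows
```

In practice the datagram arrives on a UDP socket (commonly port 2055). The point is that the exporting device does the heavy lifting, so the collector decodes compact metadata instead of storing full packets.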

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, the ability of NetFlow, IPFIX, sFlow, and ixFlow to provide WAN-wide metrics in near real-time makes them suitable troubleshooting companions for engineers. Add enriched context, and you enable a very complete qualification of impact, from standard traffic analysis through endpoint threat views, machine learning, and AI diagnostics.

The latest flow methods extend this wealth of information via template-based collection schemes, striking a balance between detail and high-level insight without placing too much demand on networking hardware, which is something that can’t be said for Deep Packet Inspection. NetFlow’s constant evolution alongside the networking landscape sees it complementing solutions such as Cisco’s NBAR, and packet brokers such as Keysight (Ixia), Gigamon, nProbe, NetQuest, Niagara Networks, and CGS Tower Networks have recognized that all they need to do is export flexible, enriched flow fields to reveal details at the packet level.

NetFlow places your environment in greater context

Context is a chief area where granular NetFlow beats out Packet Capture: it allows engineers to quickly locate root causes relating to cyber security, threat hunting, and performance by providing a more situational view of the environment, its data flows, bottleneck-prone segments, application behavior, device sessions, and so on.

One could argue that Deep Packet Inspection (DPI) can provide much of this information too, but with networks today more than 98% encrypted, even using certificates won’t give engineers the broader context around the information it presents. This hamstrings IT teams in detecting anomalies that could be ascribed to any number of factors: cyber threats, untimely system-wide application or operating system updates, or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Deep Packet Inspection obsolete?

Neither Deep Packet Inspection (DPI) nor legacy NetFlow analyzers can scale in retention, so when comparing those two genres of solution, the only win a low-end NetFlow analyzer has over a DPI solution is that DPI is segment-based while a flow solution is inherently agentless.

Using NetFlow to identify an attack profile or illicit traffic can only be attained when flow retention is deep (granular). Done right, NetFlow strikes that balance between detail and context, giving SOCs and NOCs intelligent insights that reveal the broader factors influencing your network’s ability to perform.

Gartner’s assertion that a balance of 80% NetFlow monitoring coupled with 20% Packet Capture is the perfect combination for performance monitoring no longer holds given the rise of encryption, but it correctly attests to NetFlow’s growing prominence as the monitoring tool of choice. As NetFlow and its various iterations, such as sFlow, IPFIX, ixFlow, and cloud flow logs, continue to expand the breadth of context they provide network engineers, that margin is set to increase in NetFlow’s favor over time.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Deep Packet Inspection (DPI) becomes Obsolete as Encryption hits Critical Mass

Increasing cyber-crimes, virtualization, regulatory obligations, and a severe shortage of cyber and network security personnel are impacting organizations. Encryption, IT complexity, surface scraping and siloed information hinder security and network visibility.

Encryption has become the new normal, driven by privacy and security concerns. Enterprises are finding it increasingly difficult to figure out which traffic is bad and which isn’t. Encryption’s exponential adoption has created a significant security visibility challenge globally, and threat actors are now using the lack of decryption to avoid detection.

Encrypted payloads cannot be analyzed, making network risks harder or even impossible to see. More than 95% of internet traffic is now encrypted, blinding Deep Packet Inspection (DPI) and other tools that rely on decrypted packets to inspect traffic and identify risks.

DPI and other techniques that decode packets to detect threats have traditionally been expensive to deploy and maintain and have now entered obsolescence.

As the threat surface grows, organizations have less intelligence with which to identify and manage threats. 99% of other network and cyber technologies preserve only about 1% of network data, causing severe network blindspots and leading security and networking professionals to overlook real dangers.

CySight provides the most precise cyber detection and forensics for on-premises and cloud networks. CySight has 20x more visibility than all of its competitors combined, substantially improving security, application visibility, Zero Trust, and billing. It provides a completely integrated, agentless, and scalable network, endpoint, extended availability, compliance, and forensics solution, without packet decryption.

CySight uses Flow from most networking equipment. It compares traffic to global threat characteristics to detect infected hosts, Ransomware, DDoS, and other suspicious traffic. CySight’s integrated solution provides network, cloud, IoT, and endpoint security and visibility without packet decryption to detect and mitigate hazards.

Using readily available data sources, CySight records flows at unparalleled depth, in a compact footprint, correlating context and using machine learning, predictive AI, and Zero Trust micro-segmentation. CySight identifies and addresses risks, triaging security behaviors and endpoint threats with multi-focal telemetry and contextual information to provide full risk detection and mitigation that other solutions cannot.

Hunt SUNBURST and Trojans with Turbocharged NetFlow

December 13, 2020 was an eye-opener worldwide, as SolarWinds’ Orion software was hacked using a trojanized update known as the SUNBURST backdoor. The damage reached thousands of customers, many of them world leaders in their markets, such as Intel, Microsoft, Lockheed, and Visa, along with several US government agencies. The extent of the damage has not been fully quantified, as more is still being learned; nevertheless, the fallout includes real-world harm.

The recent news of the SolarWinds Orion hack is very unfortunate. The hack has left governments and customers who used the SolarWinds Orion tools especially vulnerable, and the fallout will take many months to be fully recognized. This is a prime example of how a flow metadata tool’s inability to retain sufficient records causes ineffective intelligence, and how the inability to reveal hidden issues and threats is now clearly impacting organizations’ and governments’ networks and connected assets.

Given what we already know and that more is still being learned, it makes good sense to investigate an alternative solution.

What Is the SUNBURST Trojan Attack?

SUNBURST, as named by FireEye, is malware that acts as a trojan horse, designed to look like a safe and trustworthy update for SolarWinds customers. To infiltrate such seemingly well-protected organizations, the hackers first had to infiltrate the SolarWinds infrastructure itself. Once SolarWinds was successfully hacked, the bad actors could rely on the trust between SolarWinds and the targeted organizations to carry out the attack. The malware, which looked like a routine update, was in fact creating a back door, compromising the SolarWinds Orion software and any customer who updated their system.

How was SUNBURST detected?

Initially, the SUNBURST malware went completely undetected for some time. The attackers began installing remote-access-tool malware into the SolarWinds Orion software as far back as March 2020, essentially trojanizing it. On December 8, 2020, FireEye discovered that its own red-team tools had been stolen and began to investigate while reporting the event to the NSA. The NSA, itself a SolarWinds software user and responsible for US cybersecurity defense, was unaware of the hack at the time. A few days later, as the information became more public, various cybersecurity firms began reverse engineering and analyzing the hack.

IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most well-known tools lack the REAL visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention. This is why smart analysts are realizing that threat intelligence and flow analytics today are all about having access to long-term granular intelligence.

From a forensics perspective, you can only analyze the data you retain, and with large and growing network and cloud data flows, most tools (regardless of their marketing claims) simply cannot scale in retention and choose to drop records, keeping only what they believe is the salient data.

Imputed outcome data leads to misleading results, and missing data causes high risk and loss!

A simple way to think about this is if you could imagine trying to collect water from a blasting fire hose into a drinking cup. You just simply cannot collect very much!

Many engineers build scripts to try to attain the missing visibility, doing a lot of heavy lifting, before finally coming to the realization that no matter how much lifting you do, if the data ain’t there, you can’t analyze it.

We found that over 95% of network and cyber visibility tools retain as little as 2% to 5% of the information they collect, resulting in completely missed analytics, severely misleading analytics, and risk!

How does CySight hunt SUNBURST and other Malware?

It is often necessary to look back at historical traffic in light of knowledge we have only just become aware of.

For a recently discovered Ransomware or Trojan, such as SUNBURST, it is helpful to see if it’s been active in the past and when it started. Another example is being able to analyze all the related traffic and qualify how long a specific user or process has been exfiltrating an organization’s Intellectual Property and quantify the risk.

SUNBURST enabled the criminals to install a Remote Access Trojan (RAT). RATs, like most malware, are introduced as part of legitimate-looking files. Once enabled, they allow the hacker to view a screen or a terminal session, letting them look for sensitive data like customers’ credit cards, intellectual property, or sensitive company or government secrets.

Even though many antivirus products can identify many RAT signatures, the software and protocols used to view screens remotely and to exfiltrate files continue to evade many malware detection systems. We must therefore turn to traffic analytics and machine learning to identify traffic behaviors and data movements that are out of the ordinary.
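As a minimal, illustrative sketch of that idea (not CySight’s actual models), even a simple z-score over per-host outbound byte volume separates gross data movement from the baseline population:

```python
from statistics import mean, stdev

def flag_outliers(byte_counts, threshold=3.0):
    """Flag hosts whose outbound byte volume deviates strongly from the
    population baseline (simple z-score heuristic over flow totals)."""
    hosts = list(byte_counts)
    values = [byte_counts[h] for h in hosts]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # perfectly uniform population, nothing stands out
    return [h for h in hosts if (byte_counts[h] - mu) / sigma > threshold]
```

Real behavioral analytics layer many such signals (ports, timing, destinations) and learn baselines over time, but the principle is the same: the flow metadata alone, with no decryption, is enough to see that something is out of the ordinary.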

Anonymity by Obscurity


In order to evade detection, hackers try to hide in plain sight, using protocols that are not usually blocked, such as DNS, HTTP, and port 443, to exfiltrate your data.


Many methods are used to exfiltrate your data. An often-used method is to employ p2p technologies to break files into small pieces and slowly send the data out unnoticed by other monitoring systems. Thanks to CySight’s small-footprint Dropless Collection, you can easily identify sharding, and our anomaly detection will identify the outlier traffic and quickly bring it to your attention. When used in conjunction with a packet broker partner such as Keysight, Gigamon, or nProbe, or another supported packet metadata exporter, CySight provides the extreme application intelligence to give you complete visibility to control the breach.
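One simple way to surface that sharding pattern can be sketched as follows (a toy heuristic with invented thresholds, not CySight’s detection logic): flag sources that emit many small flows to an unusually large number of distinct destinations.

```python
from collections import defaultdict

def detect_sharded_exfil(flows, max_flow_bytes=2048, min_destinations=50):
    """Heuristic: a host steadily emitting many *small* flows to many
    distinct destinations may be sharding data out p2p-style."""
    fanout = defaultdict(set)
    for f in flows:
        if f["bytes"] <= max_flow_bytes:        # only count small transfers
            fanout[f["src"]].add(f["dst"])
    return {h for h, dsts in fanout.items() if len(dsts) >= min_destinations}
```

Normal bulk transfers (few destinations, large flows) pass through untouched; the fan-out of tiny flows is what gives the slow leak away.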

Identifying exposure


In today’s connected world, every incident has a communications component.

You need to keep in mind that all malware needs to “call home”, and today that is going to be through onion-routed connections, encrypted VPNs, or via zombies that have been seeded as botnets, making it difficult if not impossible to identify the hacking teams involved, which may be personally, commercially, or politically motivated bad actors.
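Even when the call-home channel is encrypted or onion-routed, flow metadata retains one weak signal: timing regularity. A minimal sketch (illustrative thresholds, not a production detector) of flagging suspiciously periodic connections between a host pair:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1, min_events=5):
    """Malware 'phoning home' often connects on a near-fixed timer.
    Flag a (src, dst) pair whose inter-connection intervals are
    unusually regular (coefficient of variation below a threshold)."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mu = mean(gaps)
    return mu > 0 and pstdev(gaps) / mu < max_jitter_ratio
```

Human-driven traffic is bursty and irregular; a timer-driven implant ticking every five minutes stands out precisely because its gaps barely vary.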

Multi-focal threat hunting

Threat hunting for SUNBURST or other malware requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without the detail to understand context and impact. A hacker who has control of your system will likely install multiple backdoors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups of SUNBURST or other malware IP addresses or domains. Every CySight instance is built to keep itself aware of new threats, which are automatically downloaded over a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe and from partner threat feeds.

CySight identifies your systems conversing with bad actors and allows you to backtrack through historical data to see how long it’s been going on.


Using Big Data threat feeds collated from multiple sources, thousands of IPs of bad reputation are correlated in real-time with your traffic against threat data that is freshly derived from many enterprises and sources to provide effective visibility of threats and attackers.

  • Cyber feedback

  • Global honeypots

  • Threat feeds

  • Crowd sources

  • Active crawlers

  • External 3rd Party
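Conceptually, correlating your flow records against a blocklist aggregated from feeds like those above can be sketched as follows (a toy illustration, not CySight’s correlation engine):

```python
def correlate_threats(flows, feeds):
    """Union the IP sets from multiple threat feeds, then tag any flow
    whose source or destination appears in the combined blocklist."""
    blocklist = set().union(*feeds) if feeds else set()
    return [f for f in flows if f["src"] in blocklist or f["dst"] in blocklist]
```

Because the match is done on retained flow records, the same lookup can be replayed over months of history the moment a feed adds a newly discovered bad actor.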

So how exactly do you go about turbocharging your Flow and Cloud metadata?

CySight software offers the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market. Lack of granular visibility is one of the main flaws, if not the main flaw, in such products today: they retain as little as 2% to 5% of the information they collect due to inefficient design, severely impacting visibility and creating risk through missing and misleading analytics, at great cost to organizations.

CySight’s Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making performance analytics, anomaly detection, threat intelligence, forensics, compliance, zero trust and IP accounting and mitigation a breeze!

CySight @ CyberTech

Last week we presented CySight at CyberTech in Tel Aviv, Israel. Cybertech is the most significant conference and exhibition of cyber technologies outside of the United States.
Israel is building a name for itself as the global center of cybersecurity and we have a unique network intelligence solution that fits the Israeli cybersecurity vision. CySight’s unique approach to delivering granular Network Security Forensics, Intelligent Behavior Anomaly Detection and Diagnostics and End-Point Threat Detection was appreciated by the “who’s who” of the Israeli Cyber community that intimately understand the need for granular network intelligence and threat mitigation.
The candidness, openness, and warmth of the Israeli community has to be experienced, and I cannot begin to express my gratitude for all the intelligentsia and warm wishes from those who visited our stand. CySight already enhances Check Point firewalls, providing a joint solution with Check Point for ultimate network anomaly analytics and forensics (https://www.checkpoint.com/downloads/sb-checkpoint-netflow.pdf). We look forward to CySight becoming a valuable part of the Israeli cybersecurity space and contributing to its defense.

CySight has been building innovative network analytics solutions for the Enterprise and ISP/Telco marketplace since 1995. At the World Congress of IT in 2002 our early concepts won multiple awards for Security and Business Intelligence for our DigiToll software and we continue to deliver and extend our superior network forensics and detection technology. Our objectives are to keep creating tools that build a safer Internet with unique methods to identify and mitigate undesirable traffic.

CySight is a premier flow-analytics solution providing extreme visibility eliminating network blindspots. Anomaly detection and end-point threat intelligence coupled with unique granularity for high-compliance meta-data retention and security forensics help organizations reduce risks associated with inappropriate and malicious traffic and poor performance. Trusted globally by the largest companies for its scalability and flexible analytics. Perpetual diagnostics enable fast mitigation from DDoS, insider threats, botnets, illicit transfers and other bad actors.
Useful links:
8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Cyberwar Defense using Predictive AI Baselining

The world is bracing for a worldwide cyberwar as a result of the current political events. Cyberattacks can be carried out by governments and hackers in an effort to destabilize economies and undermine democracy. Rather than launching cyberattacks, state-funded cyber warfare teams have been studying vulnerabilities for years.

An important transition has occurred, and it is the emergence of bad actors from unfriendly countries that must be taken seriously. The most heinous criminals in this new cyberwarfare campaign are no longer hiding. Experts now believe that a country could conduct more sophisticated cyberattacks on national and commercial networks. Many countries are capable of conducting cyberattacks against other countries, and all parties appear to be prepared for cyber clashes.

So, how would cyberwarfare play out, and how can organizations defend against such attacks?

The first step is to presume that your network has been penetrated or will be compromised soon, and that several attack routes will be employed to disrupt business continuity or vital infrastructure.

Denial-of-service (DoS/DDoS) attacks are capable of spreading widespread panic by overloading network infrastructures and network assets, rendering them inoperable, whether they are servers, communication lines, or other critical technologies in a region.
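As a toy illustration of how such an overload shows up in flow telemetry (thresholds invented for the example, not a product’s actual logic), a volumetric attack is visible as a sudden multiple of the normal per-minute flow count:

```python
def ddos_spike(flows_per_minute, baseline_window=60, factor=10):
    """Compare the latest per-minute flow count against a trailing
    baseline; a sudden multiple of the norm is the classic signature
    of a volumetric DDoS ramp-up."""
    if len(flows_per_minute) <= baseline_window:
        return False                        # not enough history yet
    *history, latest = flows_per_minute[-(baseline_window + 1):]
    baseline = sum(history) / len(history)
    return baseline > 0 and latest > factor * baseline
```

Production anomaly detection would baseline per interface, per protocol, and per time of day, but even this crude trailing average distinguishes a flood from normal variance.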

In 2021, ransomware became the most popular criminal tactic, but national cyber-warfare teams in 2022 are now keen to use it for first strikes, propaganda, and military fundraising, and it is only a matter of time before this escalates. Ransomware tactics are used in politically motivated attacks to encrypt computers and render them inoperable. Despite using publicly accessible ransomware code, this is now considered weaponized malware because there is little to no possibility that a decryption key will be released. Ransomware assaults by financially motivated criminals have a different objective, which must be identified before they cause financial and social damage, as detailed in a recent RANSOMWARE PAPER

To win the cyberwar against either cyber extortion or cyberwarfare attacks, you must first have a complete 360-degree view of your network, with deep transparency and intelligent context to detect dangers within your data.

Given what we already know and the fact that more is continually being discovered, it makes sense to evaluate our one-of-a-kind integrated Predictive AI Baselining and Cyber Detection solution.

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention.

This is why smart analysts are realizing that threat intelligence and flow analytics today are all about having access to long-term granular intelligence. From a forensics perspective, you can only analyze the data you retain, and with large and growing network and cloud data flows, most tools (regardless of their marketing claims) simply cannot scale in retention and choose to drop records, keeping only what they believe is the salient data.

Imputed outcome data leads to misleading results, and missing data causes high risk and loss!


So how exactly do you go about defending your organization’s network and connected assets?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques, solving the problem of archiving huge flow feeds in the smallest footprint and at the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property.

Additionally, ransomware and other cyber-attacks continue to impact businesses, so you need both machine learning and endpoint threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple yet potentially the most powerful way to find ransomware and other network and cloud issues. The footprints of all communications are present in the flow data, and given the right tools you could retain all the evidence of an attack, infiltration, or exfiltration.

However, not all flow analytics solutions are created equal, and due to an inability to scale in retention, the NetFlow ideal becomes unattainable. For a recently discovered ransomware or trojan, such as WannaCry, it is helpful to see whether it has been active in the past and when it started.

Another important aspect is having the context to analyze all the related traffic, identify concurrent exfiltration of an organization’s intellectual property, and quantify and remediate the risk. Threat hunting for ransomware requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without the detail to understand context and impact. A hacker who has control of your system will likely install multiple backdoors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight Predictive AI Baselining analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight Identifies your systems conversing with Bad Actors and allows you to backtrack through historical data to see how long it’s been going on.

Summary

IdeaData’s CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs.

CySight’s Predictive AI Baselining, Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making threat intelligence, anomaly detection, forensics, compliance, performance analytics and IP accounting a breeze!

Let us help you today. Please schedule a time to meet https://calendly.com/cysight/

Advanced Predictive AI leveraging Granular Flow-Based Network Analytics.

IT’S WHAT YOU DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS.

Existing network management and network security point solutions are facing a major challenge due to the increasing complexity of the IT infrastructure.

The main issue is a lack of visibility into all aspects of physical network and cloud network usage, as well as increasing compliance, service level management, regulatory mandates, a rising level of sophistication in cybercrime, and increasing server virtualization.

With appropriate visibility and context, a variety of network issues can be resolved and handled by understanding the causes of network slowdowns and outages, detecting cyber-attacks and risky traffic, determining the origin and nature, and assessing the impact.

It’s clear that in today’s work-at-home, cyberwar, ransomware world, having adequate network visibility is critical. But defining how much visibility counts as the “right” visibility is becoming more difficult, and more often than not even well-seasoned professionals make incorrect assumptions about the visibility they think they have. These misperceptions and malformed assumptions are much more common than you would expect, and you would be forgiven for thinking you have everything under control.

When it comes to resolving IT incidents and security risks and assessing the business impact, every minute counts. The primary goal of Predictive AI Baselining coupled with deep contextual Network Forensics is to improve the visibility of Network Traffic by removing network blindspots and identifying the sources and causes of high-impact traffic.
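To make the baselining idea concrete, here is a toy stand-in (illustrative only, not CySight’s predictive engine) that learns a per-hour-of-day traffic norm from historical samples and flags values that fall far outside it:

```python
from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(samples):
    """Learn a per-hour-of-day baseline (mean, stddev) from historical
    (hour, bytes) samples, capturing the daily seasonality of traffic."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {h: (mean(v), pstdev(v)) for h, v in by_hour.items()}

def is_anomalous(baseline, hour, value, n_sigma=3.0):
    """Flag a reading that deviates more than n_sigma from that hour's norm."""
    mu, sigma = baseline[hour]
    return sigma > 0 and abs(value - mu) > n_sigma * sigma
```

The same volume can be perfectly normal at 2 p.m. and a glaring anomaly at 3 a.m.; baselining per time bucket is what lets the alert carry that context.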

Inadequate solutions (even the most well-known) lull you into a false level of comfort, but because they tend to retain only the top 2% to 5% of network communications, they frequently cause false positives and red herrings. Cyber threats can come from a variety of sources: new types of crawlers or botnets, or infiltration and ultimately exfiltration that can destroy a business.

Networks are becoming more complex. Through negligence, such as failing to update and patch security holes, many inadvertent threats can open the door to malicious outsiders. Your network could be used to download or host illegal materials, or it could be used, entirely or in part, to launch an attack. Ransomware attacks are still on the rise, and new ways to infiltrate organizations are being discovered. Denial of Service (DoS) and distributed denial of service (DDoS) attacks continue unabated, posing a significant risk to your organization. Insider threats can also occur as a result of internal hacking or a breach of trust, and your intellectual property may be slowly leaked through negligence, hacking, or compromise by disgruntled employees.

Whether you are buying a phone, a laptop, or a cyber security visibility solution, the same rule applies: marketers are out to get your hard-earned cash by flooding you with specifications and solutions whose abilities are radically overstated. Machine Learning (ML) and Artificial Intelligence (AI) are two of the most recent acronyms to join the parade. The only thing you can know for sure, dear cyber and network professional reader, is that they hold a lot of promise.

One thing I can tell you from many years of experience building flow analytics, threat intelligence, and cyber security detection solutions is that without adequate data, your results become skewed and misleading. Machine learning and AI enable high-speed detection and mitigation, but without granular analytics (aka big data) you won’t know what you don’t know, and neither will your AI!

In our current Covid world, we have all come to appreciate, in some way, the importance of big data, ML, and AI, and just how quickly they can help mitigate a global health crisis when properly applied. We only have to look back a few years, to when drug companies didn’t have access to granular data, to see the severe impact that poor data had on people’s lives; Thalidomide is one example. In the same way, when cyber and network visibility solutions are only surface-scraping data, the information will be incorrect and misleading, and could seriously impact your network and the livelihoods of the people you work for and with.

The Red Pill or The Blue Pill?

The concept of flow- or packet-based analytics is straightforward, yet these have the potential to be the most powerful tools for detecting ransomware and other network and cloud concerns. All communications leave a trail in the flow data, and with the correct tools you can recover all the evidence of an attack, penetration, or exfiltration.

Not all analytics systems are made equal, and the flow/packet ideal becomes unattainable for other tools because of their inability to scale in retention. Even well-known tools have serious flaws and are limited in their ability to retain complete records, which is often overlooked. They do not effectively provide visibility into the blindspots they claim to cover.

As already pointed out, over 95% of network and deep packet inspection (DPI) solutions struggle to retain even 2% to 5% of the data captured in medium to large networks, resulting in entirely missed diagnoses and significantly misleading analytics that lead to misdiagnosis and risk!

It is critical to have the context and visibility necessary to assess all relevant traffic to discover concurrent intellectual property exfiltration and to quantify and mitigate the risk. It's essential to determine whether a newly found Trojan or ransomware has been active in the past, when it entered, and what systems are still at risk.

Threat hunting demands multi-focal analysis at a granular level that sampling and surface flow analytics methods just cannot provide. It is ineffective to be alerted to a potential threat without its context and consequence. The hacker who has gained control of your system is likely to install backdoors on various interconnected systems so they can re-enter when you are unaware. As ransomware progresses, it will continue to exploit weaknesses in infrastructure.

Often those most vulnerable are those who believe they have the visibility to detect.

Network Matrix of Knowledge

Post-mortem analysis of incidents is required, as is the ability to analyze historical behaviors, investigate intrusion scenarios and potential data breaches, qualify internal threats from employee misuse, and quantify external threats from bad actors.

The ability to perform network forensics at a granular level enables an organization to discover issues and high-risk communications happening in real-time, or those that occur over a prolonged period such as data leaks. While standard security devices such as firewalls, intrusion detection systems, packet brokers or packet recorders may already be in place, they lack the ability to record and report on every network traffic transfer over the long term.

According to industry analysts, enterprise IT security necessitates a shift away from prevention-centric security strategies toward information- and end-user-centric strategies focused on an infrastructure's endpoints: advanced targeted attacks are rendering prevention-centric strategies obsolete, and today cyberwar is a reality that will impact business and government alike.

As every incident response action in today’s connected world includes a communications component, using an integrated cyber and network intelligence approach provides a superior and cost-effective way to significantly reduce the Mean Time To Know (MTTK) for a wide range of network issues or risky traffic, reducing wasted effort and associated direct and indirect costs.

Understanding the Shift Towards Flow-Based Metadata for Network and Cloud Cyber-Intelligence

  • The IT infrastructure is continually growing in complexity.
  • Deploying packet capture across an organization is costly and prohibitive, especially when distributed or deployed per segment.
  • "Blocking & tackling" (prevention) has become the least effective measure.
  • Advanced targeted attacks are rendering prevention-centric security strategies obsolete.
  • There is a trend towards information- and end-user-centric security strategies focused on an infrastructure's endpoints.
  • Without making use of collective sharing of threat and attacker intelligence, you will not be able to defend your business.

So what now?

If prevention isn’t working, what can IT still do about it?

  • In most cases, information must become the focal point of our information security strategies. IT can no longer impose invasive controls on users' devices or the services they utilize.

Is there a way for organizations to gain a clear picture of what transpired after a security breach?

  • Detailed monitoring and recording of interactions with content and systems. Predictive AI Baselining, Granular Forensics, Anomaly Detection and Threat Intelligence capabilities are needed to quickly identify which other users were targeted, which systems were potentially compromised and what information was exfiltrated.

How do you identify attacks without signature-based mechanisms?

  • Pervasive monitoring enables you to identify meaningful deviations from normal behavior to infer malicious intent. Nefarious traffic can be identified by correlating real-time threat feeds with current flows. Machine learning can be used to discover outliers and repeat offenders.
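As a rough illustration of the "meaningful deviation" idea above, the sketch below flags minutes whose traffic volume strays far from a rolling baseline. It is a minimal, hypothetical example in Python; the function name, window size, and threshold are assumptions for illustration, not any vendor's implementation:

```python
from statistics import mean, stdev

def find_outliers(per_minute_bytes, window=30, threshold=3.0):
    """Flag minutes whose byte count deviates strongly from a rolling baseline.

    A minute is an outlier when it exceeds the mean of the preceding
    `window` minutes by more than `threshold` standard deviations.
    """
    outliers = []
    for i in range(window, len(per_minute_bytes)):
        baseline = per_minute_bytes[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (per_minute_bytes[i] - mu) / sigma > threshold:
            outliers.append(i)
    return outliers

# 60 quiet minutes of roughly 1 MB each, then a sudden 25 MB burst.
traffic = [1_000_000 + (i % 7) * 10_000 for i in range(60)] + [25_000_000]
print(find_outliers(traffic))  # only the final, bursty minute is flagged
```

A real system would add seasonality (time-of-day baselines), per-host models, and threat-feed correlation on top of this basic statistical test.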

Summing up

Network security and network monitoring have come a long way and jumped through all kinds of hoops to reach the point they have today. Unfortunately, through the years, cyber marketing has outpaced cyber solutions, and we now have misconceptions that can do considerable damage to an organization.

The biggest threat is always the one you cannot see, the one that hits you hardest once it has slowly and comfortably established itself in a network undetected. Complete visibility can only be achieved through 100% collection and retention of all data traversing a network; otherwise even a single blind spot can affect the entire organization as if it were never protected to begin with. Just like a single weak link in a chain, cyber criminals will find the perfect access point for penetration.

Inadequate solutions that retain only the top 2% or 5% of network communications frequently cause false positives and red herrings. You need 100% access to your comms data for full visibility, but how can you be sure that you will have it?

You need free access to full visibility to unlock all your data, and an intelligent predictive AI technology that can autonomously and quickly identify what's not normal at both the macro and micro level of your network, cloud, servers, IoT devices and other network-connected assets.

Get complete visibility with CySight now >>>

5 Ways Flow Based Network Monitoring Solutions Need to Scale

Partial Truth Only Results in Assumptions

A common gripe among network engineers is that their current network monitoring solution doesn't provide the depth of information needed to quickly ascertain the true cause of a network issue. Imagine reading a book that is missing 4 out of every 6 words: understanding the context becomes hopeless and the book has next to no value. Many teams have already over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons, or by relying on disparate systems that often don't interface well with each other. There is also an often-mistaken belief that the network monitoring solutions they have invested in will suddenly give them the depth of visibility required to manage complex networks.

A best-value approach to NDR, NTA and general network monitoring is to use a flow-based analytics methodology such as NetFlow, sFlow or IPFIX.

The Misconception & What Really Matters

In this market, it's common for the industry to express a flow software's scaling capability in flows-per-second. Flows-per-second is a misleading guide to scalability: it is often used to hide a flow collector's inability to archive flow data, overstating collection capability and presenting a larger number simply because it is measured in seconds rather than minutes. It's important to look not only at flows-per-second but to understand the picture created once all the elements are used together. Much like a painting of a detailed landscape, the finer the brush and the more colors used, the more complete and truly detailed the picture of the landscape will be.

Granularity is the prime factor to focus on, specifically the granularity retained per minute (the flow retention rate). Naturally, speed is a significant and critical factor as well. The speed and flexibility of alerting, reporting, forensic depth, and diagnostics all play a strategic role, but each will be hampered by scalability limitations. Observing a solution's behavior under high flow variance or sudden bursts, and considering the number of devices and interfaces involved, helps you appreciate the absolute significance of scalability in producing actionable insights and analytics. Without it, the ability to retain short-term and historical collections, which provide vital trace-back information, would be nonexistent. To provide the visibility needed for the ever-growing number of tasks analysts and engineers deal with daily, and to resolve issues to completion, NDR, NTA and general Network Monitoring System (NMS) tools must be able to scale at every level of consumption and retention.

How Should Monitoring Solutions Scale?

Flow-Based Network Detection and Response (NDR) / Network Traffic Analysis (NTA) software needs to scale in its collection of data in five ways:

Ingestion Capability – Also referred to as collection: the number of flows that can be consumed by a single collector. This is a feat most monitoring solutions can accomplish; unfortunately, it is also the one they pride themselves on. It is an important ability but is only the first of several crucial capabilities that determine the quality of insights and intelligence of a monitoring system. Ingestion is only the ability to take in data; it does not mean retention, and therefore does very little on its own.

Digestion Capability – Also referred to as retention: the number of flow records that can be retained by a single collector. This is the most overlooked and difficult step in the network monitoring world. Digestion/flow retention rates are particularly critical to quantify, as they dictate the level of granularity that allows a flow-based NMS to deliver the visibility required for quality Predictive AI Baselining, Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis, and Data Retention compliance. Without retaining data you cannot inspect it beyond the surface, losing the value of network or cloud visibility.
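To make the ingestion-versus-retention gap concrete, here is a back-of-the-envelope calculation with hypothetical numbers; the figures are illustrative, not measurements of any particular product:

```python
def retention_pct(ingested_per_sec, retained_per_min):
    """Percentage of ingested flow records actually archived each minute."""
    ingested_per_min = ingested_per_sec * 60
    return 100.0 * retained_per_min / ingested_per_min

# Hypothetical: a collector ingesting 50,000 flows/sec (3,000,000 flows/min)
# whose database archives only 100,000 records/min retains ~3.3% of traffic.
print(round(retention_pct(50_000, 100_000), 1))
```

The headline flows-per-second figure describes only the numerator's upper bound; it is the retained-per-minute figure that determines what you can actually analyze later.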

Multitasking Processes – Pertains to the multitasking strength of a solution and its ability to scale by spreading the load of collection processes across multiple CPUs on a single server. This seems like an obvious approach, but many systems take a linear, serial approach to ingesting multiple streams of flow data. These architectures cannot scale when new flow-generating devices, interfaces, or endpoints are added, forcing you to deploy multiple instances of the solution, which becomes ineffective and expensive.

Clustered Collection – Refers to the ability of a flow-based solution to run a single data warehouse that takes its input from a cluster of collectors as a single unit, as a means of load balancing. In a large environment you typically have very large equipment sending massive amounts of data to collectors. To handle all that data, you must distribute the load across a cluster of collectors on multiple machines instead of overloading a single machine. This ability lets organizations scale up their use of data instead of dropping it as they attempt to collect it.

Hierarchical Correlation – The purpose of hierarchical correlation is to take information from multiple databases and aggregate it into a single super-SIEM. With the need to consume and retain huge amounts of data comes the need to manage and oversee that data intelligently. Hierarchical correlation is designed to enable parallel analytics across distributed data warehouses and aggregate their results. In network monitoring, being overwhelmed with data to the point where you cannot find what you need is about as useful as being handed every book in the world and asked a single question that is answered in only one of them.

Network traffic visibility is considerably improved by reducing network blind spots and qualifying the sources and causes of communications that impair business continuity. The capacity to capture flow at a finer level enables new Predictive AI Baselining and Machine Learning application analysis and risk mitigation.

There are many critical abilities that a network monitoring solution must offer its users, and all of them are affected by whether or not the solution can scale.

Visibility is a range, not a binary; the question is not whether you have visibility but whether you have enough of it to achieve your goals and keep your organization productive and safe.

How to Use a Network Behavior Analysis Tool to Your Advantage


Cybersecurity threats can come in many forms. They can easily slip through your network’s defenses if you let your guard down, even for a second. Protect your business by leveraging network behavior analysis (NBA). Implementing behavioral analysis tools helps organizations detect and stop suspicious activities within their networks before they happen and limit the damage if they do happen.

According to Accenture, improving network security is the top priority for most companies in 2021. In fact, the majority have increased their spending on network security by more than 25% in recent months.

With that, here are some ways to use network behavior anomaly detection tools to your advantage.

1. Leverage artificial intelligence

Nowadays, you can easily leverage artificial intelligence (AI) and machine learning (ML) in your network monitoring. In fact, various software systems utilize AI diagnostics to enhance the detection of anomalies within your network. Through dynamic machine learning, such a system can quickly learn to differentiate between normal and suspicious activities.

AI-powered NBA software can continuously adapt to new threats and discover outliers without much interference from you. This way, it can provide early warning of potential cyberattacks before they become serious, including DDoS attacks, Advanced Persistent Threats, and anomalous traffic.

Hence, you should treat AI diagnostics as a key criterion when evaluating network behavior analysis tools.

2. Take advantage of its automation

One of the biggest benefits of a network anomaly detection program is helping you save time and labor in detecting and resolving network issues. It is constantly watching your network, collecting data, and analyzing activities within it. It will then notify you and your network administrators of any threats or anomalies within your network.

Moreover, it can automatically mitigate some security threats from rogue applications to prevent sudden downtimes. It can also eliminate blind spots within your network security, fortifying your defenses and visibility. As a result, you or your administrators can qualify and detect network traffic passively.

3. Utilize NBA data and analytics

As more businesses become data-driven, big data gains momentum. It can aid your marketing teams in designing better campaigns or your sales team in increasing your business’ revenues. And through network behavior analysis, you can deep-mine large volumes of data from day-to-day operations.

For security engineers, big data analytics has become an effective defense against network attacks and vulnerabilities. It can give them deeper visibility into increasingly complex and larger network systems. 

Today’s advanced analytics platforms are designed to handle and process larger volumes of data. Furthermore, these platforms can learn and evolve from such data, resulting in stronger network behavior analytics and local threat detection.

4. Optimize network anomaly detection

A common issue with network monitoring solutions is their tendency to overburden network and security managers with false-positive readings. This is due to the lack of in-depth information to confirm the actual cause of a network issue. Hence, it is important to consistently optimize your network behavior analysis tool.

One way to do this is to use a flow-based analytics methodology for your network monitoring. You can do so with software like CySight, which uses artificial intelligence to analyze, segment, and learn from granular telemetry from your network infrastructure flows in real-time. It also enables you to configure and fine-tune your network behavior analysis for more accurate and in-depth monitoring.

5. Integrate with other security solutions

Enhance your experience with your network behavior analytics tool by integrating it with your existing security solutions, such as intrusion prevention systems (IPS), firewalls, and more.

Through integrations, you can cross-analyze data between security tools for better visibility and more in-depth insights on your network safety. Having several security systems working together at once means one can detect or mitigate certain behaviors that are undetectable for the other. This also ensures you cover all the bases and leave no room for vulnerabilities in your network.

Improving network security

As your business strives towards total digital transformation, you need to start investing in your network security. Threats can come in many forms, and once one slips past your guard, it might be too late.

Network behavior analysis can help fortify your network security. It constantly monitors your network and traffic and notifies you of any suspicious activities or changes. This way, you can immediately mitigate any potential issues before they can get out of hand. Check out CySight to know more about the benefits of network behavior analysis.

But, of course, a tool can only be as good as the people using it. Hence, you must make sure you hire the right people for your network security team. Consider recruiting someone with an online software engineering master's degree to help you strengthen your network.


Ref: Accenture Report

Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows per second a flawed way to measure a netflow collector’s capability?

Flows-per-second is often considered the primary yardstick for measuring a NetFlow analyzer's flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means of providing network professionals with the data to make sense of the traffic on their network without having to resort to expensive per-segment packet sniffing tools.

A flow record contains, at minimum, the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap or other network gateway. A typical flow record will contain at minimum: Source IP, Destination IP, Source Port, Destination Port, Protocol, ToS, Ingress Interface and Egress Interface. Flow records are exported to a flow collector where they are ingested, and information oriented to the engineer's purposes is displayed.
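The minimum fields listed above can be sketched as a simple data structure. This is an illustrative model only; the field names are assumptions, and real NetFlow v5/v9 or IPFIX templates carry additional fields such as timestamps and TCP flags:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    """Illustrative minimal flow record (field names are assumptions)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int     # IANA protocol number, e.g. 6 = TCP, 17 = UDP
    tos: int          # Type of Service byte
    ingress_if: int   # SNMP index of the receiving interface
    egress_if: int    # SNMP index of the sending interface
    bytes: int = 0
    packets: int = 0

# One HTTPS conversation summarized as a single flow record.
rec = FlowRecord("10.0.0.5", "203.0.113.9", 51514, 443, 6, 0, 2, 3, 18_300, 24)
print(rec.protocol, rec.dst_port)
```

Note that a flow record summarizes a conversation rather than carrying its payload, which is why flow export is so much cheaper to store than full packet capture.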

Measurement

Measurement has always been how the IT industry expresses power and competency. However, a formula used to reflect power and ability changes when a technology design undergoes a paradigm shift.

For example, when expressing how fast a computer is, we used to measure CPU clock speed. We believed that the higher the clock speed, the more powerful the computer. However, when multi-core chips were introduced, clock speeds dropped yet CPUs in fact became more powerful. The primary clock-speed measurement became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading because it does not reflect the actual power and capability of a flow collector to capture and process flow data, and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure, as is quantifying a product's scalability. Various factors can dramatically impact the ability to collect flows and to retain sufficient flows to perform higher-end diagnostics.

It's important to look not just at flows-per-second but at the granularity retained per minute (flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; scalability when impacted by high flow variance, sudden bursts, and the number of devices and interfaces; the speed of reporting over time; the ability to retain short-term and historical collections; and the confluence of these factors as it pertains to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine, as appropriate granularity is needed to deliver the visibility required for Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis and Data Retention compliance.

The higher the flows-per-second and the flow variance, the more challenging it becomes to achieve a high flow retention rate when archiving flow records in a data warehouse.

A vendor's capability statement might reflect a high flows-per-second consumption ability, but many flow software tools have retention-rate limitations by design.

It can mean that, irrespective of achieving a high flow collection rate, the NetFlow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the Top 10 bandwidth abusers. NetFlow products of this kind are easily identified because they tend to offer benefits oriented primarily to identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course a very important benefit of a NetFlow analyzer. However, it is of marginal benefit today, when a large amount of the abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar of many NetFlow analysis products. Many abuses such as DDoS, P2P, botnets, and hacker or insider data exfiltration continue to occur and can at minimum impact networking equipment and user experience. The inability to quantify and understand small flows creates great risk, leaving organizations exposed.
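A small simulation shows why top-N-by-bytes retention hides this class of abuse: a single large transfer crowds out thousands of tiny scan flows. The retention logic and numbers below are hypothetical, chosen only to illustrate the effect:

```python
def retain_top_n(flows, n):
    """Keep only the n largest flows by byte count (a common retention shortcut)."""
    return sorted(flows, key=lambda f: f["bytes"], reverse=True)[:n]

# One 500 MB transfer plus 10,000 tiny scan flows of 120 bytes each.
flows = [{"src": "10.0.0.1", "bytes": 500_000_000}]
flows += [{"src": f"198.51.100.{i % 250}", "bytes": 120} for i in range(10_000)]

kept = retain_top_n(flows, 500)
small_kept = sum(1 for f in kept if f["bytes"] < 1_000)
print(f"{small_kept} of 10,000 small flows survive top-500 retention")
```

By byte volume the big transfer dominates, but the evidence of a distributed scan lives almost entirely in the thousands of small flows that this retention scheme throws away.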

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product's ability to collect and retain the critical information required in today's world, where copious data has created severe network blind spots.

To qualify whether a tool is really suitable for the purpose, you need to know more about the flows-per-second collection formula being quoted by the vendor, and some deeper investigation should be carried out to qualify the claims.

 

With this in mind here are 3 key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

  • Qualify whether the flows-per-second rate quoted is a burst rate or a sustained rate.
  • Ask how the collection and retention rates are affected when flows have high flow variance (e.g. during a DDoS attack).
  • Ask how collection, archiving and reporting are impacted when flow variance grows as many devices, interfaces and distinct IPv4/IPv6 conversations are added, and test what degradation in speed to expect after the system has been recording for some time.
  • Ask how the collection and retention rates change when additional fields or measurements are added to the flow template (e.g. MPLS, MAC address, URL, latency).

  2. How many flow records can be retained per minute?

  • Ask how the actual number of records inserted into the data warehouse per minute can be verified for short-term and historical collection.
  • Ask what happens to the flows that were not retained.
  • Ask what the flow retention logic is (e.g. Top Bytes, First N).

  3. What information granularity is retained, both short-term and historically?

  • Does the data's time granularity degrade as the data ages (e.g. 1 day of data retained per minute, 2 days per hour, 5 days per quarter)?
  • Can you control the granularity, and if so, for how long?
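A quick way to quantify the granularity question is to count how many datapoints per day survive a given rollup interval. The schedule here is hypothetical, purely to show how quickly detail disappears as data ages:

```python
def points_per_day(rollup_minutes):
    """Datapoints kept for one day of traffic at a given rollup interval."""
    return (24 * 60) // rollup_minutes

# Hypothetical degradation schedule: per-minute data at first, hourly later.
print(points_per_day(1))   # minute granularity keeps 1440 points per day
print(points_per_day(60))  # hourly rollup keeps only 24 points per day
```

A 60x reduction in datapoints means a short-lived incident that is obvious at minute granularity can vanish entirely once the data has been rolled up to hours.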

 

Remember – Rate of collection does not translate to information retention.

Do you know what's really stored in the software's database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that information retention granularity that provides a flow product's benefits.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Big Data – A Global Approach To Local Threat Detection

From helping prevent loss of life in the event of a natural disaster, to aiding marketing teams in designing more targeted strategies to reach new customers, big data seems to be the chief talking point amongst a broad and diverse circle of professionals.

For Security Engineers, big data analytics is proving to be an effective defense against evolving network intrusions thanks to the delivery of near real-time insights based on high volumes of diverse network data. This is largely thanks to technological advances that have resulted in the capacity to transmit, capture, store and analyze swathes of data through high-powered and relatively low-cost computing systems.

In this blog, we’ll take a look at how big data is bringing deeper visibility to security teams as environments increase in complexity and our reliance on pervading network systems intensifies.

Big data analysis is providing answers to the data deluge dilemma

Large environments generate gigabytes of raw user, application and device metrics by the minute, leaving security teams stranded in a deluge of data. Placing them further on the back foot is the need to sift through this data, which consumes considerable resources and at best provides only a retrospective view of security breaches.

Big data offers a solution to the issue of “too much data too fast” through the rapid analysis of swathes of disparate metrics through advanced and evolving analytical platforms. The result is actionable security intelligence, based on comprehensive datasets, presented in an easy-to-consume format that not only provides historic views of network events, but enables security teams to better anticipate threats as they evolve.

In addition, big data’s ability to facilitate more accurate predictions on future events is a strong motivating factor for the adoption of the discipline within the context of information security.

Leveraging big data to build the secure networks of tomorrow

As new technologies arrive on the scene, they introduce businesses to new opportunities – and vulnerabilities. However, the application of Predictive AI Baselining analytics to network security in the context of the evolving network is helping to build the secure, stable and predictable networks of tomorrow. Detecting modern, more advanced threats requires big data capabilities from incumbent intrusion detection and prevention (IDS/IPS) solutions to distinguish normal traffic from potential threats.

By contextualizing diverse sets of data, Security Engineers can more effectively detect stealthily designed threats that traditional monitoring methodologies often fail to pick up. For example, Advanced Persistent Threats (APT) are notorious for their ability to go undetected by masking themselves as day-to-day network traffic. These low visibility attacks can occur over long periods of time and on separate devices, making them difficult to detect since no discernible patterns arise from their activities through the lens of traditional monitoring systems.
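One way to picture how contextualized, long-window analysis catches such low-visibility activity: sum outbound bytes per conversation across many days, so transfers that stay under any single day's alarm still surface in aggregate. This is an illustrative sketch with hypothetical thresholds, not a description of any specific product:

```python
from collections import defaultdict

def slow_exfil_suspects(daily_flows, byte_threshold):
    """Sum outbound bytes per (src, dst) pair across a multi-day window.

    A pair that moves modest amounts on any single day but a large total
    over the whole window is a candidate for low-and-slow exfiltration.
    """
    totals = defaultdict(int)
    for day in daily_flows:
        for src, dst, nbytes in day:
            totals[(src, dst)] += nbytes
    return {pair: b for pair, b in totals.items() if b >= byte_threshold}

# 30 days of ~40 MB/day to one external host: under a 100 MB/day alarm,
# yet roughly 1.2 GB in total over the month.
daily = [[("10.0.0.7", "203.0.113.50", 40_000_000)] for _ in range(30)]
print(slow_exfil_suspects(daily, 1_000_000_000))
```

The point of the example is the window, not the arithmetic: detection of this pattern is only possible if the underlying flow data is retained at granular detail for the whole period.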

Big data Predictive AI Baselining analytics lifts the veil on threats that operate under the radar of traditional signature and log-based security solutions by contextualizing traffic and giving NOCs a deeper understanding of the data that traverses the wire.

Gartner states that, “Big data Predictive AI Baselining analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises.”  It also eliminates the siloed approach to security monitoring by converging network traffic and organizing it in a central data repository for analysis; resulting in much needed granularity for effective intrusion detection, prevention and security forensics.

In addition, Predictive AI Baselining analytics eliminates barriers to internal collaborations between Network, Security and Performance Engineers by further contextualizing network data that traditionally acted as separate pieces of a very large puzzle.

So is big data Predictive AI Baselining analytics the future of network monitoring?

In a way, NOC teams have been using big data long before the discipline went mainstream. Large networks have always produced high volumes of data at high speeds – only now, that influx has intensified exponentially.

Thankfully, with the rapid evolution of computing power at relatively low cost, the possibilities of what our data can tell us about our networks are becoming more apparent.

The timing couldn’t have been more appropriate since traditional perimeter-based IDS\IPS no longer meet the demands of modern networks that span vast geographical areas with multiple entry points.

In the age of cloud, mobility, ubiquitous Internet and the ever-expanding enterprise environment, big data capabilities will and should become an intrinsic part of virtually every security apparatus.


How to counter-punch botnets, viruses, ToR & more with Netflow [Pt 1]

You can’t secure what you can’t see and you don’t know what you don’t know.

Many network and security professionals assume they can simply analyze data captured by their standard security devices, such as firewalls and intrusion detection systems. They quickly discover these devices' limitations: they were not designed to record and report on every transaction, and they lack the granularity, scalability and historic data retention to do so. Network devices like routers, switches, Wi-Fi access points or VMware servers also typically lack any sophisticated anti-virus software.

The mark of a well-constructed traffic analyzer is presenting information in a manner that quickly enables security teams to act: simple views backed by deep contextual data supporting the summaries, so teams are not bogged down by detail unless it is required, and even then with elegant means of extracting forensics through simple but powerful visuals that enable a quick grasp of the context and impact of a security event.

Using NetFlow Correlation to Detect Intrusions

Host Reputation is one of the best detection methods that can be used against Advanced Persistent Threats. There are many data sources to choose from and some are more comprehensive than others.

Today these blacklists are mostly IPv4 and domain oriented, designed to be used primarily by firewalls, network intrusion systems and antivirus software.

They can also be used in NetFlow systems very successfully, as long as the selected flow technology can scale to support the thousands of known compromised endpoints, frequently update the threat data, and record the full detail of every compromised flow and of subsequent conversations with the compromised systems, in order to discover other related breaches that may have occurred or be occurring.

According to Mike Schiffman at Cisco,

“If a given IP address is known to be that of a spammer or a part of a botnet army it can be flagged in one of the ill repute databases … Since these databases are all keyed on IP address, NetFlow data can be correlated against them and subsequent malicious traffic patterns can be observed, blocked, or flagged for further action. This is NetFlow Correlation.“

The kind of data we can expect to find in reputation databases includes IP addresses known to be acting in some malicious manner, such as being seen by multiple global honeypots. Some have been identified as part of a well-known botnet such as Palevo or Zeus, while other IPs are known to have been distributing malware or Trojans. Many kinds of lists are useful to correlate, such as known ToR endpoints and relays, which have become particularly risky of late as a common means of introducing ransomware and should certainly not be seen conversing with any host inside a corporate, government or other sensitive environment.
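In its simplest form, the correlation described above is a set lookup of each flow's endpoints against the reputation list. The sketch below uses documentation-range placeholder addresses; a production system would need to scale this to hundreds of thousands of entries, refresh the list frequently, and retain every match for historical forensics:

```python
def correlate(flows, reputation_list):
    """Return flows whose source or destination IP is on the reputation list."""
    return [f for f in flows
            if f["src"] in reputation_list or f["dst"] in reputation_list]

# Placeholder "ToR exit" addresses drawn from documentation ranges.
tor_exits = {"198.51.100.44", "203.0.113.7"}
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes": 900},
    {"src": "10.0.0.6", "dst": "192.0.2.80", "bytes": 4_200},
]
hits = correlate(flows, tor_exits)
print(hits)  # only the conversation with the listed exit node
```

A Python `set` gives constant-time membership tests, which is what makes checking every flow against a very large blacklist feasible in real time.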

Using a tool like CySight’s advanced End-Point Threat Detection allows NetFlow data to be correlated against hundreds of thousands of IP addresses of questionable reputation including ToR exits and relays in real-time with comprehensive historical forensics that can be deployed in a massively parallel architecture.
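The correlation described above can be illustrated with a short sketch: load a reputation feed into a set and flag any flow whose endpoints appear in it. The record layout and feed contents below are illustrative assumptions, not CySight internals.

```python
# Sketch: correlate NetFlow records against an IP reputation blacklist.
# The flow record layout and feed contents are illustrative assumptions.

def load_blacklist(lines):
    """Build a fast lookup set from a feed of known-bad IPs, skipping comments."""
    return {line.strip() for line in lines if line.strip() and not line.startswith("#")}

def correlate(flows, blacklist):
    """Yield every flow whose source or destination is of ill repute."""
    for flow in flows:
        if flow["src"] in blacklist or flow["dst"] in blacklist:
            yield flow

blacklist = load_blacklist(["# demo feed", "203.0.113.9", "198.51.100.7"])
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 4096},  # talks to a listed host
    {"src": "10.0.0.6", "dst": "192.0.2.10", "bytes": 1200},
]
hits = list(correlate(flows, blacklist))
print(hits)  # only the first flow is flagged
```

A production system would refresh the feed frequently and persist every hit so that conversations with compromised hosts can be traced back later.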

As a trusted source of deep network insights built on big data analysis capabilities, Netflow provides NOCs with an end-to-end security and performance monitoring and management solution. For more information on Netflow as a performance and security solution for large-scale environments, download our free Guide to Understanding Netflow.

Cutting-edge and innovative technologies like CySight deliver the deep end-to-end network visibility and security context required to help impede harmful attacks quickly.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Microsoft Nobelium Hack

Solarwinds Hackers Strike Again

Another painful round of cyber-attacks has been carried out by what Microsoft discovered to be a Russian state-sponsored hacking group called Nobelium, this time attacking a Microsoft support agent's computer and exposing customers' subscription information.

The activity tracked by Microsoft led to Nobelium, the same group that executed the SolarWinds Orion hack in December 2020. The attack was first discovered when Microsoft itself detected information-stealing malware on one of its customer support agents' machines. Infiltration occurred using password spraying and brute-force attacks attempting to gain access to Microsoft accounts.

Microsoft said Nobelium had targeted over 150 organizations worldwide in the last week, including government agencies, think tanks, consultants, and nongovernmental organizations, reaching over 3,000 email accounts, mostly in the USA but also in at least 24 other countries. The event is described as an “active incident”, meaning the attack is very much live and more has yet to be discovered. Microsoft is attempting to notify all who are affected.

The attack was carried out through an email marketing account belonging to the U.S. Agency for International Development. Recipients received a phishing email that looked authentic but contained a link to a malicious file. Once the file was downloaded, the machine was compromised and a back door created, enabling the bad actor to steal data and infect other machines on the network.

In April this year, the Biden administration pointed the finger at the Russian Foreign Intelligence Service (SVR) as responsible for the SolarWinds attack, exposing the Nobelium group. It appears this exposure led the group to drop the stealth approach they had been using for months, and on May 25 they ran a “spear phishing” campaign exploiting a zero-day vulnerability.

Nobelium Phishing Attack

Staying in Control of your Network

IdeaData’s Marketing Manager, Tomare Curran, stated on the matter, “These kinds of threats can hide and go unnoticed for years until the botnet master decides to activate the malware. Therefore, it’s imperative to maintain flow metadata records of every transaction so that when a threat finally comes to light you can set Netflow Auditor’s HindSight Threat Analyzer to search back and help you find out if or when you were compromised and what else could have been impacted.”

NetFlow Auditor constantly keeps its eyes on your Network and provides total visibility to quickly identify and alert on who is doing What, Where, When, with Whom and for How Long right now or months ago. It baselines your network to discover unusual network behaviors and using machine learning and A.I. diagnostics will provide early warning on anomalous communications.

Cyber security experts at IdeaData do not believe the group will stop their operations due to being exposed. IdeaData is offering Netflow Auditor’s Integrated Cyber Threat Intelligence solution free for 60 days to allow companies to help cleanse their network from newly identified threats.

Have any questions?

Contact us at:  tomare.curran@netflowauditor.com

How to Improve Cyber Security with Advanced Netflow Network Forensics

Most organizations today deploy network security tools that are built to perform limited prevention – traditionally “blocking and tackling” at the edge of a network using a firewall or by installing security software on every system.

This is only one third of a security solution, and has become the least effective measure.

The growing complexity of the IT infrastructure is the major challenge faced by existing network security tools. The major forces impacting current network security tools are the rising level of sophistication of cybercrimes, growing compliance and regulatory mandates, expanding virtualization of servers and the constant need for visibility compounded by ever-increasing data volumes. Larger networks involve enormous amounts of data, into which the incident teams must have a high degree of visibility for analysis and reporting purposes.

An organization’s network and security teams are faced with increasing complexities, including network convergence, increased data and flow volumes, intensifying security threats, government compliance issues, rising costs and network performance demands.

With network visibility and traceability also top priorities, companies must look to security network forensics to gain insight and uncover issues. The speed with which an organization can identify, diagnose, analyze, and respond to an incident will limit the damage and lower the cost of recovery.

Analysts are better positioned to mitigate risk to the network and its data through security focused network forensics applied at the granular level. Only with sufficient granularity and historic visibility and tools that are able to machine learn from the network Big Data can the risk of an anomaly be properly diagnosed and mitigated.

Doing so helps staff identify breaches that occur in real-time, as well as Insider threats and data leaks that take place over a prolonged period. Insider threats are one of the most difficult to detect and are missed by most security tools.

Many network and security professionals assume they can simply analyze data captured using their standard security devices, such as firewalls and intrusion detection systems. However, they quickly discover the limitations: these devices are not designed to record and report on every transaction, and their lack of deep visibility, scalability and historic data retention makes old-fashioned network forensic reporting expensive and impractical.

NetFlow analytics software enables IT departments to accurately audit network data and host-level activity. It enhances network security and performance, making it easy to identify suspicious user behaviors and protect your entire infrastructure.

A well-designed NetFlow forensic tool should include powerful features that can allow for:

  • Micro-level data recording to assist in the identification of real-time breaches and data leaks;
  • Event notifications and alerts for network administrators when irregular traffic movements are detected;
  • Tools that highlight trends and baselines, so IT staff can provision services accordingly;
  • Tools that learn normal behavior, so network security staff can quickly detect and mitigate threats;
  • Capture of highly granular traffic over time to enable deep visibility across the entire network infrastructure;
  • 24/7 automation and flexible reporting processes that deliver usable business intelligence and security forensics, especially for analytics that take a long time to produce.

Forensic analysts require both high-level and detailed visibility through aggregation, division and drilldown algorithms such as:

  • Deviation / Outlier analysis
  • Bi-directional analysis
  • Cross section analysis
  • Top X/Y analysis
  • Dissemination analysis
  • Custom Group analysis
  • Baselining analysis
  • Percentile analysis
  • QoS analysis
  • Packet Size analysis
  • Count analysis
  • Latency and RTT analysis
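To make a couple of these concrete, here is a minimal sketch of Top-X and percentile (nearest-rank) analysis applied to per-host byte counts. The flow record layout is an illustrative assumption.

```python
# Sketch: Top-X and percentile analysis over per-host byte counts
# aggregated from flow records. Record layout is an assumption.
from collections import defaultdict
import math

flows = [
    {"src": "10.0.0.1", "bytes": 500}, {"src": "10.0.0.2", "bytes": 90_000},
    {"src": "10.0.0.1", "bytes": 700}, {"src": "10.0.0.3", "bytes": 4_000},
]

# Aggregate: total bytes sent per source host.
per_host = defaultdict(int)
for f in flows:
    per_host[f["src"]] += f["bytes"]

# Top-X analysis: the X heaviest talkers.
top2 = sorted(per_host.items(), key=lambda kv: kv[1], reverse=True)[:2]

# Percentile analysis (nearest-rank method): the 95th-percentile host volume.
volumes = sorted(per_host.values())
rank = math.ceil(0.95 * len(volumes)) - 1
p95 = volumes[rank]

print(top2, p95)
```

The same aggregate-then-rank pattern generalizes to the other drilldowns in the list, swapping the key (host, port, QoS class, packet size bucket) and the metric.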

Further, when integrated with a visual analytics process, these enable additional insights for the forensic professional when analyzing subsets of the flow data surrounding an event.

In some ways it needs to act as a log analyzer, security information and event management (SIEM) and a network behavior anomaly and threat detector all rolled into one.

The ultimate goal is to deploy a multi-faceted flow-analytics solution that complements your business by providing extreme visibility, eliminating network blind spots both in your physical infrastructure and in the cloud, automatically detecting and diagnosing anomalous traffic across your entire network, and improving your mean time to detect and repair.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

NetFlow for Advanced Threat Detection

Business networks are vital assets and require absolute protection against unauthorized access, malicious programs, and degradation of network performance. It is no longer enough to use only anti-virus applications.

By the time malware is detected and its signatures added to the antiviral definitions, access has been obtained and havoc wreaked, or the malware has buried itself inside the network and is harvesting data and passwords for later exploitation.

An article by Drew Robb in eSecurity Planet on September 3, 2015 (https://www.esecurityplanet.com/network-security/advanced-threat-detection-buying-guide-1.html) cited the Verizon 2015 Data Breach Investigations Report where 70 respondents reported over 80,000 security incidents which led to more than 2000 serious breaches in one year.

The report noted that phishing is commonly used to gain access; the malware then accumulates passwords and account numbers and learns the security defenses before launching an attack. A telling remark was made: “It is abundantly clear that traditional security solutions are increasingly ineffectual and that vendor assurances are often empty promises,” said Charles King, an analyst at Pund-IT. “Passive security practices like setting and maintaining defensive security perimeters simply don’t work against highly aggressive and adaptable threat sources, including criminal organizations and rogue states.”

So what can businesses do to protect themselves? How can they be proactive in addition to the passive perimeter defenses?

The very first line of defense is better education of users. In one test, an e-mail message was sent to the users, purportedly from the IT department, asking for their passwords in order to “upgrade security.” While 52 people asked the IT department if this was a real request, 110 mailed their passwords right back. In their attempts to be productive, over half of the recipients of phishing e-mails responded within an hour!

Another method of advanced threat protection is NetFlow Monitoring.

IT departments and managed service providers (MSPs) can use monitoring capabilities to detect, prevent, and report adverse effects on the network.

Traffic monitoring, for example, watches the flow of information and data traversing critical nodes and network links. Without using intrusive probes, this information helps decipher how applications are using the network and which ones are becoming bandwidth hogs. These are then investigated further to determine what is causing the problem and how best to manage the issue. Just adding more bandwidth is not the answer!

IT departments review this data to investigate which personnel are the power users of which applications, when the peak traffic times are and why, and similar information in addition to flagging and diving in-depth to review anomalies that indicate a potential problem.

If there are critical applications or services that clients rely on for key account revenue streams, IT can provide real-time monitoring and display of the health of the networks supporting those applications and services. It is this ability to observe, analyze, and report on network health and patterns of usage that enables the better, faster decision-making CIOs crave.

CySight excels at network Predictive AI Baselining analytics solutions. It scales to collect, analyze, and report on NetFlow datastreams of over one million flows per second. Its team of specialists has prepped, installed, and deployed over 1,000 CySight performance monitoring solutions, including at over 50 Fortune 1000 companies and some of the largest ISPs/telcos in the world. A global leader, recognized with awards for Security and Business Intelligence at the World Congress of IT, CySight is also welcomed by Cisco as a Technology Development Partner.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Balancing Granularity Against Network Security Forensics

With the pace at which the social, mobile, analytics and cloud (SMAC) stack is evolving, IT departments must quickly adapt their security monitoring and prevention strategies to match the ever-changing networking landscape. By the same token, network monitoring solution (NMS) developers must walk a tightrope of their own: providing the detail and visibility their users need, without a cost to network performance. Much of security forensics depends on the ability to drill down into both live and historic data to identify how intrusions and attacks occur. This leads to the question: what is the right balance between collecting enough data to gain the front foot in network security management, and ensuring performance isn't compromised in the process?

Effectively identifying trends will largely depend on the data you collect

Trend and pattern data tell Security Operations Center (SOC) staff much about their environments by allowing them to connect the dots in terms of how systems may have become compromised. However, collecting large volumes of historic data requires the capacity to house it, something that can quickly become problematic for IT departments. NetFlow data analysis acts as a powerful counterweight to the problem of processing and storing chunks of data, since it collects compressed header information that is far less resource-intensive than capturing entire packets or investigating entire device log files. Log files are also often hackers' first victims, deleted or corrupted as a means of disguising attacks or intrusions. With CySight's ability to collect vast quantities of uncompromised transaction data without exhausting device resources, SOCs are able to perform detailed analyses on flow information that can reveal security issues such as data leaks that occur over time. And given that NetFlow security monitoring can easily be configured on most devices, pervasive security monitoring becomes relatively easy to achieve even in large environments.

Netflow security monitoring can give SOCs real-time security metrics

Netflow, when retained at high granularity, can facilitate seamless detection of traffic anomalies as they occur and when coupled with smart network behavior anomaly detection (NBAD), can alert engineers when data traverses the wire in an abnormal way – allowing for both quick detection and containment of compromised devices or entire segments. Network intrusions are typically detected when data traverses the environment in an unusual way and compromised devices experience spikes in multiple network telemetry metrics. As malicious software attempts to siphon information from systems, the resultant increase in out-of-the-norm activity will trigger warnings that can bring SOC teams in the loop of what is happening. CySight employs machine learning that continuously compares multi-metric baselines against current network activity and quickly picks up on anomalies overlooked by other flow solutions, even before they constitute a system-wide threat. This type of behavioral analysis of network traffic places security teams on the front foot in the ongoing battle against malicious attacks on their systems.
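A drastically simplified, single-metric stand-in for this kind of baselining can be sketched as follows: compare each new sample against a trailing window and flag values many standard deviations above the norm. Real NBAD engines baseline many metrics and seasonal patterns simultaneously; the series and threshold here are illustrative.

```python
# Sketch: baseline-vs-current anomaly detection on a per-minute byte series.
# A simplified, single-metric stand-in for multi-metric NBAD baselining.
from statistics import mean, stdev

def anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value sits more than `threshold` std-devs
    above the trailing `window`-sample baseline."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and (series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Ten quiet minutes, then a sudden exfiltration-style spike:
traffic = [100, 104, 98, 101, 99, 103, 97, 102, 100, 101, 5000]
print(anomalies(traffic))  # the spike's index is flagged
```

Even this toy version shows why baselines beat static thresholds: the alarm level adapts to whatever "normal" looks like on each link.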

Network metrics are being generated on a big data scale

Few things can undermine a network’s performance and risk more than a monitoring solution that strains to provide anticipated visibility. However, considering the increasing complexity of distributed connected assets and the ways and speed in which people and IoT devices are being plugged into networks today, pervasive and detailed monitoring is absolutely crucial. Take the bring your own device (BYOD) phenomenon and the shift to the cloud, for example. Networking and security teams need visibility into where, when, and how mobile phones, tablets, smart watches, and IoT devices are going on and offline and how to better manage the flow of data to and from user devices. Mobile devices increasingly run their own versions of business applications and with BYOD cultures somewhat undermining IT’s ability to dictate the type of software allowed to run on personal devices, the need to monitor traffic flow from such devices – from both a security and a performance perspective – becomes clear.

General NetFlow performance analytics tools are capable of informing NOC teams about how large IP traffic flows between devices, with basic usage statistics at a device or segment level. However, when network metrics are generated on a big data scale, traffic anomalies that require SOC investigation get lost in the leaky-bucket sorting algorithms of basic tools. Detecting the real underlying reasons for traffic degradation, identifying risky communications such as ransomware, DDoS, slow DoS, peer-to-peer (P2P) and the dark web (ToR), and having complete historical visibility to track back undesirable applications all become absolutely critical, yet far less difficult, with CySight's ability to easily provide information on all of the traffic that traverses the environment.

NetFlow security monitoring evolves alongside technology organically

Thanks to NetFlow and the unique design and multi-metric approach that CySight has implemented, systems evolving at an increasing rate doesn't mean you need to re-invent your security apparatus every six months or so. CySight's ubiquity, reliability, and flexibility give NOC and SOC teams deep visibility minus the administrative overhead of getting it up and running and of collecting and benefiting from big flow data's deep insights. You can even fine-tune your monitoring to the granularity you need to keep your systems safe, secure, and predictable. The result is fewer network blind spots, which so often act as the Achilles' heel of modern security and network experts.

At the other end of the scale, NetFlow analyzers, in their varying feature sets, give NOCs some basic ability to collect, analyze and alert on top-N bandwidth metrics, which some engineers may still believe are the most pertinent to their needs. Once you've decided on the data you need today, whilst keeping an eye on what you'll need tomorrow, it's time to choose the collector that does the job best.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using that packet as the standard for the rest of the flow. It has two variants which are designed to allow for more flexibility when it comes to implementing NetFlow on a network.

NetFlow was originally developed by Cisco around 1990 as a packet switching technology for Cisco routers and implemented in IOS 11.x.

The concept was that instead of having to inspect each packet in a “flow”, the device need only to inspect the first packet and create a “NetFlow switching record” or alternatively named “route cache record”.

After that record was created, further packets in the same flow would not need to be inspected; they could simply be forwarded based on the determination made from the first packet. While this idea was forward-thinking, it had many drawbacks which made it unsuitable for larger internet backbone routers.

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about which IP addresses or application ports were “inside” the traffic was to deploy packet sniffing systems, which would sit inline (or connected to SPAN/mirror ports) and “sniff” the traffic. This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
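At its simplest, a collector listens on UDP and decodes fixed-format export packets. The sketch below decodes only the 24-byte NetFlow v5 header; a production collector would bind a UDP socket and then walk the `count` 48-byte flow records that follow this header.

```python
# Sketch: decode the fixed 24-byte NetFlow v5 export packet header.
import struct

# version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram):
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 packet (version={version})")
    return {"count": count, "unix_secs": unix_secs, "flow_sequence": flow_sequence}

# A synthetic datagram header claiming 30 flow records follow:
pkt = V5_HEADER.pack(5, 30, 123456, 1_700_000_000, 0, 42, 0, 0, 0)
print(parse_v5_header(pkt))
```

Template-based variants (v9, IPFIX) replace this fixed layout with exporter-advertised templates, which is what lets them carry vendor-enriched fields.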

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Seven Reasons To Analyze Network Traffic With NetFlow

NetFlow allows you to keep an eye on traffic and transactions that occur on your network. NetFlow can detect unusual traffic, a request for a malicious destination or a download of a larger file. NetFlow analysis helps you see what users are doing, gives you an idea of how your bandwidth is used and can help you improve your network besides protecting you from a number of attacks.

There are many reasons to analyze network traffic with NetFlow, including making your system more efficient as well as keeping it safe. Here are some of the reasons behind many organizations' adoption of NetFlow analysis:

  • Analyze all your network traffic. NetFlow allows you to keep track of all the connections occurring on your network, including the ones hidden by a rootkit. You can review all the ports and external hosts an IP address connected to within a specific period of time. You can also collect data to get an overview of how your network is used.

  • Track bandwidth use. You can use NetFlow to track bandwidth use and see reports on average usage over time. This can help you determine when spikes are likely to occur so that you can plan accordingly. Tracking bandwidth allows you to better understand traffic patterns, and this information can be used to identify any unusual traffic patterns. You can also easily identify unusual surges caused by a user downloading a large file or by a DDoS attack.

  • Keep your network safe from DDoS attacks. These attacks target your network by overloading your servers with more traffic than they can handle. NetFlow can detect this type of unusual surge in traffic as well as identify the botnet that is controlling the attack and the infected computers following the botnet’s order and sending traffic to your network. You can easily block the botnet and the network of infected computers to prevent future attacks besides stopping the attack in progress.

  • Protect your network from malware. Even the safest network can still be exposed to malware via users connecting from home or via people bringing their mobile device to work. A bot present on a home computer or on a Smartphone could access your network but NetFlow will detect this type of abnormal traffic and with auto-mitigation tools automatically block it.
  • Optimize your cloud. By tracking bandwidth use, NetFlow can show you which applications slow down your cloud and give you an overview of how your cloud is used. You can also track performances to optimize your cloud and make sure your cloud service provider is offering a cloud solution that corresponds to what they advertised.
  • Monitor users. Everyone brings their own smartphone to work nowadays and might use it for purposes other than work. Company data may also be accessible to insiders who have legitimate access but an inappropriate agenda, downloading and sharing sensitive data with outside sources. You can keep track of how much bandwidth is used for data leakage or personal activities, such as using Facebook during work hours.
  • Data Retention Compliance. NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.
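As an illustration of the DDoS point above, a crude flow-based indicator is fan-in: counting how many distinct sources hit each destination in an interval. The threshold and record layout are illustrative assumptions.

```python
# Sketch: a crude DDoS indicator from flow data. Count distinct sources
# hitting each destination per interval and flag abnormal fan-in.
from collections import defaultdict

def fan_in(flows):
    """Map each destination to its number of distinct sources."""
    sources = defaultdict(set)
    for f in flows:
        sources[f["dst"]].add(f["src"])
    return {dst: len(srcs) for dst, srcs in sources.items()}

def suspects(flows, threshold=100):
    """Destinations whose fan-in exceeds the (illustrative) threshold."""
    return [dst for dst, n in fan_in(flows).items() if n >= threshold]

# 150 distinct sources slamming one web server in a single interval:
flood = [{"src": f"198.51.100.{i}", "dst": "10.0.0.80"} for i in range(150)]
normal = [{"src": "10.0.0.9", "dst": "10.0.0.25"}]
print(suspects(flood + normal))
```

Real detection would baseline fan-in per destination rather than use a fixed threshold, but the flow fields needed are exactly those NetFlow already exports.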

NetFlow is an easy way to monitor your network and provides you with several advantages, including making your network safer and collecting the data you need to optimize it. Having access to a comprehensive overview of your network from a single pane of glass makes monitoring your network easy and enables you to check what is going on with your network with a simple glance.

CySight takes the extra step to make life far easier for network and security professionals, with smart alerts, actionable network intelligence, scalability, and automated diagnostics and mitigation in a complete technology package.

CySight can provide you with the right tools to analyze traffic, monitor your network, protect it and optimize it. Contact us to learn more about NetFlow and how you can get the most out of this amazing tool.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Deploying NetFlow as a Countermeasure to Threats like CNB

Few would debate legendary martial artist Chuck Norris' ability to take out any opponent with a quick combination of lightning-fast punches and kicks. Norris, after all, is legendary for his showdowns with the best of fighters and for being the last man standing in some of the most brutal and memorable fight scenes. It's no surprise, then, that hackers named one of their most dubious botnet attacks after “tough guy” Norris, and it wreaked havoc on internet routers worldwide. The “Chuck Norris” botnet, or CNB, was strategically designed to target poorly configured Linux MIPS systems and network devices such as routers, CCTV cameras, switches and WiFi modems. In a study on CNB, Masaryk University in the Czech Republic examined the attack's inner workings and demonstrated how NetFlow could be employed as a countermeasure to actively detect and incapacitate the threat.

Let's look at what gave CNB its ability to infiltrate key networking assets and how, through flow-based monitoring, proactive detection made it possible to thwart this threat and others like it.

What made the Chuck Norris attack so potentially devastating?

What made the CNB attack so menacing was its ability to access all network traffic by infiltrating routers, switches and other networking hardware. This allowed it to go undetected for long periods, during which it was capable of spreading through networks fairly quickly. As botnet attacks “settle in”, they start issuing commands and taking control of compromised devices, known as “bots”, which act as launch pads for denial-of-service (DoS) attacks, illegal SMTP relays, theft of information, and more.

Deploying Netflow as a countermeasure to threats like CNB

In the case of the CNB attack, NetFlow collection data revealed how it infiltrated devices via TELNET and SSH ports, DNS spoofing and web browser vulnerabilities, enabling security teams to track its distribution across servers and prevent further propagation. NetFlow's deep visibility into network traffic gave security teams the forensics they needed to effectively detect and incapacitate CNB.

Analysts are better positioned to mitigate risk to the network and its data through flow-based security forensics applied at the granular level coupled with dynamic behavioral and reputation feeds. Only with sufficient granularity and historic visibility can the risk of an anomaly be better diagnosed and mitigated. Doing so helps staff identify breaches that occur in real-time, as well as data leaks that take place over a prolonged period.

Flow-based monitoring solutions can collect vast amounts of security, performance and other data directly from networking infrastructure, giving Network Operations Centers (NOCs) a more comprehensive view of the environment and of events as they occur. In addition, certain flow collectors are themselves resilient against cyber attacks such as DDoS. NetFlow technology isn't only lightweight in terms of resource demands on switches and routers; it is also highly fault-tolerant and limits exposure to flow floods through collection tuning, self-maintaining collection-tuning rules and other self-healing capabilities.

As a trusted source of deep network insights built on big data analysis capabilities, Netflow provides NOCs with an end-to-end security and performance monitoring and management solution. For more information on Netflow as a performance and security solution for large-scale environments, download our free Guide to Understanding Netflow.

Cutting-edge and innovative technologies like CySight deliver the deep end-to-end network visibility and security context required to help impede harmful attacks quickly.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Why NetFlow is Perfect for Forensics and Compliance

NetFlow forensic investigations can produce report evidence usable in court, as flow data describes the movement of traffic even without necessarily describing its contents.

It's therefore crucial that the NetFlow solution deployed can scale in archival to preserve the full context of all the flow data, not just the top of the data or the data relating to one tool's idea of a security event.

The issue with forensics and flow data is that in order to achieve full compliance it's necessary to retain a data warehouse that can grow to a huge number of flow records.

These records, retained in the data warehouse, may not seem important at the time of collection but become critical for uncovering behavior that may have been occurring over a long period and for ascertaining the damage caused by the traffic. I am speaking broadly here, as there are so many different instances where the data suddenly becomes critically important that it is hard to do the topic justice with one or two case studies. Remember: you don't know what you don't know, but when you discover what you didn't know, you need the ability to quantify the loss or the risk of loss.

How much flow data is enough to retain to satisfy compliance?

From our experience it is usually between 3 and 24 months, depending on the size of the environment and the legal compliance requirements relating to data protection or data retention. For most corporates we recommend 12 months as a best practice. Data retention rules for ISPs in some countries require the ability to analyze traffic for up to 2 years. Fortunately, disk today is cheap and flow is cost-effective to deploy across the organization. There is more information about this in our Performance and Security eBook.
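To see why multi-month retention is feasible, a back-of-envelope sizing calculation can be sketched as follows. The flow rate and per-record size below are illustrative assumptions, not measurements from any particular product or vendor.

```python
# Back-of-envelope sizing for a 12-month NetFlow archive.
# All figures are illustrative assumptions, not vendor specifications.

def archive_size_bytes(flows_per_second, bytes_per_record, retention_days):
    """Estimate raw storage needed for a flow archive."""
    seconds = retention_days * 24 * 60 * 60
    return flows_per_second * bytes_per_record * seconds

# Assume a mid-size network averaging 5,000 flows/sec and
# ~150 bytes per stored record (fields plus indexing overhead).
size = archive_size_bytes(5_000, 150, 365)
print(f"{size / 1e12:.1f} TB")  # roughly 23.7 TB before compression
```

Even at thousands of flows per second, a year of records lands in the tens of terabytes before compression, which is modest by data-warehouse standards.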

Once a security issue has been identified, the flow database can be used to quantify exactly which IPs accessed a system and the times the system was accessed, as well as to quantify the impact on dependent systems that the host conversed with, directly or indirectly, on the network before and after the issue.
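The kind of query described above can be sketched in a few lines. The record layout and sample data here are hypothetical; a real deployment would run equivalent queries against the flow data warehouse rather than an in-memory list.

```python
from datetime import datetime

# Minimal sketch: each flow record is (timestamp, src_ip, dst_ip, bytes).
# Field names and sample data are assumptions for illustration only.
flows = [
    (datetime(2024, 1, 10, 9, 0),  "10.0.0.5",  "10.0.0.50", 1200),
    (datetime(2024, 1, 10, 9, 5),  "10.0.0.7",  "10.0.0.50", 900),
    (datetime(2024, 1, 10, 9, 10), "10.0.0.50", "10.0.0.99", 50000),
]

def who_accessed(flows, victim, start, end):
    """IPs that initiated connections to the victim in the window."""
    return sorted({src for ts, src, dst, _ in flows
                   if dst == victim and start <= ts <= end})

def talked_to_after(flows, victim, after):
    """Hosts the victim contacted after the incident (possible spread)."""
    return sorted({dst for ts, src, dst, _ in flows
                   if src == victim and ts > after})

print(who_accessed(flows, "10.0.0.50",
                   datetime(2024, 1, 10, 8, 0), datetime(2024, 1, 10, 10, 0)))
print(talked_to_after(flows, "10.0.0.50", datetime(2024, 1, 10, 9, 6)))
```

The second query is what lets an investigator follow the blast radius: once a compromised host is known, its outbound conversations after the incident identify the dependent systems to examine next.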

Trawling through a huge collection of flow data can be a lengthy task, so it is necessary to be able to run automated Predictive AI Baselining analytics in parallel to gauge the damage from a long-term inside threat that could have been dribbling out your intellectual property slowly over a few months.


3 Ways Anomaly Detection Enhances Network Monitoring

With the increasing abstraction of IT services beyond the traditional server room, computing environments have evolved to be more efficient and also far more complex. Virtualization, mobile device technology, hosted infrastructure, Internet ubiquity and a host of other technologies are redefining the IT landscape.

From a cybersecurity standpoint, the question is how best to manage the growing complexity of environments and the changes in network behavior that come with every introduction of new technology.

In this blog, we’ll take a look at how anomaly detection-based systems are adding an invaluable weapon to Security Analysts’ arsenal in the battle against known – and unknown – security risks that threaten the stability of today’s complex enterprise environments.

Put your network traffic behavior into perspective

By continually analyzing traffic patterns at various intersections and time frames, performance and security baselines can be established, against which potential malicious activity is monitored and managed. But with large swathes of data traversing the average enterprise environment at any given moment, detecting abnormal network behavior can be difficult.

Through filtering techniques and algorithms based on live and historical data analysis, anomaly detection systems are capable of detecting even the most subtly crafted malicious software that may pose as normal network behavior. Also, anomaly-based systems employ machine-learning capabilities to learn about new traffic as it is introduced and to provide greater context on how data traverses the wire, increasing their ability to identify security threats as they are introduced.
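The underlying idea can be sketched very simply: establish a statistical baseline from history and flag observations that deviate from it. The interval data and three-standard-deviation threshold below are illustrative assumptions, not how any particular product is configured.

```python
import statistics

# Toy baseline check: flag intervals whose byte counts deviate more than
# k standard deviations from the historical mean. A real system would
# baseline per interface, per application, per hour-of-day, and so on.
def find_anomalies(history, recent, k=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [(i, v) for i, v in enumerate(recent)
            if abs(v - mean) > k * stdev]

history = [100, 110, 95, 105, 98, 102, 107, 99]   # bytes/interval, assumed
recent  = [103, 500, 101]                          # 500 is a spike
print(find_anomalies(history, recent))
```

Production anomaly engines replace the fixed threshold with learned, seasonal baselines, but the principle of comparing live traffic against modeled history is the same.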

NetFlow is a popular technology used in the collection of network traffic for building accurate performance and cybersecurity baselines with which to distinguish normal network activity patterns from potentially alarming network behavior.

Anomaly detection places Security Analysts on the front foot

An anomaly is defined as an action or event that is outside of the norm. But when a definition of what is normal is absent, loopholes can easily be exploited. This is often the case with signature-based detection systems that rely on a database of pre-determined virus signatures that are based on known threats. In the event of a new and yet unknown security threat, signature-based systems are only as effective as their ability to respond to, analyze and neutralize such new threats.

Since signatures do work well against known attacks, signature-based systems are by no means useless for defending your network. They do, however, lack the flexibility of anomaly-based systems in that they are incapable of detecting new threats. This is one of the reasons signature-based systems are typically complemented by some iteration of a flow-based anomaly detection system.

Anomaly based systems are designed to grow alongside your network

The chief strength of anomaly detection systems is that they allow Network Operations Centers (NOCs) to adapt their security apparatus according to the demands of the day. With threats growing in number and sophistication, detection systems that can discover and learn about new threats and provide preventative methodologies are the ideal tools with which to combat the cybersecurity threats of tomorrow. NetFlow anomaly detection with automated diagnostics does exactly this by applying machine learning techniques to network threat detection, automating much of the detection aspect of security management while allowing Security Analysts to focus on the prevention aspect in their ongoing endeavors to secure their information and technological investments.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Identifying ToR threats without De-Anonymizing

Part 3 in our series on How to counter-punch botnets, viruses, ToR and more with Netflow focuses on ToR threats to the enterprise.

ToR (aka onion routing) and anonymized P2P relay services such as Freenet are where we can expect to see many more attacks, as well as malevolent actors who are out to deny your service or steal your valuable data. It's useful to recognize that flow Predictive AI Baselining analytics provides the best and cheapest means of de-anonymizing or profiling this traffic.

“The biggest threat to the Tor network, which exists by design, is its vulnerability to traffic confirmation or correlation attacks. This means that if an attacker gains control over many entry and exit relays, they can perform statistical traffic analysis to determine which users visited which websites.” (source)

In a paper entitled “On the Effectiveness of Traffic Analysis Against Anonymity Networks Using Flow Records”, Sambuddho Chakravarty, Marco V. Barbera, Georgios Portokalidis, Michalis Polychronakis, and Angelos D. Keromytis point out that, in the lab, “81 Percent of Tor Users Can Be Hacked with Traffic Analysis Attack”.

It continues to be a cat-and-mouse game that requires both new, innovative approaches to finding ToR weaknesses and correlation attacks to identify routing paths. Doing this in real life is becoming much simpler, but the real challenge is that it requires the cooperation and coordination of business, ISPs and governments. The deployment of cheap, easy-to-deploy micro-taps that can act as both a ToR relay and a flow exporter concurrently, combined with a NetFlow toolset that can scale hierarchically to analyze flow data with path analysis at each point in parallel across a multitude of ToR relays, can make this task easy and cost-effective.

So what can we do about ToR today?

Even without de-anonymizing ToR traffic, there is a lot of intelligence that can be gained simply by analyzing ToR exit and relay behavior. Using a flow tool that can change perspectives between flows, packets, bytes, counts or TCP flag counts allows you to qualify whether a ToR node is being used to download masses of data or to trickle data out.

Patterns of data can be very telling as to the nature of a transfer and, used in conjunction with other information, become a useful indicator of risk. As for supposedly secured networks, I can't think of any instance where ToR/onion routing, or for that matter any external VPN or proxy service, needs to be used from within what is supposed to be a locked environment. Once ToR traffic has been identified communicating in a sensitive environment, it is essential to immediately investigate and stop the IP addresses engaging in this suspicious behavior.
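Changing perspective between flow counts and byte counts, as described above, can be sketched with a simple classifier. The thresholds and sample figures here are illustrative assumptions, not product defaults.

```python
# Sketch: classify a host's transfer pattern by comparing flow count to
# total bytes. Thresholds are illustrative assumptions only.
def transfer_profile(records):
    """records: list of (flows, bytes) samples for one host."""
    total_flows = sum(f for f, _ in records)
    total_bytes = sum(b for _, b in records)
    avg = total_bytes / total_flows if total_flows else 0
    if avg > 1_000_000:
        return "bulk download"                   # few flows, huge payloads
    if total_flows > 1000 and avg < 2_000:
        return "possible trickle exfiltration"   # many small flows
    return "unremarkable"

print(transfer_profile([(2000, 1_500_000)]))  # many tiny flows per byte
```

A bulk download and a slow trickle can move the same total bytes; it is the ratio of flows to bytes over time that separates the two behaviors.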

Using a tool like CySight's advanced End-Point Threat Detection allows NetFlow data to be correlated in real-time against hundreds of thousands of IP addresses of questionable reputation, including ToR exits and relays, with comprehensive historical forensics that can be deployed in a massively parallel architecture.
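At its core, this correlation is a set-membership test of flow endpoints against a reputation feed. A minimal sketch follows; the node list uses documentation-range IPs as placeholders, whereas real feeds contain hundreds of thousands of entries and are refreshed continuously.

```python
# Sketch of reputation correlation: match flow endpoints against a feed
# of known ToR exit/relay IPs. Feed contents here are placeholders.
tor_nodes = {"203.0.113.9", "198.51.100.44"}  # documentation-range IPs

flows = [
    ("10.0.0.12", "203.0.113.9"),
    ("10.0.0.15", "192.0.2.80"),
    ("198.51.100.44", "10.0.0.12"),
]

hits = [(src, dst) for src, dst in flows
        if src in tor_nodes or dst in tor_nodes]
print(hits)  # flows touching a known ToR node, in either direction
```

Because the check runs in either direction, it catches both internal hosts reaching out to ToR and ToR relays initiating connections inward.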


How to counter-punch botnets, viruses, ToR and more with Netflow (Pt. 2)

Data Retention Compliance

End-Point Profiling

Hosts that communicate with more than one known threat type should be designated high risk, and repeated threat breaches involving those hosts or dependent hosts can mark them as repeat offenders, providing an early warning system for a breach or an attack.

It would be negligent of me not to mention that the same flow-based End-Point threat detection techniques can be used as part of Data Retention compliance. In my opinion, this approach enables better individual privacy: rather than monitoring everyone, it focuses on profiling known bad end-points and qualifying visitors to end-points that are used in illicit P2P swap sessions or that provide access to specific kinds of subversive or dangerous sites known to have hosted such traffic in the past.

Extreme examples of end-point profiling could include identifying a host that frequently visits known jihadist websites, or pedophiles using P2P to download from peers that have been identified, by means of active agents, as carrying child abuse material. An individual connection could be considered a coincidence, but multiple visits to multiple end-points of a categorized suspicious nature can be shown to be more than mere coincidence and provide cause for investigation.

As with DDoS attack profiles, there may be a prolific number of end-points involved and an individual conversation may be difficult to spot, but analysis of the IPs involved in multiple transactions, based on the category of the end-point, will allow you to uncover the “needles in the haystack” and gather sufficient evidence.

Profiling Bad traffic

End-Point threat detection on its own is insufficient for detecting threats, and we can't depend on blacklists when a threat morphs faster than a reputation list can be updated. It is therefore critical to concurrently analyze traffic using a flow-behavior anomaly detection engine.

This approach should be able to learn the baselines of your network traffic, and should have the flexibility to baseline any internal hosts that your risk management teams deem specifically important or related, such as a specific group of servers or high-risk interfaces, enabling a means to quantify what is normal, identify baseline breaches and perform impact analysis.

This is where big-data machine learning comes into play: fully automating the forensics process of analyzing a baseline breach by automating baselines, automatically running diagnostics and serving up the Predictive AI Baselining analytics needed to quickly identify the IPs that are impacting services, providing extreme visibility and, if desired, mitigation.

Automated diagnostics enable security resources to be focused on the critical issues while machine learning processes continue to quantify the KPIs of ongoing issues, quickly bringing them to the foreground while taking into account known blacklists, whitelists and repeat offenders.



Two Ways Networks Are Transformed By NetFlow

According to an article in techtarget.com, “Your routers and switches can yield a mother lode of information about your network–if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs produced by your network is a lot like mining for gold: punching random holes to look for a few nuggets of information isn't very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by the NetFlow traffic reporting protocol yields specific information that you can easily sort, view and analyze into what you want to use or need.

In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these “badly behaving” networks by investigating the data that is being generated by the routers, switches and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest internet threat, those things often distract IT pros from the real, every-day threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of many different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols to respond to unpredicted failures and misconfigurations is a workable approach, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices monitoring functions and gathering data, retrieving and utilizing the NetFlow information makes tracing and repairing misconfigurations possible, easier and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow, and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, “what's happening right now” view that other systems cannot provide. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provides rapid detection of breaches that other technologies often miss.
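One classic behavioral signal from flow data, in line with the worm-outbreak quote above, is fan-out: an infected host contacts far more distinct destinations per interval than normal. A minimal sketch, with an assumed threshold and synthetic flows:

```python
from collections import defaultdict

# Behavioral sketch: a worm-infected or scanning host typically contacts
# far more distinct destinations per interval than a normal host.
# The fan-out limit is an assumed, illustrative threshold.
def scanning_hosts(flows, fanout_limit=100):
    dests = defaultdict(set)
    for src, dst in flows:
        dests[src].add(dst)
    return sorted(s for s, d in dests.items() if len(d) > fanout_limit)

# A host probing 150 distinct addresses trips the limit; a normal host
# talking to one server does not.
flows = [("10.0.0.66", f"192.0.2.{i}") for i in range(150)]
flows += [("10.0.0.5", "192.0.2.1")]
print(scanning_hosts(flows))
```

Because this looks at the shape of the traffic rather than its content, it works even when, as the quoted paper notes, the attack's precise characteristics are unknown beforehand.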


How Traffic Accounting Keeps You One Step Ahead Of The Competition

IT has steadily evolved from a service and operational delivery mechanism to a strategic business investment. Suffice it to say that the business world and technology have become so intertwined that it’s unsurprising many leading companies within their respective industries attribute their success largely to their adoptive stance toward innovation.

Network Managers know that much of their company’s ability to outmaneuver the competition depends to a large extent on IT Ops’ ability to deliver world-class services. This brings traffic accounting into the conversation, since a realistic and measured view of your current and future traffic flows is central to building an environment in which all the facets involved in its growth, stability and performance are continually addressed.

In this blog, we’ll take a look at how traffic accounting places your network operations center (NOC) team on the front-foot in their objective to optimize the flow of your business’ most precious cargo – its data.

All roads lead to performance baselining 

Performance baselines lay the foundation for network-wide traffic accounting against predetermined environment thresholds. They also aid IT Ops teams in planning for network growth and expansion undertakings. Baseline information typically contains statistics on network utilization, traffic components, conversation and address statistics, packet information and key device metrics.

It serves as your network's barometer, informing you when anomalies such as excessive bandwidth consumption and other causes of bottlenecks occur. For example, root causes of performance issues can easily creep into an environment unnoticed, such as a recent update to a business-critical application that causes significant spikes in network utilization. Armed with a comprehensive set of baseline statistics and data that allow Network Performance and Security Specialists to measure, compare and analyze network metrics, root causes such as these can be identified with elevated efficiency.
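Baseline statistics of the kind described can be as simple as a mean and a high percentile per metric. A minimal sketch, where the utilization samples and the 80% alert line are illustrative assumptions:

```python
# Sketch: derive simple baseline statistics (mean and ~95th percentile)
# for interface utilization samples. Sample data and the 80% alert
# threshold are illustrative assumptions.
def baseline(samples):
    ordered = sorted(samples)
    idx = min(int(0.95 * len(ordered)), len(ordered) - 1)
    return {"mean": sum(ordered) / len(ordered), "p95": ordered[idx]}

util = [32, 35, 31, 40, 38, 33, 36, 90, 34, 37]  # % utilization samples
b = baseline(util)
print(b["p95"] > 80)  # the update-driven spike stands out above baseline
```

Comparing live readings against stored statistics like these is what turns raw utilization numbers into the "barometer" the text describes.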

In broader applications, baselining gives Network Engineers a high-level view of their environments, thereby allowing them to configure Quality of Service (QoS) parameters, plan for upgrades and expansions, detect and monitor trends, perform peering analysis and carry out a bevy of other functions.

Traffic accounting brings your future network into focus

With new-generation technologies such as the cloud, resource virtualization, as-a-service platforms and mobility revolutionizing the networks of yesteryear, capacity planning has taken on a new level of significance. Network monitoring systems (NMS) need to meet the demands of the new, complex, hybrid systems that are the order of the day. Thankfully, technologies such as NetFlow have evolved steadily over the years to address the monitoring demands of modern networks. NetFlow accounting is a reliable way to peer through the wire and gain deeper insight into the traffic that traverses your environment. Many Network Engineers and Security Specialists will agree that their understanding of their environments hinges on the level of insight they glean from their monitoring solutions.

This makes NetFlow an ideal traffic accounting medium, since it easily collects and exports data from virtually any connected device for analysis by a tool such as CySight. The technology's standing in the industry has made it the “go-to” solution for curating detailed, insightful and actionable metrics that move IT organizations from a reactive to a proactive stance toward network optimization.

Traffic accounting’s influence on business productivity and performance

As organizations become increasingly technology-centric in their business strategies, their reliance on networks that consistently perform at peak will increase accordingly. This places new pressure on Network Performance and Security Teams to conduct iterative performance and capacity testing to contextualize their environment's ability to perform when it matters most. NetFlow's ability to provide contextual insights based on live and historic data means Network Operations Centers (NOCs) are able to react to immediate performance hindrances and also predict, with a fair level of accuracy, what the challenges of tomorrow may hold. And that is worth gold in the context of an ever-changing and expanding networking landscape.


Integrated Cyber Network Intelligence: Your Network has been infiltrated. How do you know where and what else is impacted?

Why would you need Granular Network Intelligence?

“Advanced targeted attacks are set to render prevention-centric security strategies obsolete and that information must become the focal point for our information security strategies.” (Gartner)

In this webinar we take a look at the internal and external threat networks pervasive in today's enterprise and explore why organizations need granular network intelligence.

Webinar Transcription:

I’m one of the senior engineers here with CySight. I’ll be taking you through the webinar today. It should take about 30 to 40 minutes, I would say and then we will get to some questions towards the end. So let’s get started.

So the first big question here is, “Why would you need something like this? Why would you need Granular Network Intelligence?” And the answer, if not obvious already, is that, really, in today’s connected world, every incident response includes a communications component. What we mean by that is in a managed environment, whether it’s traditional network management or security management, anytime that there’s an alert or some sort of incident that needs to be responded to, a part of that response is always going to be communications, who’s talking to who, what did they do, how much bandwidth did they use, who did they talk to?

And in a security particular environment, we need to be looking at things like whether external threats or internal threats, was there a data breach, can I look at the historical behavior or patterns, can I put this traffic into context as per the sort of baseline of that traffic? So that insight into how systems have communicated is critical.

Just some background industry information. According to Gartner, targeted attacks are set to render prevention-centric security strategies obsolete by 2020. Basically, what that means is there's going to be a shift. They believe there's going to be a shift to information- and end-user-centric security focused on an infrastructure's end-points and away from the sort of blocking and tackling of firewalls. They believe there'll be three big trends. The first is continuous compromise, meaning an increase in the level of advanced, targeted attacks. It's not going to stop. You're never going to feel safe that someone won't be potentially trying to attack you.

And most of those attacks will become financially motivated attacks, attempts to steal information and attempts to gather credit card data, if you have that, intellectual property, ransomware-type attacks. So this is not necessarily, “Hey, I’m just going to try and bring down your website or something,” in a traditional world where maybe people are playing around a little bit. This is more organized attacks specifically designed to either elicit a ransom or a reward or just steal information that could be turned into money out in a black market and it’s going to be more and more difficult for IT to have control over those end-user’s devices.

Again, very few organizations just have people sitting at their desks with desktop computers anymore. Everybody’s got laptops. Everybody’s got a phone or other tablet that’s moving around. People work from home. They work from the road. They’re connecting in to network resources from anywhere in the world at any time and it becomes more and more challenging for IT to sort of control those pathways of communications. So if you can’t control it, then you have to certainly be able to monitor it and react to it and the reaction is really in three major ways; determining the origin of the attack, the nature of the attack, and the damage incurred.

So we’re certainly assuming that there are going to be attacks, and we need to know where they’re coming from, what they’re trying to do, and have they been able to get there? You know, have we caught it in time or has something already been infected or has information been taken away from the network and that really leads us into this little graphic that we have about not being in denial. Understanding that, unfortunately, many people, in terms of their real visibility into the network, are somewhere in the blind or limited-type area. They don’t know what they don’t know, they think they should know but they don’t know, and etc.

But where they really need to be is at, “There’s nothing they don’t know.” And they need tools to be able to move them from wherever they are into this upper left-hand quadrant and certainly, that’s what our product is designed to do. So just kind of looking at the entire landscape of information flow from outside and inside and really understanding that there are new kinds of attacks, crawlers, botnets, ransomware, ToR, DoS and DDoS attacks that have been around for a while.

Your network may be used to download or host illicit material, leak intellectual property, be part of an attack, you know, something that’s command and controlled from somewhere else and your internal assets have become zombies and are being controlled by outside. There are lots of different threats. They’re all coming at you from all over the place. They’re all trying to get inside your network to do bad things and those attacks or that communication needs to be tracked.

Gartner also believes that 60% of enterprise security budgets will be allocated for rapid detection and response by 2020, up from less than 10% just a few years ago. What they believe is that too much of the spending has gone into prevention and not enough has gone into monitoring and response. So the prevention is that traditional firewalling, intrusion detection or intrusion prevention, things like that, which certainly is important. I’m not saying that those things aren’t useful or needed. But what we believe and what other industry analysts certainly believe is that that’s not enough, basically. There needs to be more than the simple sort of “Put up a wall around it and no one will be able to get in” kind of situation. If that were the case, then there would be no incidents anywhere because everybody’s got a firewall; large companies, small companies. Everybody’s got that today, and yet, you certainly don’t go more than a couple of days without hearing about new hacks, new incidents.

Here in the United States, we just came through an election where they’re still talking about people from other countries hacking into one party or another’s servers to try and change the election results. You know, on the enterprise side, there are lots and lots of businesses. Yahoo recently in the last couple of months certainly had a major attack that they had to come clean about it and of course both of those organizations, certainly Yahoo, you know, they’re an IT system. They have those standard intrusion prevention and firewall-type systems, but obviously, they aren’t enough.

So when you are breached, you need to be able to look and see what happened, “What can I still identify, what can I still control, and how do I get visibility as to what happened.” So for us, we believe that the information about the communication is the most important focal point for a security strategy and we can look at a few different ways to do that without a signature-based mechanism. So there’s ways to look at normal traffic and be able to very rapidly identify deviation from normal traffic. There’s ways to find outliers and repeat offenders. There’s ways to find nefarious traffic by correlating real-time threat feeds with current flows and we’re going to be talking about all of these today so that a security team can identify what was targeted, what was potentially compromised, what information may have left the building, so to speak.

There’s a lot of challenges faced by existing firewalls, SIEM, and loosely-coupled toolsets. The level of sophistication, it’s going up and up again. It’s becoming more organized. It’s an international crime syndicate with very, very intelligent people using these tactics to try and gain money. As we’ve talked about, blocking attack, laying end-point solutions are just not enough anymore and of course, there’s a huge cost in trying to deploy, trying to maintain multiple solutions.

So being able to try and have some tools that aren’t incredibly expensive, that do give you valuable information really, can become the best way to go. If you look at, say, what we’re calling sensors; packet captures, DPI-type systems. They, certainly, can do quite a lot, but they’re incredibly expensive to deploy across a large organization. If you’re trying to do packet capture, it’s very, very prohibitive. You can get a lot of detail, but trying to put those sensors everywhere is just… unless you’ve got an unlimited budget, and very few people do, that becomes a really difficult proposition to swallow.

But that doesn’t mean NetFlow can’t still use that kind of information. What we have found and what’s really been a major trend over the last couple of years is that existing vendors, on their devices, Check Point, Cisco, Palo Alto, packet brokers like Ixia, or all of the different people that you see up here, and more and more all the time, are actually adding that DPI information into their flow data. So it’s not separate from flow data. It’s these devices that have the packets going through them that can look at them all the way to layer seven and then include that information in the NetFlow export out to a product like ours that can collect it and display that.

So you can look into payload and classify according to payload content identifying traffic on port 80 or what have you, that you can connect the dots between inside and outside when there’s NAT. To be able to read the URLs and quickly analyze where they’re going and what they’re being used for. Getting specialized information like MAC address information or, if it’s a firewall, getting denial information or AAA information, if it’s a wireless LAN controller, getting SSID information, and other kinds of things that can be very useful to track down where people were talking.

So different types of systems are adding different kinds of information to the exports, but all of them, together, really effectively give you that same capability as if you had those sniffing products all over the place or packet capture products all over the place. But you can do it right in the devices, right from the manufacturer, send it through NetFlow, to us, and still get that quality information without having to spend so much money to do it.

The SANS organization, if you’re not familiar with them, great organization, provide a lot of good information and whitepapers and things like that. They have, very often, said that NetFlow might be the single most valuable source of evidence in network investigations of all sorts, security investigations, performance investigations, whatever it may be.

The NetFlow data can give you very high value intelligence about the communications. But the key is in understanding how to get it and how to use it. Another benefit of using NetFlow over packet capture is the lack of need for huge storage. Certainly, as compared to traditional packet capture, NetFlow is much skinnier, and you can store much longer-term information than you could if you had to store all of the packets. The cost, we’ve talked about.

And there are some interesting things like legal issues that are mitigated. If you are actually capturing all packets, then you may run into compliance issues for things like PCI or HIPAA. Certain countries and jurisdictions around the world have very strict regulations about maintaining and keeping the end-data. With NetFlow, you don’t have that. It’s metadata. Even with the new things that you can get, that we talked about a couple of slides ago, it’s still metadata. It’s still data about the data, not the actual end information. So even without that content, NetFlow still provides an excellent means of guiding the investigations, especially in an attack scenario.

So here, if you bundle everything that we’ve talked about so far into one kind of view and relate it to what we do here at CySight, you would see it on this screen. There are the end-users, the people, content, and things of today, the Internet of Things. So you’ve got data coming from security cameras and Internet-connected vehicles and refrigerators. It could be just about anything, environmental-type information. It’s all producing data. That data is traversing the network through multiple different types of platforms: routers, switches, servers, wireless LAN controllers, cloud-based systems and so forth, all of which can provide correlation of the information and data. We call that the correlation API.

We then take that data into CySight. We combine it with outside big data, we’re going to talk about that in a minute, so not only the data of the connections but actual third-party information that we have related to known bad actors in the world and then we can use that information to provide you, the user, multiple benefits, whether it’s anomaly detection, threat intelligence, security performance, network accounting, all of the sort of standard things that you would do with NetFlow data.

And then lastly, integrate that data out to other third-party systems, whether it’s your managed service provider or security service provider. It could be upstream event collectors, trappers, log systems, SOAPA ecosystems, whether that’s on-premise or in the cloud or hybrid cloud. All of that is available via our product. So it starts at the traffic level. It goes through everything. It provides the data inside our product and as well as integrates out to third-party systems.

So let’s actually look into this a little more deeply. The threat intelligence information is one of the two major components of our cyber security areas. The way this works is that threat data is derived from a large number of sources. We maintain, effectively, a database of known bad IP addresses, known bad actors in the world. We collect that data through honeypots, threat feeds, crowd sources, active crawlers, and our own internal cyber feedback from our customers, and all of that information combined allows us to maintain a very robust list of known bads, basically. Then we can combine that cyber intelligence data with the connection data, the flow data, the session data, inside and outside of your network, the communications that you’re having, and compare the two.

So we have the big data threats. We can process that data along with what’s happening locally in your network to provide extreme visibility, to find who’s talking to who, what conversations are your users having with bad actors, ransomware, botnets, ToR, hacking, malware, whatever it may be and we then provide, of course, that information to you directly in the product. So we’re constantly monitoring for that communication and then we can help you identify it and remediate it as soon as possible.
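Conceptually, that matching step is an intersection between flow records and a threat-intelligence list. A minimal sketch, with entirely hypothetical record and feed layouts, might look like this:

```python
# Minimal sketch (all names and data shapes hypothetical): intersect flow
# records with a threat-intelligence set of known-bad IPs.
THREAT_IPS = {
    "203.0.113.66": ("ransomware", "high"),
    "198.51.100.23": ("botnet-c2", "medium"),
}

def match_threats(flows):
    """Return one alert per flow endpoint found in the threat list."""
    hits = []
    for f in flows:
        for ip in (f["src"], f["dst"]):
            if ip in THREAT_IPS:
                category, severity = THREAT_IPS[ip]
                hits.append({
                    "internal": f["src"] if ip == f["dst"] else f["dst"],
                    "threat_ip": ip, "category": category,
                    "severity": severity, "bytes": f["bytes"],
                })
    return hits

flows = [
    {"src": "10.1.1.9", "dst": "203.0.113.66", "bytes": 322_000_000},
    {"src": "10.1.1.4", "dst": "8.8.8.8", "bytes": 1_200},
]
alerts = match_threats(flows)  # only the ransomware conversation matches
```

The real pipeline runs continuously over live flow data rather than a list in memory, but the join is the same idea.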

As we zoom in here a little bit, you can see that that threat information can be seen in summary or in detail. We have it categorized by different threat levels, types, severities, countries of origin, affected IPs, threat IPs. As anyone who’s used our product in the past knows, we always provide an extreme amount of flexibility to really slice and dice the data and give you a view into it in any way that is best consumed by you. So you can look at things by type, or by affected IP, or by threat IP, or by threat level, or whatever it may be and of course, no matter where you start, you can always drill in, you can filter, you can re-display things to show it in a different view.

Here’s an example of identifying some threat. These are ransomware threats, known ransomware IPs out there. I can very easily just right-click on that and say, “Show me the affected IP.” So I see that there’s ransomware. Who’s affected by that? Who is actually talking to that? And it’s going to drill right down into that affected IP or maybe multiple affected IPs that are known to be talking to those ransomware systems outside. You could see when it happened. You can see how much traffic.

Certainly, in this example our top affected IP has a tremendous amount of data, 307 megs over that time period, much more than the next ones below it, and so that’s clearly one that needs to be identified and responded to very quickly. It can be useful to look at it this way, to see if, “Hey,” you know, “Is this one system that’s been infiltrated or is it now starting to spread? Are there multiple systems? Where is it starting? Where is it going and how can I then stem that tide?” It’s very easy to get that kind of information.

Here’s another example showing all ransomware attack traffic traversing a large ISP over a day. So whether you’re an end-user or certainly a service provider, we have many, many service provider customers that use this to monitor their customers’ traffic, and so this could be something that you look at to say, “Across all of my ISP, where is that ransomware traffic going? Maybe it’s not affecting me but it’s affecting one of my customers.” Then we can drill into that, alert and alarm on it, and potentially block it right away as extra help to my customers.

Ransomware is certainly one of the most major, scary sorts of things that’s out there now. It’s happening every day. There are reports of police stations having to pay ransom to get their data back, hospitals having to pay ransom to get their data back. It’s kind of interesting that, to our knowledge, there has never been a case where the ransomers, the bad guys out there, haven’t actually released the information back to their victims and supplied the decryption key. Because they want the money and they want people to know, “Hey, if you pay us, we will give you your data back,” which is really, really frightening, actually. It’s happening all the time and needs to be monitored very, very carefully. This is certainly one of the major threats that exist today.

But there are other threats as well; peer-to-peer traffic, ToR traffic, things like that. Here’s an example of looking at a single affected IP that is talking to multiple different threat IPs that are known to have been hosting illicit content over this time period. You could see that, clearly, it’s doing something. You know, if there is one host that is talking to one outside illicit threat IP, okay, maybe that’s a coincidence or maybe it’s not an indication of something crazy going on. But when you can see that, in this case, there’s one internal IP talking to 89 known bad threat IPs who have been known to host illicit traffic, okay, that’s not a coincidence anymore. We know that something’s happening here. We can see when it happened. We know that they’re doing something. Let’s go investigate that. So that’s just another way of kind of giving you that first step to identify what’s happening and when it’s happening.
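That “one contact versus 89 contacts” distinction is easy to sketch: count the distinct known-bad peers per internal host. The data shapes below are hypothetical, not our actual schema:

```python
from collections import defaultdict

# Sketch (hypothetical record layout): count distinct known-bad peers per
# internal host. One contact may be coincidence; dozens, like the 89 in
# the example above, is a pattern worth investigating.
def bad_peer_counts(flows, threat_ips, internal_prefix="10."):
    peers = defaultdict(set)
    for f in flows:
        if f["dst"] in threat_ips and f["src"].startswith(internal_prefix):
            peers[f["src"]].add(f["dst"])          # set: repeats don't inflate
    return {host: len(ips) for host, ips in peers.items()}

threats = {"203.0.113.1", "203.0.113.2", "203.0.113.3"}
flows = [
    {"src": "10.0.0.9", "dst": "203.0.113.1"},
    {"src": "10.0.0.9", "dst": "203.0.113.2"},
    {"src": "10.0.0.9", "dst": "203.0.113.2"},    # repeat contact, same peer
    {"src": "10.0.0.4", "dst": "203.0.113.3"},
]
counts = bad_peer_counts(flows, threats)
```

Sorting that result descending gives you a natural triage order: investigate the hosts with the most distinct bad peers first.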

You know, sometimes illicit traffic may just look like some obscured peer-to-peer content, but our product allows you to see it as full forensic evidence. You can see what countries they are talking to, what kind of traffic it is, and what kind of threat level it is. It really gives you that full, detailed data about what’s happening.

Here’s another example of a ToR threat. So people who are trying to use ToR to anonymize their data or get around any kind of traffic analysis-type system will use ToR to try and obfuscate that data. But we have, as part of our threat data, a list of ToR exits and relays and proxies, and we can look at that and tell you, again, who’s sending data into this sort of the ToR world out there, which may be an indication of ransomware and other malware because they often use ToR to try and anonymize that data. But it, also, could be somebody inside the organization that’s trying to do something they shouldn’t be doing, get data out which could be very nefarious. You never want to think the worst of people but it does happen. It happens every day out there. So again, that’s another way that we can give you some information about threats.

We also can help you visualize the threats. Sometimes it’s easier to understand by looking at a nice graphical depiction. So we can show you where the traffic is moving, the volume of traffic, and how it’s hopping around, in this case through a ToR endpoint. ToR is weird. The point of ToR is that it’s very difficult to find an endpoint from another single endpoint. But being able to visualize it together actually allows you to get a handle on where that traffic may be going.

Really large service providers, and certainly anyone who is interested in tracking this stuff down, need a product that can scale. We’ve got a very strong story about our massive scalability. We can use a hierarchical system. We can add additional collectors. We can do a lot of different things to handle a huge volume of traffic, even for Tier 1-type service providers, and still provide all of the data and detail that we’ve shown so far.

A couple other examples, we just have a number of them here, of different ways that you can look at the traffic and slice and dice it. Here’s an example of top conversations. So looking for that spike in traffic, we could see that there was this big spike here, suddenly. Almost 200 gig in one hour, that’s very unusual and can be identified very, very quickly and then you can try and say, “Okay, what were you doing during that time period? How could it possibly be that that much information was being sent out the door in such a short period of time?”

We also have port usage. So we can look at individual ports that are known threats over whatever time period you’re interested in. We could see this is port 80 traffic but it’s actually connecting to known ToR exits. So that is not just web surfing. You can visualize changes over time, you can see how things are increasing over time, and you can identify who is doing that to you.

Here’s another example of botnet forensics: understanding a conversation with a known botnet command-and-control server. Many times those come through, initially, as a phishing email. They’ll just send millions of spam emails out there hoping for somebody to click on one. When they do click on it, it downloads the command-and-control software and then away it goes. So you can actually see the low-level continual spam happening, and then, all of a sudden, when there’s a spike, the botnet command-and-control traffic starts up, and from there all kinds of bad things can happen.

So identifying impacted systems that have more than one infection is a great way to really sort of prioritize who you should be looking at. We can give you that data. I could see this IP has got all kinds of different threats that it’s been communicating to and with. You know, that is certainly someone that you want to take a look at very quickly.

I talked about visualization, some. Here are a few more examples of visualizations in the product. Many of our customers use this. It’s kind of the first way that they look at the data and then drill into the actual number part of the data, sort of after the visualization. Because you could see, from a high-level, where things are going and then say, “Okay, let me check that out.”

Another thing that we do as part of our cyber bundle, if you will, is anomaly detection and what we call “Two-phased Anomaly Detection.” Most of what I’ve talked about so far has been related to threat detection, matching up those known bads to conversations or communications into and out of your network. But there are other ways to try and identify security problems as well. One of those is anomaly detection.

So anomaly detection is an ability of our product to baseline traffic in your network, lots of different metrics on the traffic. So it’s counts, and flows, and packets, and bytes, and bits per second, and so forth, TCP flags, all happening all the time. So we’re baselining all the time, hour over hour, day over day and week over week to understand what is normal and then use our sophisticated behavior-based anomaly detection, our machine learning ability to identify when things are outside the norm.

So phase one is we baseline so that we know what is normal, and then alert or identify when something is outside the norm. Phase two is running a diagnostic process on those events: understanding what that event was, when it happened, what kind of traffic was involved, what IPs and ports were involved, what interfaces the traffic went through, and what it possibly portends (was it a DDoS-type attack, a port sweep, a crawler-type attack, what was it?). The result of that is our alert diagnostic screen like you can see in the background.
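The two phases above can be sketched in a few lines. This is a deliberately simplified stand-in, not CySight’s actual algorithm: the metric name and the z-score threshold are illustrative assumptions:

```python
import statistics

# Phase one: compare a current reading against the learned baseline.
# Phase two: if it deviates, build a diagnostic record for the analyst.
def detect_anomaly(history, current, z_threshold=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid divide-by-zero on flat baselines
    z = (current - mean) / stdev
    if abs(z) < z_threshold:
        return None                              # within the learned norm
    return {"metric": "bytes_per_hour", "observed": current,
            "baseline_mean": mean, "z_score": round(z, 1)}

baseline = [100, 110, 95, 105, 102, 98, 101, 99]  # hour-over-hour history
event = detect_anomaly(baseline, 900)             # flagged; far outside the norm
```

A production system baselines many metrics at once (flows, packets, bytes, TCP flags) and across hourly, daily, and weekly cycles, but the shape of the check is the same.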

So it qualifies the cause and impact for each offending behavior. It gives you the KPI information. It generates a ticket. It allows you to integrate with third-party SNMP trap receivers, so we can send our alerts and diagnostic information out as a trap to another system, and everything can be rolled up into a manager-of-managers-type system, if you wish. You can intelligently whitelist traffic that is not really offensive but that we may have identified as an anomaly. Of course, you want to reduce the amount of false positives out there, and we can help you do that.

So to kind of summarize…I think we’re just about at the end of the presentation now. To summarize, what can CySight do in our cyber intelligence? It really comes down to forensics, anomaly detection, and that threat intelligence. We can record and analyze, on a very granular level, network data even in extremely complex, large, and challenging environments. We can evaluate what is normal versus what is abnormal. We can continually monitor and benchmark your network and assets. We can intelligently baseline your network to detect activity that deviates from those baselines. We can continuously monitor for communication with IPs of poor reputation and remediate it ASAP to reduce the probability of infection and we can help you store and compile that flow information to use as evidence in the future.

You’re going to end up with, then, extreme visibility into what’s happening. You’re going to have three-phase detection. You have full alerting and reporting. So any time any of these things do happen, you can get an alert. That alert can be an email. It can be a trap out to another system as I mentioned earlier. Things can be scheduled. They’re running in the background 24/7 keeping our software’s eyes on your network all the time and then give you that forensics drill-down capability to quickly identify what’s happened, what’s been impacted, and how you can stop its spread.

The last thing we just want to say is that everything that we’ve shown today is the result of a large development effort over the last number of years. We’ve been in business for over 10 years, delivering NetFlow-based Predictive AI Baselining analytics. We’ve really taken a very heavy development exercise into security over the last few years and we are constantly innovating. We’re constantly improving. We’re constantly listening to what our customers want and need and building that into future releases of the product.

So if you are an existing customer listening to this, we’d love to hear your feedback on what we can do better. If you are potentially a new customer on this webinar, we’d love your ideas from what you’ve seen as to if that fits with what you need or if there’s other things that you would like to see in the product. We really do listen to our customers quite extensively and because of that, we have a great reputation with our customers.

We have a list of customers up here. We’ve got some great quotes from our customers. We really do play across an entire enterprise. We play across service providers and we love our customers and we think that they know that and that’s why they continue to stay with us year after year and continue to work with us to make the product even better.

So we want to thank everybody for joining the webinar today. We’re going to just end on this note that we believe that our products offer the most cost-effective approach to detect threats and quantify network traffic ubiquitously across everything that you might need in the security and cyber network intelligence arena and if you have any interest in talking to us, seeing a demo, live demo of the product, getting a 30-day evaluation of the product, we’re very happy to talk to you. Just contact us.

If you’ve got a salesperson and you want to get threat intelligence, we’re happy to enable it on your existing platform. If you are new to us, hit our website, please, at cysight.ai. Fill out the form for a trial, and somebody will get to you immediately and we’ll get you up in the system and running very, very quickly and see if we can help you identify any of these security threats that you may have. So with that, we appreciate your time and look forward to seeing you at our webinar in the future. Bye.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

The Strategic Value of Advanced Netflow for Enterprise Network Security

With thousands of devices going online for the first time each minute, and the data influx continuing unabated, it’s fair to say that we’re in the throes of an always-on culture.

As the network becomes arguably the most valuable asset of the 21st century business, IT departments will be looked at to provide not just operational functions, but, more importantly, strategic value.

Today’s network infrastructures contain hundreds of key business devices across a complex array of data centers, virtualized environments and services. This means Performance and Security Specialists are demanding far more visibility from their monitoring systems than they did only a few years ago.

The growing complexity of modern IT infrastructure is the major challenge faced by existing network monitoring (NMS) and security tools.

Expanding networks, dynamic enterprise boundaries, network virtualization, new applications and processes, growing compliance and regulatory mandates along with rising levels of sophistication in cyber-crime, malware and data breaches, are some of the major factors necessitating more granular and robust monitoring solutions.

Insight-based and data-driven monitoring systems must provide the deep visibility and early warning detection needed by Network Operations Centre (NOC) teams and Security professionals to manage networks today and to keep the organization safe.

For over two decades now, NetFlow has been a trusted technology which provides the data needed to enable the performance management of medium to large environments.

Over the years, NetFlow analysis technology has evolved alongside the networks it helps optimize to provide information-rich analyses, detailed reporting and data-driven network management insights to IT departments.

From traffic accounting, to performance management and security forensics, NetFlow brings together both high-level and detailed insights by aggregating network data and exporting it to a flow collector for analysis. Using a push-model makes NetFlow less resource-intensive than other proprietary solutions as it places very little demand on network devices for the collection and analysis of data.
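The push model means the collector’s job starts with parsing whatever datagrams the devices export. As a small illustration, the fixed 24-byte NetFlow v5 header can be decoded like this (the surrounding helper is a sketch, but the field layout is the documented v5 header format):

```python
import struct

# Collector-side sketch of NetFlow's push model: the router exports UDP
# datagrams on its own schedule; the collector parses what arrives.
# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes).
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes):
    (version, count, _uptime, _secs, _nsecs,
     flow_sequence, _etype, _eid, _sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    return {"version": version, "records": count, "sequence": flow_sequence}

# a synthetic header claiming 3 flow records, sequence number 42
pkt = V5_HEADER.pack(5, 3, 0, 1_700_000_000, 0, 42, 0, 0, 0)
info = parse_v5_header(pkt)
```

Because the device does the metering and simply pushes these compact records, the collector never has to poll or capture packets itself, which is exactly why the model is light on the network devices.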

NetFlow gives NOCs the information they need for pervasive deep network visibility and flexible Predictive AI Baselining analytics, which substantially reduces management complexity. Performance and Security Specialists enjoy unmatched flexibility and scalability in their endeavors to keep systems safe, secure, reliable and performing at their peak.

Although the NetFlow protocol promises a great deal of detail that could be leveraged to the benefit of NOC and Security teams, many NetFlow solutions to date have failed to provide the contextual depth and flexibility required to keep up with evolving networks and related systems. Many flow solutions simply cannot scale to archive the amount of granular network traffic needed for the visibility required today. Because of the limited amount of usable data they can physically retain, these solutions are used only for basic performance traffic analysis or top-talker detection and cannot physically scale to report on the needed Predictive AI Baselining analytics, making them only marginally more useful than an SNMP/RMON solution.

The newest generation of NetFlow tools must combine the granular capability of a real-time forensics engine with long-term capacity planning and data mining abilities.

Modern NetFlow applications should also be able to process the ever-expanding vendor-specific Flexible NetFlow templates, which can provide unique data points not found in any other technology.

Lastly, the system needs to offer machine-learning-driven intelligent analysis which can detect and alert on security events happening in the network before the threat reaches the point where a human would notice what has happened.

When all of the above capabilities are available and put into production, a NetFlow system becomes an irreplaceable application in an IT department’s performance and security toolbox.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Turbocharged Ransomware Detection using NetFlow

Your network has already, or soon will, be infiltrated

To win the war on cyber extortion, you must first have visibility into your network, and it is imperative to have the intelligent context to be able to find threats inside your data.

Ransomware has become the most prevalent Trojan but other Trojans such as Spyware, Adware, Scareware, Malware, Worms, Viruses, and Phishing all play a role in delivering Ransomware to your Network, Server, Laptop, Phone, or IoT device and can in their own right be damaging.

CySight hunts them all, but in this article, OUR FOCUS IS ON RANSOMWARE and how to try to identify it before it causes financial and social damages.

Given what we already know and that more is still being learned, it makes good sense to investigate our unique solution.

What Is the Impact of Ransomware?

It’s not just your home laptop at risk; entire enterprises can be and are being held to ransom. For example, the NotPetya ransomware attack on Maersk required a full re-install of 4,000 servers, which the company announced resulted in a loss of $300 million.

The spread and popularity of ransomware, whose cost is up from $11.5B in 2019 to $20B in 2020 and still rising, is outgrowing legacy solutions that cannot identify zero-day infiltration, at-risk interconnected systems, and related data exfiltration.

Attacks are set to have huge growth in 2021 and beyond!

The Ransomware Protection Racket?

Ransomware can be like the old analog-world protection racket:

  • You pay once, but they’ll come back later asking for more.
  • You might pay but never get your data back.
  • They could give you the decryption key to get your data back, but sell the key to other hackers along with your corporate secrets.
  • They understand the value of reputation and wanting to keep breaches private.
  • They’ll go after especially important and critical infrastructure and services.
  • It is not just the enterprise that is at risk, but the interconnected components like ISPs and BYO personal devices.
  • The bigger you are the more the hackers think they can get and will try to!

A single infection could cost an organization thousands of dollars. (or millions!)

Evolved “double extortion”

It is important to take cognizance of the rise of double extortion attacks as criminals have come to realize that encrypting your files and stipulating a ransom to get back access to your data may be mitigated by backup strategies.

Decryption keys are good for business!

Like any good Protection Racket, Ransomware criminals understand that in order to make money they need to establish a certain decorum. By ensuring a customer can get a key to decrypt after paying the ransom they build a level of “trust” that it just takes money to get your files back. Pricing is usually set at a level where the Ransomer feels they can extract payment at the level the ransomed can afford. This allows continuity and repeat business.

When they are not paid, they impact your reputation.

Hackers have also become experts in the art of Doxing which means gathering sensitive information about a target company or person and threatening to make it public if their terms are not met.

This has been a strategy for some time, but it is becoming more prevalent for an attacker to exfiltrate a copy of the data as well as encrypting it. That way they prevent access to your data and can also leverage your sensitive information, going well beyond the simple lock-and-key protection racket and taking extortionware to a whole new level that can create years of ongoing demands.

Infrastructure, key servers, critical services.

As ransomware progresses it will continue to exploit weaknesses in infrastructures. Often those most vulnerable are those who believe they already have the visibility to detect it.

Ransomware is a long game often requiring other trojans or delivery methods to slowly infiltrate corporations of all types. They sleep, waiting for the right time to be activated when you least expect it.

ISP / Corporate / Industrial / SMB / Person

There are literally hundreds of ransomware variants targeted to both huge and sensitive corporate or government infrastructures that activate and encrypt on botnet instruction or when a set of circumstances activates the algorithms. They make use of payment gateways that are almost impossible to break and track.

It’s not all doom and gloom if you catch it early, it makes good sense to investigate our unique solution.

Threat Hunting (Ransomware)

The Postmortem Snowball Impacts!

When hunting for Ransomware there is often a snowball-like effect in terms of effort and impact.

You might start looking to answer questions like where the Ransomware came from, who did it, when did it happen, is there a patch to protect in future etc.

But you need to know more detail than that to judge your response;

  • The nature and classification of the threats are vital to know. e.g. is it scareware with no real impact? are they just trying to sell lousy protection software? Or is it real criminal intent and your data will be gone?
  • How serious is the damage we are talking about, and how widely has the problem spread?
  • What’s the cost to operations?
  • If you don’t remediate or pay, what is the less quantifiable but very important reputation impact to your business?
  • After the fact, what employee re-training is needed?

  • What is the mindset of your Security organization?
    • Do they have all the traditional enterprise security measures in place and are ‘certain’ they know everything? (This is the worst-case scenario.)
    • Or are they aware of their ‘limited’ ability to find ransomware, but don’t have the time or tools to deal with it?
    • Do they rely on backups, updates, and patching, which are good practices but insufficient?
    • What if the hackers encrypt your backup drives?
    • Is the organization deluded? Are they ‘aware’, or do they understand they are ‘blind’?

Answering all these questions takes more and more time and costly manpower, especially if you lack the tools to effectively undertake such threat hunting.

IN THE CURRENT INFECTIOUS CLIMATE WE’VE ALL BECOME SO SENSITIZED TO THE FACT THAT THE TINY RANSOMWARE AND TROJANS THAT WE DON’T SEE CAN POSE THE BIGGEST THREATS AND INVISIBLE DANGERS!

Deep insight into the granular nature of how systems, people process, applications, and things have communicated and are communicating is critical. In our attempts to discover hidden threats we need to deploy granular tools to collect, monitor, and make known the invisible dangers that can have real-world impacts.

It is often overlooked, but it is no secret, that even well-known tools have serious shortcomings and are limited in their ability to retain complete records. They don’t really end up providing visibility into the blind spots they alluded to. In fact, we found that in medium to large networking environments, over 95% of network and cyber visibility tools struggle to retain as little as 2% to 5% of all information collected, and this results in completely missed diagnostics and severely misleading analytics that cause misdiagnosis and risk!

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention. This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence.

From a forensics perspective, you will appreciate that you can only analyze the data you retain, and with large and growing network and cloud data flows, most tools (regardless of their marketing claims) cannot scale in retention and choose to drop records, keeping only what they believe is the salient data.

Imputed outcome data leads to misleading results and missing data causes high risk and loss!


Big Data is heavy to store and lift.

We have seen many engineers build scripts to try to attain the missing visibility, doing a lot of heavy lifting, and then finally come to the realization that no matter how much lifting you do, if the data is not retained then you simply cannot analyze it.

Don’t get me wrong, we love the multitude of point solutions in our market that each tries to address a specific need – and there are a lot of them. DDoS detectors, End-Point threat discovery, Performance management, Email phishing detectors, Deep Packet Inspectors, and more.

DPI is a great concept, but it is well known that Deep Packet Inspection (DPI) solutions struggle to maintain both a heavy traffic load and information extraction. They force customers to choose one or the other.

Each of these tools in its own right has value, but they are difficult and expensive to integrate, maintain, and train on.

The data sources are often the same, so using the right tool and an integrated approach for flow data allows SecOps and NetOps to reduce the cost overhead of maintaining multiple products and multiplies the value of each component.

Smart analysts are realizing that combining Network and Cyber Intelligence using Flow management today with the capability to access long-term granular intelligence is a seriously powerful enabler and a real game-changer when detecting Ransomware and finding exfiltration and related at-risk systems.

So how exactly do you go about turbocharging your Flow and Cloud metadata?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques, solving the problem of archiving huge flow feeds in the smallest footprint and at the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property. Additionally, ransomware and other cyber-attacks continue to impact businesses. So you need both machine learning and end-point threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple yet potentially the most powerful tool for finding ransomware and other network and cloud issues. The footprints of all communications are carried in the flow data, and given the right tools you can retain all the evidence of an attack, infiltration, or exfiltration.

However, not all flow analytic solutions are created equal, and due to an inability to scale in retention, the NetFlow ideal becomes unattainable. For a recently discovered ransomware strain or trojan, such as WannaCry, it is helpful to see whether it has been active in the past and when it started.
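To make the "when did it start" question concrete, here is a minimal, hypothetical Python sketch (not CySight's actual implementation) of searching retained flow records for the first appearance of a newly published threat IP – the kind of backtracking that only works if the flows were retained in the first place:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    ts: float        # flow start time (epoch seconds)
    src: str         # source IP
    dst: str         # destination IP
    nbytes: int      # bytes transferred

def first_seen(flows, threat_ips):
    """Return the earliest timestamp at which any retained flow
    touched one of the given threat IPs, or None if never seen."""
    hits = [f.ts for f in flows
            if f.src in threat_ips or f.dst in threat_ips]
    return min(hits) if hits else None

# Example: a threat IP published today was actually active earlier.
flows = [
    FlowRecord(1700000000, "10.0.0.5", "203.0.113.9", 1200),
    FlowRecord(1700600000, "10.0.0.7", "198.51.100.4", 800),
]
print(first_seen(flows, {"203.0.113.9"}))  # -> 1700000000
```

If the collector sampled or dropped these records, `first_seen` simply returns nothing – which is the retention argument in a nutshell.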

Another important aspect is having the context to analyze all the related traffic, to identify concurrent exfiltration of an organization’s intellectual property, and to quantify and mediate the risk. Threat hunting for ransomware requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. A hacker who has control of your system will likely install multiple backdoors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight Identifies your systems conversing with Bad Actors and allows you to backtrack through historical data to see how long it’s been going on.

Summary

CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs. CySight’s Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network – making performance analytics, anomaly detection, threat intelligence, forensics, compliance, and IP accounting a breeze!

Let us help you today.

Benefits of Network Security Forensics

The networks that your business operates on are often open and complex.

Your IT department is responsible for mitigating network risks, managing performance and auditing data to ensure functionality.

Using NetFlow forensics can help your IT team maintain the competitiveness and reliability of the systems required to run your business.

In IT, network security forensics involves the monitoring and analysis of your network’s traffic to gather information, obtain legal evidence and detect network intrusions.

These activities help your company perform the following actions.

  • Adjust to increased data and NetFlow volumes
  • Identify heightened security vulnerabilities and threats
  • Align with corporate and legislative compliance requirements
  • Contain network costs
  • Analyze network performance demands
  • Recommend budget-friendly implementations and system upgrades

NetFlow forensics helps your company maintain accountability and trace usage; these functions become increasingly difficult as your network grows more intricate and relies on more systems.

While your company likely has standard security measures in place – e.g., firewalls, intrusion detection systems, and sniffers – these tools lack the capability to record all network activity.

Tracking all your network activity in real-time at granular levels is critical to the success of your organization.

Until recently, the ability to perform this type of network forensics has been limited due to a lack of scalability.

Now, there are web-based solutions that can collect and store this data to assist your IT department with this daunting task.

Solution capabilities include:

  • Record NetFlow data at a micro level
  • Discover security breaches and alert system administrators in real-time
  • Identify trends and establish performance baselines
  • React to irregular traffic movements and applications
  • Provision network services more effectively

The ability to capture all of this activity will empower your IT department to provide more thorough analysis and take faster action to resolve system issues.

But, before your company can realize the full value of NetFlow forensics, your team needs to have a clear understanding of how to use this intelligence to take full advantage of these detailed investigative activities.

Gathering the data through automation is a relatively simple process once the required automation tools have been implemented.

Understanding how to organize these massive amounts of data into clear, concise and actionable findings is an additional skill set that must be developed within your IT team.

Having a team member, whether internal or via a third-party vendor, that can aggregate your findings and create visual representations that can be understood by non-technical team members is a necessary part of NetFlow forensics. It is important to stress the necessity of visualization; this technique makes it much easier to articulate the importance of findings.

In order to accurately and succinctly visualize security issues, your IT staff must have a deep understanding of the standard protocols of your network. Without this level of understanding, the ability to analyze and investigate security issues is limited, if not impossible.

Utilizing software to support the audit functions required for NetFlow forensics will help your company support the IT staff in gathering and tracking these standard protocols.

Being able to identify, track and monitor the protocols in an automated manner will enhance your staff’s ability to understand and assess the impact of these protocols on network performance and security. It will also allow you to quickly assess the impact of changes driven by real-time monitoring of your network processes.
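As a simple illustration of what automated protocol tracking can look like (hypothetical names and data, not any particular product's API), the sketch below aggregates traffic volume by protocol and destination port from flow tuples, giving the per-protocol view an analyst would compare against a baseline:

```python
from collections import defaultdict

def bytes_per_protocol(flows):
    """Aggregate traffic volume by (protocol, destination port) so
    shifts in the protocol mix stand out against a known baseline."""
    totals = defaultdict(int)
    for proto, dport, nbytes in flows:
        totals[(proto, dport)] += nbytes
    return dict(totals)

flows = [("tcp", 443, 5000), ("udp", 53, 300), ("tcp", 443, 2500)]
print(bytes_per_protocol(flows))
# -> {('tcp', 443): 7500, ('udp', 53): 300}
```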

Sound like a daunting task?

It doesn’t have to be. Choose a partner to support your efforts and help you build the right NetFlow forensics configuration to support your business.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

End Point Threat Detection Using NetFlow Analytics

So, with that we’re going to get started. Again, we appreciate everyone taking the time today to listen to what we have to say, learn about our product, and learn about some of the new features. If you’re an existing customer, you’ll learn a little bit about one of our new features. So, today we’re going to be talking a lot about security; that’s really the focus of this presentation. NetFlow in general, and CySight in particular, can do a lot of things with the data that we have, and one of those things is really focused on being able to identify security threats to your network.

This is obviously very important, right? I mean you literally cannot go a day anymore without hearing of some company or organization out there that’s been attacked or infiltrated. I was reading about a hospital system recently that was held up by a ransomware group and actually had to pay money to unlock their files – and this is not a home user, not a person who opened up the wrong email and had their desktop attacked or held for ransom. This is a legitimate hospital organization that had that happen to them, and so it really underscores the pervasiveness of these kinds of attacks.

Crawlers, botnets, Ransomware, they’re finding new ways to cause denial of service attacks and other kinds of attacks that can put your business or organization at an extremely high risk and, your network could be used to download or host illicit materials, leak intellectual property. That’s another thing that we’ve seen, this sort of cybercrime. Intellectual property cybercrime where it’s not that they’re just trying to bring down your site or bring down your network, but they’re actually trying to take intellectual property out and again, either hold it for ransom or just sell it or whatever it may be. So, this is certainly an important topic.

There are a number of major challenges for security teams to try and figure out what’s going on and how to lock down that network. The sophistication of the cybercrime organizations out there is just growing and growing. They’re always seemingly one step ahead of the for-profit companies that are trying to block them; the anti-virus companies, firewall companies and so forth. The growing complexity of the infrastructure is making it more difficult, there’s not a single point of entry and exit anymore. You’ve got BYOD, you’ve got lots of wireless, you’ve got VPNs, cloud-based services, you’ve got all kinds of things that people are using today. So it’s not just a lock it down at the firewall and we’re good, it’s really all over the place, and you need to be able to look at the traffic to understand what’s going on.

Of course, it’s very difficult or can be very difficult to retain and analyze that network transaction data across a big organization. Again, you have lots of lots of systems, lots of points of entry and exit, and it can be a challenge to really be able to collect all of that data and be able to use it. Because of that, because of the highly complicated and complex nature of networks, we’ve got this graphic here that talks about the really scary things that are out there. About do you know where things are happening? Do you…? You have certain aspects that you know and that you maybe know that you don’t know, but the really scary stuff is when you don’t know what you don’t know, right? It’s happening or could be happening and you have no idea, and you don’t even know that you should be looking at that, or could be looking at that data to try and understand what’s going on.

But in fact, products like ours and technologies like ours, allow you to, or allow a system to be watching for those unknown unknowns all the time. So, it’s not something that you wake up in the morning and say, “I’m going to go, look at this.” It’s actually happening in the background and looking for you. That machine learning capability is really what makes the new level of systems like ours trying… you know being able to catch up with the sophistication of the attack profiles out there.

When there is an attack or when there is a detection of something, then Incident Response Teams always have to look at that communications component, right? So, they’re going to look at hardware, they’re going to look at software, but they also have to look at the communications. They have to look at historical behavior, they have to look to see if there’s been data breaches, they have to look to see if there’s been internal threats.

There is a certain percentage, depending on who you talk to, 30%, 35%, 40% of data breaches happen from the inside out. So, these are internal employees who have access to something that they shouldn’t, and they email that out or they otherwise try to get that data out of the network. Of course, there’s the external threats from bad actors, those malicious types that are probing, probing, probing trying to find holes to get in and do whatever, the nefarious things that they’re trying to do.

So, being able to have some insight into the nature of how those systems, all of your systems communicate with each other and how they have communicated is critical. It’s really about being able to go from the blind area into a much more aware and certain area, right? So, do you really have… and thinking about, do you really have visibility in terms of what’s going on inside your network, because if you don’t, that can certainly hurt you.

The way we look at it, there’s the very basic things that virtually everybody has. Everybody has a firewall, most people have virus protection on their desktops. That sort of blocking and tackling, very basic prevention at the edge of a network is only a piece, right? It is not the most effective place anymore. You have to have it, we certainly wouldn’t tell you not to have it, but if you really want to move to a defense in depth, then it’s more than just trying to put up a blocking of things coming in. It’s being able to look at the live traffic and see what’s happening and identify if there are threats going on that got through. If something gets through the defenses that you have, how can you then further identify that it has happened and what’s going on? If you just think, “Well, I’ve got this firewall and I got my rules setup and I’m good, nothing can ever touch me,” and don’t look any further, then you’re really setting yourself up for a failure.

So, the way we approach the problem as a piece of this overall security landscape, is through the use of NetFlow information. So, NetFlow’s been around for a long time, it’s a quite a mature technology. But the great thing about it is, it’s continually even further maturing as we go on. What used to be sort of a traffic accounting product only, that was based on data coming from core routers and switches, has now been extended out to other systems in the network. Things like wireless LAN controllers, cloud servers, firewalls themselves. You can get the data from taps and probes that collect passively information about data traffic, and then turn that into a NetFlow export that can be sent to us that we can read.

Virtually every vendor… certainly every major vendor out there supports Flow in some way … Cisco of course is NetFlow and we use the term NetFlow to generically mean all of the various Flow types out there.  Jflow from Juniper, anything that’s IPFIX compatible as the standard, and some of the other kind of specialized versions of Flow, if you will. But all of them have the common theme that they’re going to look at that traffic and they’re going to be able to send that metadata to a collector like ours and then we can use that information intelligently to help both give you and allow you to report on and look deeply into the data, but also, and what we’re going to be talking about today, is really using that intelligence that’s built into the product to be able to identify threats, look at anomalies. Not just show you who your top talkers were, but actually say, “Hey, look. We’ve identified people that are communicating to known bad actors out there,” or, “We’ve seen an unusual bit of behavior in traffic between here and there, and this is something that really needs to be investigated.”

Talking about more of the specifics about how we do that. There’s two major pieces we’re going to be focusing on today. The first one is Anomaly Detection. Anomaly Detection for us means that we can baseline your network and the traffic on your network across a number of different dimensions. There’s actually quite a few metrics that we’re watching, some of the ones you could see below like flows, and packets, and bytes, and bits per second, packet size, it can be flags, it can be counts it can be all kinds of different metrics, and we can baseline each of them over time, across all of your interfaces or potentially even other aspects. So, it could be a specific conversation or a specific application, but at its most basic level through all of your interfaces to understand what is normal and what is normal activity for that time of day, that day of the week from those devices or whatever it may be.

Then of course, once we know what is normal, we can detect any activity that deviates from that normal baseline. This gives you a really great way of watching traffic 24/7 for things that you wouldn’t pick up if you were just eyeballing it, or waiting for someone to contact you and say there’s a problem. The statistical power of an application doing this behind the scenes, running all the time and noticing things you wouldn’t notice in the middle of the night, is incredibly useful for this sort of thing. Then, when we do detect an anomaly, we move into phase two, as we call it: diagnostics. Diagnostics says, “Okay, there’s been some anomaly detected. Let’s figure out what’s going on here.” We then kick off a diagnostic approach which qualifies the cause and impact for each offending behavior breach. We’re looking for KPIs that are specific to things like DoS attacks, scanners, sweepers, or peer-to-peer activity. We roll all of that information up into a single ticket, so to speak, on a screen that you can very easily look at and understand exactly what’s going on: When did it happen? Where did it happen? What was involved? What baseline was breached? What does that mean? What could it possibly be?
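A stripped-down illustration of the baselining idea described above – not CySight's algorithm, just a common statistical approach – is to keep per-hour-of-day statistics for a metric and flag values that deviate by more than a few standard deviations:

```python
import statistics

def baseline(history):
    """history: {hour_of_day: [byte counts observed at that hour]}.
    Returns per-hour (mean, stdev) baselines."""
    return {h: (statistics.mean(v), statistics.pstdev(v))
            for h, v in history.items()}

def is_anomalous(hour, value, baselines, z_threshold=3.0):
    """Flag a value that deviates from its hour-of-day baseline
    by more than z_threshold standard deviations."""
    mean, stdev = baselines[hour]
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

history = {2: [100, 110, 95, 105, 90]}   # quiet overnight hours
bl = baseline(history)
print(is_anomalous(2, 102, bl))   # typical 2 a.m. traffic -> False
print(is_anomalous(2, 5000, bl))  # 2 a.m. spike -> True
```

A real system would track many metrics (flows, packets, flags, packet size) across every interface, but the deviation-from-baseline logic is the core of the idea.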

You can also do of course advance things like intelligent whitelisting. You can send the information out of our system up to another system that you may have, like an ITSM or trouble ticket system, via SNMP and via email and so forth. So, really this again this is the intelligent piece of the product with machine learning as its background. So it’s doing this whether you’re watching it or not. It’s looking for those baseline breaches and then when we see them, it’s really coordinating all of the information about what happened into a single easy-to-use place, which you can then drill down into using all of our standard features to try and identify other things that are happening or where do you need to go next.

Anomaly Detection or NBAD as you may hear us talk about it, has been in the product for a number of years now. So, that’s not something new, it’s continually being improved, and it’s a wonderful piece of the product, and it’s been there for a while.

The new thing that we have introduced and are introducing is what we call our Endpoint Threat Detection. So this is another module added onto the product that adds additional security capabilities while still utilizing all of the things that you typically utilize. So we’re still taking the data from NetFlow information but now we are applying to that information other outside data sources that we have, basically using some big data threat feeds collated from multiple sources that you can match up to or coordinate with the information about your traffic.

So, I’ve got information about my traffic, I’ve had that. Now, I’ve got information about what is bad in the world and in real time, where known bad actors, known bad IP addresses, Ransomware, malware, DDoS attacks, Tor and so forth are coming from and then looking at the two of them and saying, “Are any of my people talking to those things?” At the very most basic level that’s what we’re looking for, right? So, it’s things global in terms of getting all of these feeds and using pattern matching, and Anomaly Detection and so forth, and then it’s acting very local against the traffic that you have in your network.
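At its most basic level, that matching step can be sketched like this (a simplified illustration with made-up IPs and categories, not the product's internal logic):

```python
def match_threats(flows, threat_feed):
    """flows: iterable of (src_ip, dst_ip, bytes) tuples.
    threat_feed: {ip: category} from an external reputation source.
    Returns matches as (local_ip, threat_ip, category, bytes)."""
    hits = []
    for src, dst, nbytes in flows:
        for local, remote in ((src, dst), (dst, src)):
            if remote in threat_feed:
                hits.append((local, remote, threat_feed[remote], nbytes))
    return hits

feed = {"203.0.113.66": "ransomware C2", "198.51.100.23": "tor exit"}
flows = [("10.1.1.4", "203.0.113.66", 9200),
         ("10.1.1.9", "93.184.216.34", 1500)]
print(match_threats(flows, feed))
# -> [('10.1.1.4', '203.0.113.66', 'ransomware C2', 9200)]
```

The checks run in both directions because the threat IP may appear as either source or destination of the flow.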

This capability of network connection logging, or NetFlow, is, as just about everybody in the industry agrees, one of the best places you can get this data. It’s almost impossible to get this granular level of information from any other source, especially if you are held to any sort of standard in terms of retention, or policies around not being able to look directly into the data. If you’ve got compliance requirements that say, “Hey, I can’t store my customers’ data,” that is fine with NetFlow, because NetFlow is not looking inside the packets; it’s looking at the metadata – who’s talking to whom, when are they doing it, and how much talking are they doing. It’s not actually reading an email or anything inside of that. So, you’re not going to run afoul of any of those regulatory problems, but you’re still able to get a huge amount of benefit from a network investigation using that data.

It’s important that even without content, NetFlow provides an excellent means of guiding that investigation because there’s still so much data there. As it’s called in our world, metadata – Data about the data! There’s still so much information there. But what’s great also is that, you don’t have to retain content… unlike let’s say a probe or other type of system that is collecting every bit and byte. You run into problems there too, they’re expensive, and you run into storage requirements trying to store historically every conversation including the data, over a long period of time is just incredibly expensive and incredibly unwieldy to do. The amount of storage you have to have to be able to do that, and the difficulty in quickly and effectively retrieving that information and searching for things, just becomes next to impossible. But when you can still get the same benefit of what you need to look at from a security standpoint without those complications of price and just the logistics of handling it all, you end up with having a really valuable product and that’s what NetFlow can give to you.

So, with our Endpoint Threat Detection, I’ve got a few screens here that really dive down into what it looks like and how it works. Again, we’ve got these big data feeds of threat information out there in the world, collected from various sources and honeypots, and we’re continuously monitoring for communications with those IPs of poor reputation. So, you’ve got your communication that we can see because of NetFlow, and you’ve got these known bad actors out there that we know about. We can match up those two pieces of information, and when we do, we’re not just saying it happened, we’re giving you much more detail about what happened. So, if we zoom in here a little bit, threat data can be seen in summary or in detail. We’ve got a categorization of what’s happening and different threat types. So, I can see: is this a peer-to-peer kind of thing, is this known malware, is it Tor, is it an FTP or an SSH attacker? What kind of thing is happening from or on these known bad IP addresses?

So, from a high of macro level you can see what the threat categories are and what the threat types are and then of course, you can drill down using the standard CySight tools to investigate them and provide complete visibility into that threat. So, now I’ve seen it, I have traffic that’s been identified as a threat. I can use our drill down, right-click, or however you want to do it capability. In this case we’re showing a right-click on threat detection and saying show me the affected IP addresses. I want to know, let’s drill down and see in this case on Ransomware, command and control Ransomware what the infected IP addresses are and then you’re going to get into the individual affected IPs, the threat IP where it’s coming from and, how much traffic was done?

These are Ransomware-type attacks, and I can see this is happening in my network at this period of time and I can even then of course change the view to be a time view. When did this start? Has this been a long-lived thing that’s been going on over a period of time where it’s been sucking information out of my organization, or did this pop off and go away? And if it did, when did that happen? All of that kind of deep level investigation is something that you can get using all of the normal tools that we have. You can get this deep dive investigation of traffic for regular traffic. Not just malicious traffic, but just using our tool for what I’ll call normal traffic accounting. Who is talking to who and when, is all available to you and more now with the threat detection features.

So, we’re watching for those threats, we’ve identified them and then using all of the common things that you’re used to using if you’re already a customer of ours, being able to identify or drill down into that data and provide those reports when you want to see it.

Here’s another example: let’s look at threat-port usage over the last few hours. So, it’s may be a couple hour time frame and I can see specifically which ports, which protocols have been detected as potential threats. What kind of threats, of course again how much traffic did they use? How long has this gone on for, and so forth. So, you can in fact in this case, know that increasing Tor usage. That we’ve highlighted in yellow and green … but you can also notice it’s been this continual botnet chatter, this red line. It’s just been going on and on forever, and that’s obviously something that needs to be absolutely looked into. It might be very difficult to find this in any other way, it’s just ongoing background chatter that’s been happening. It may not spike to anything that’s incredibly large that would set off a threshold alert, or maybe not even set off an anomaly alert. But, we’ve identified this is being definitely an issue because it’s communicating to something that we know is bad out there.
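The "continual background chatter" pattern can be illustrated with a small sketch (hypothetical data): instead of looking for spikes, look for ports that show up in nearly every hourly bucket, however small the volume:

```python
from collections import defaultdict

def persistent_ports(flow_log, min_hours):
    """Find destination ports seen in at least `min_hours` distinct
    hourly buckets – steady low-level chatter (like ongoing botnet
    traffic) that never spikes enough to trip a volume threshold."""
    hours_seen = defaultdict(set)
    for hour, dport, nbytes in flow_log:
        hours_seen[dport].add(hour)
    return {p for p, hrs in hours_seen.items() if len(hrs) >= min_hours}

log = [(h, 6667, 40) for h in range(24)]          # constant low-volume chatter
log += [(9, 443, 900000), (10, 443, 750000)]      # normal bursty HTTPS
print(persistent_ports(log, min_hours=20))        # -> {6667}
```

The bursty high-volume port never flags, while the quiet but ever-present one does – which is exactly why threshold alerts alone miss this class of traffic.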

Of course you have all of the common reporting type tools. So, you can automate those threats, I want a threat report every hour emailed to me, or every day, or whatever makes sense or a roll up report every month to provide to management to say, okay, over the last 30 days, here are all the threats that were identified as happening in our network, and then here’s what’s been remediated, here’s what we’ve blocked, here’s what we’ve stopped, here’s what we’ve fixed, here’s what we’ve cleaned up kind of thing and all of those reports that look good and can be scheduled in a great for both live use and for management, are part of and parcel of the product that we’ve been delivering for over a decade now.

As well as those deep dive threat forensics. The high-level reports are good for some people, but the deep dive reports are important for other people, and that’s something we can give you because we store and archive all of this flow information. It’s not just the top 100 or the top 500; it’s the top 5,000 or 10,000, or every single flow using our compliance version. The compliance version has the ability to store all of those flows all the time for you to pull up and review – it may not have been yesterday, it may have been last week, last month, or six months ago. You can still drill in and see every individual flow in terms of IPs, source and destination, ports, protocols, interfaces, and all of that kind of information. It gives you a super granular capability that you’re just not going to find anywhere else.

We also try to give you different viewpoints; we’re very big on flexibility in terms of giving you an easy-to-understand way of looking at the traffic. Some people like to view numbers and other people like to view pictures, and there are lots of ways we can show that data to you. The visualization capability is outstanding within our product, and this is one of the ways it can be really useful. We’ve got an example here of a Tor correlation attack: de-anonymizing Tor is a difficult but super important issue, and when we see that there has been Tor traffic, we can build this visualization and see all the different places that Tor traffic has hopped to within your network, or in and out of your network. That really gives you a way to get in and say, “Okay, I need to look here, I need to stop it here, I need to stop it there.” From a service provider perspective, this can be a really useful example of the power of our product.

So with the last few minutes here, I know we’re getting close to the time frame, but we do want to talk about the many options you have in terms of our scalable architecture. Whether you are small or mid-size organization, or very, very large organization, we have a way of delivering our product to you. It could be in a single standalone environment with a single database and single software installation, it could be as you grow and maybe you have various components of traffic that are disseminated globally, and you need local collection, we can do that. So, we can offer split off collectors or helper collectors that communicate up to a single master database or we can even do multi-site server, multi-database hierarchical architecture for really, really massively scaled organizations. So, no matter who you are, if you’re listening to this, if you’re just small organization with one site and a few devices, or a massively global corporation with thousands of devices and data traversing it in many different areas, we can fit your organization and we can architect a solution that is right for you.

We’ve got a number of exciting features coming. One of the great things about us is that we never stop developing and we never stop investigating the best things to add to the product. We’ve got some really cool enhancements on the way – all things that people have asked about or inquired about, or that we’ve decided to build on our own – and we love talking to our customers.

Our best source of future development is requests from our customers. So, anything you can think of – I can’t guarantee that our team will do it, but I can certainly guarantee that we’ll listen to you, think about it, and do our absolute best to solve whatever issue you may have. Because of our commitment to our customers and our willingness to listen to them, we really have built up a wonderful group of customers. You can see a few of their logos on the screen here – everything from traditional enterprises to service providers, educational institutions, and telcos. Whatever it may be, we can handle it. If you’re not already a customer of ours but you’re listening to this webinar, we’d certainly love to have your logo on this list in the future, and we feel that once you get to working with us and really get used to our product, you’ll be thrilled with how we do things, what we offer, and the support we provide.

So, with that I think we’re at the end of the presentation, almost exactly right on time here, about 30 minutes. So, I want to thank everyone for taking the time to join today, as always it does not look like we have… I’m just looking. Does not look like we have any questions right now, so, if you do have any now would be the time to type them in. But if not, we just want to thank you for joining us today. This presentation has been recorded and will be available to any of the folks who registered, and it’ll eventually make it up into the website. So, please check it out. Also please check out our website for other information about future webinars or other documentation that we have, there’s a lot of good resources up there and we invite you to take a look at those and certainly if you have any questions to reach out to us either to the sales team or the support or engineering team depending on what you’re interested in.

So, with that, I’ll end the session and I look forward to speaking with all of you at some point in the future.

Thanks.
