Archives

Category Archive for ‘Big Data’

Cyberwar Defense using Predictive AI Baselining

The world is bracing for global cyberwar as a result of current political events. Governments and hackers alike can carry out cyberattacks in an effort to destabilize economies and undermine democracy. Rather than launching attacks outright, state-funded cyber-warfare teams have spent years studying vulnerabilities.

An important transition has occurred: bad actors from hostile countries have emerged and must be taken seriously. The most dangerous players in this new cyber-warfare campaign are no longer hiding. Experts now believe that a nation-state could conduct increasingly sophisticated cyberattacks on national and commercial networks. Many countries are capable of conducting cyberattacks against other countries, and all parties appear to be prepared for cyber clashes.

So, how would cyberwarfare play out, and how can organizations defend against it?

The first step is to presume that your network has been penetrated or will be compromised soon, and that several attack routes will be employed to disrupt business continuity or vital infrastructure.

Denial-of-service (DoS/DDoS) attacks can spread widespread panic by overloading network infrastructure and network assets, whether servers, communication lines, or other critical technologies in a region, rendering them inoperable.

In 2021, ransomware became the most popular criminal tactic, but in 2022 state cyber-warfare teams are keen to use it for first strikes, propaganda, and military fundraising. It is only a matter of time before it escalates. Politically motivated attacks use ransomware tactics to encrypt computers and render them inoperable. Despite using publicly accessible ransomware code, this is now considered weaponized malware because there is little to no possibility that a decryption key will ever be released. Ransomware assaults by financially motivated criminals have a different objective, which must be identified before they cause financial and social damage, as detailed in a recent RANSOMWARE PAPER.

To win the cyberwar against either cyber extortion or cyberwarfare attacks, you must first have a complete 360-degree view of your network, with deep transparency and intelligent context to detect dangers within your data.

Given what we already know and the fact that more is continually being discovered, it makes sense to evaluate our one-of-a-kind integrated Predictive AI Baselining and Cyber Detection solution.

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are serious shortcomings in the base designs of other flow solutions that leave them unable to scale in retention.

This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence. From a forensics perspective, you can only analyze the data you retain, and with large and growing network and cloud data flows most tools (regardless of their marketing claims) cannot scale in retention and choose to drop records, keeping only what they believe is salient data.

Imputed outcome data leads to misleading results and missing data causes high risk and loss!


So how exactly do you go about defending your organization’s network and connected assets?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques, solving the problem of archiving huge flow feeds with the smallest footprint and the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property.

Additionally, ransomware and other cyber-attacks continue to impact businesses, so you need both machine learning and end-point threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple, yet it is potentially the most powerful tool for finding ransomware and other network and cloud issues. Every communication leaves its footprint in the flow data, and given the right tools you can retain all the evidence of an attack, infiltration, or exfiltration.

However, not all flow analytics solutions are created equal, and an inability to scale in retention makes the NetFlow ideal unattainable. For a recently discovered ransomware or trojan, such as “Wannacry”, it is helpful to see whether it has been active in the past and when it started.

Another important aspect is having the context to analyze all the related traffic, to identify concurrent exfiltration of an organization’s Intellectual Property, and to quantify and mitigate the risk. Threat hunting for RANSOMWARE requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. The hacker who has control of your system will likely install multiple backdoors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight Predictive AI Baselining analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight identifies your systems conversing with bad actors and allows you to backtrack through historical data to see how long it’s been going on.
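To make the idea concrete, here is a minimal sketch (not CySight’s actual engine) of matching retained flow records against a threat feed and backtracking to the earliest contact; the feed entries and flow records below are hypothetical:

```python
from datetime import datetime

# Hypothetical threat feed: a set of known bad-actor IPs. In practice this
# would be refreshed automatically from a threat-intelligence pipeline.
THREAT_FEED = {"203.0.113.7", "198.51.100.42"}

# Simplified retained flow records: (timestamp, src_ip, dst_ip, bytes).
flows = [
    ("2022-01-03T09:15:00", "10.0.0.5", "203.0.113.7", 1200),
    ("2022-02-14T22:40:00", "10.0.0.5", "203.0.113.7", 88000),
    ("2022-03-01T02:05:00", "10.0.0.9", "192.0.2.10", 300),
]

def matches(records, feed):
    """Return the flows whose destination appears in the threat feed."""
    return [f for f in records if f[2] in feed]

def first_seen(records, feed):
    """Backtrack through retained history to the earliest bad-actor contact."""
    hits = matches(records, feed)
    return min((datetime.fromisoformat(t) for t, *_ in hits), default=None)
```

The point of the sketch is the second function: it only works if the history is actually retained, which is why granular long-term retention matters.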

Summary

IdeaData’s CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs.

CySight’s Predictive AI Baselining, Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making threat intelligence, anomaly detection, forensics, compliance, performance analytics and IP accounting a breeze!

Let us help you today. Please schedule a time to meet https://calendly.com/cysight/

Advanced Predictive AI leveraging Granular Flow-Based Network Analytics.

IT’S WHAT YOU DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS.

Existing network management and network security point solutions are facing a major challenge due to the increasing complexity of the IT infrastructure.

The main issue is a lack of visibility into all aspects of physical network and cloud network usage, as well as increasing compliance, service level management, regulatory mandates, a rising level of sophistication in cybercrime, and increasing server virtualization.

With appropriate visibility and context, a variety of network issues can be resolved and handled by understanding the causes of network slowdowns and outages, detecting cyber-attacks and risky traffic, determining the origin and nature, and assessing the impact.

It’s clear that in today’s work-from-home, cyberwar, ransomware world, adequate network visibility is critical for an organization. But defining how much visibility counts as the “right” visibility is becoming more difficult, and more often than not even well-seasoned professionals make incorrect assumptions about the visibility they think they have. These misperceptions and malformed assumptions are far more common than you would expect, and you would be forgiven for thinking you have everything under control.

When it comes to resolving IT incidents and security risks and assessing the business impact, every minute counts. The primary goal of Predictive AI Baselining coupled with deep contextual Network Forensics is to improve the visibility of Network Traffic by removing network blindspots and identifying the sources and causes of high-impact traffic.

Inadequate solutions (even the most well-known) lull you into a false sense of comfort, but because they tend to retain only the top 2% or 5% of network communications, they frequently cause false positives and red herrings. Cyber threats can come from a variety of sources: new types of crawlers or botnets, infiltration, and ultimately exfiltration that can destroy a business.

Networks are becoming more complex. Negligence, such as failing to update and patch security holes, can open the door to malicious outsiders. Your network could be used to download or host illegal materials, or it could be used entirely or partially to launch an attack. Ransomware attacks are still on the rise, and new ways to infiltrate organizations are being discovered. Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks continue unabated, posing a significant risk to your organization. Insider threats can also arise from internal hacking or a breach of trust, and your intellectual property may be slowly leaked through negligence, hacking, or disgruntled employees.

Whether you are buying a phone, a laptop, or a cybersecurity visibility solution, the same rule applies: marketers are out for your hard-earned cash, flooding you with specifications and solutions whose abilities are radically overstated. Machine Learning (ML) and Artificial Intelligence (AI) are two of the most recent acronyms to join the parade. The only thing you can know for sure, dear cyber and network professional, is that they hold a lot of promise.

One thing I can tell you from many years of experience in building flow analytics, threat intelligence, and cyber security detection solutions is that without adequate data your results become skewed and misleading. Machine Learning and AI enable high-speed detection and mitigation but without Granular Analytics (aka Big Data) you won’t know what you don’t know and neither will your AI!

In our current Covid world we have all come to appreciate, in some way, just how quickly big data, ML, and AI, properly applied, can help mitigate a global health crisis. We only have to look back a few years, when drug companies didn’t have access to granular data, to see the severe impact that poor data had on people’s lives; Thalidomide is one example. In the same way, when cyber and network visibility solutions only surface-scrape data, the resulting information will be incorrect and misleading, and could seriously impact your network and the livelihoods of the people you work for and with.

The Red Pill or The Blue Pill?

The concept of flow or packet-based analytics is straightforward, yet they have the potential to be the most powerful tools for detecting ransomware and other network and cloud-related concerns. All communications leave a trail in the flow data, and with the correct tools, you can recover all evidence of an assault, penetration, or exfiltration.

Not all analytics systems are created equal, and the flow/packet ideal becomes unattainable for other tools because of their inability to scale in retention. Even well-known tools have serious flaws and are limited in their ability to retain complete records, which is often overlooked. They do not effectively cover the blindspots they claim to.

As already pointed out, over 95% of network and deep packet inspection (DPI) solutions struggle to retain even 2% to 5% of all data captured in medium to large networks, resulting in missed diagnoses and significantly misleading analytics that lead to misdiagnosis and risk!

It is critical to have the context and visibility necessary to assess all relevant traffic to discover concurrent intellectual property exfiltration and to quantify and mitigate the risk. It’s essential to determine whether a newly found Trojan or Ransomware has been active in the past and when it entered and what systems are still at risk.

Threat hunting demands multi-focal analysis at a granular level that sampling and surface flow analytics methods simply cannot provide. It is ineffective to be alerted to a potential threat without the context and consequence. The hacker who has gained control of your system is likely to install many backdoors on various interconnected systems to re-enter when you are unaware. As ransomware evolves, it will continue to exploit weaknesses in infrastructure.

Often those most vulnerable are those who believe they have the visibility to detect.

Network Matrix of Knowledge

Post-mortem analysis of incidents is required, as is the ability to analyze historical behaviors, investigate intrusion scenarios and potential data breaches, qualify internal threats from employee misuse, and quantify external threats from bad actors.

The ability to perform network forensics at a granular level enables an organization to discover issues and high-risk communications happening in real-time, or those that occur over a prolonged period such as data leaks. While standard security devices such as firewalls, intrusion detection systems, packet brokers or packet recorders may already be in place, they lack the ability to record and report on every network traffic transfer over the long term.
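As a rough illustration of why long-term granular retention matters, the sketch below (hypothetical records, not any product’s implementation) aggregates outbound volume per host pair across a long window, surfacing a slow leak that no single day’s traffic would reveal:

```python
from collections import defaultdict

# Hypothetical retained flow records: (day, src_ip, dst_ip, bytes_out).
# A slow leak moves modest amounts per day that only stand out when
# outbound volume is aggregated over a prolonged period.
flows = [
    ("day1", "10.0.0.8", "198.51.100.9", 4_000_000),
    ("day2", "10.0.0.8", "198.51.100.9", 4_200_000),
    ("day3", "10.0.0.8", "198.51.100.9", 3_900_000),
    ("day1", "10.0.0.3", "192.0.2.20", 50_000),
]

def outbound_totals(records):
    """Aggregate outbound bytes per (src, dst) pair across the whole window."""
    totals = defaultdict(int)
    for _, src, dst, nbytes in records:
        totals[(src, dst)] += nbytes
    return totals

def suspected_leaks(records, threshold):
    """Flag pairs whose cumulative outbound volume exceeds the threshold."""
    return [pair for pair, total in outbound_totals(records).items()
            if total > threshold]
```

Firewalls and IDS devices that only see each day in isolation would miss this pattern; it only emerges from records retained over the full period.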

According to industry analysts, enterprise IT security necessitates a shift away from prevention-centric security strategies and toward information- and end-user-centric strategies focused on an infrastructure’s endpoints, as advanced targeted attacks are poised to render prevention-centric strategies obsolete, and today cyberwar is a reality that will impact business and government alike.

As every incident response action in today’s connected world includes a communications component, using an integrated cyber and network intelligence approach provides a superior and cost-effective way to significantly reduce the Mean Time To Know (MTTK) for a wide range of network issues or risky traffic, reducing wasted effort and associated direct and indirect costs.

Understanding the Shift Towards Flow-Based Metadata for Network and Cloud Cyber-Intelligence

  • The IT infrastructure is continually growing in complexity.
  • Deploying packet capture across an organization is costly and prohibitive especially when distributed or per segment.
  • “Blocking & tackling” (Prevention) has become the least effective measure.
  • Advanced targeted attacks are rendering prevention‑centric security strategies obsolete.
  • There is a Trend towards information and end‑user centric security strategies focused on an infrastructure’s end‑points.
  • Without making use of collective sharing of threat and attacker intelligence you will not be able to defend your business.

So what now?

If prevention isn’t working, what can IT still do about it?

  • In most cases, information must become the focal point of our information-security strategies. IT can no longer impose invasive controls on users’ devices or the services they utilize.

Is there a way for organizations to gain a clear picture of what transpired after a security breach?

  • Detailed monitoring and recording of interactions with content and systems. Predictive AI Baselining, Granular Forensics, Anomaly Detection and Threat Intelligence ability is needed to quickly identify what other users were targeted, what systems were potentially compromised and what information was exfiltrated.

How do you identify attacks without signature-based mechanisms?

  • Pervasive monitoring enables you to identify meaningful deviations from normal behavior to infer malicious intent. Nefarious traffic can be identified by correlating real-time threat feeds with current flows. Machine learning can be used to discover outliers and repeat offenders.

Summing up

Network security and network monitoring have gone a long way and jumped through all kinds of hoops to reach the point they have today. Unfortunately, through the years, cyber marketing has surpassed cyber solutions and we now have misconceptions that can do considerable damage to an organization.

The biggest threat is always the one you cannot see and hits you the hardest once it has established itself slowly and comfortably in a network undetected. Complete visibility can only be accessed through 100% collection and retention of all data traversing a network, otherwise even a single blindspot will affect the entire organization as if it were never protected to begin with. Just like a single weak link in a chain, cyber criminals will find the perfect access point for penetration.

Inadequate solutions that only retain the top 2% or 5% of network communications frequently cause false positives and red herrings. You need to have 100% access to your comms data for Full Visibility, but how can you be sure that you will?

You need free access to Full Visibility to unlock all your data, and an Intelligent Predictive AI technology that can autonomously and quickly identify what’s not normal at both the macro and micro level of your network, cloud, servers, IoT devices, and other network-connected assets.

Get complete visibility with CySight now –>>>

5 Ways Flow Based Network Monitoring Solutions Need to Scale

Partial Truth Only Results in Assumptions

A common gripe among Network Engineers is that their current network monitoring solution doesn’t provide the depth of information needed to quickly ascertain the true cause of a network issue. Imagine reading a book that is missing four out of every six words: understanding the context would be hopeless, and the book would have next to no value. Many engineers have already over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons, or by relying on disparate systems that often don’t interface well with each other. There is also an often-mistaken belief that the network monitoring solutions they have invested in will suddenly give them the depth they need to have the required visibility to manage complex networks.

A best-value approach to NDR, NTA and general network monitoring is to use a flow-based analytics methodology such as NetFlow, sFlow or IPFIX.

The Misconception & What Really Matters

In this market, it’s common for the industry to express a flow software’s scaling capability in flows-per-second. Using flows-per-second as a guide to scalability is misleading: it is often used to hide a flow collector’s inability to archive flow data by overstating its collection capability, and it lets vendors present a larger number by counting in seconds instead of minutes. It’s important to look not only at flows-per-second but at the picture created once all the elements are used together. Much like a painting of a detailed landscape, the finer the brush and the more colors used, the more complete and truly detailed the final picture will be.

Granularity is the prime factor to focus on, specifically the granularity retained per minute (flow retention rate). Naturally, speed impediment is a significant and critical factor to be aware of as well. The speed and flexibility of alerting, reporting, forensic depth, and diagnostics all play a strategic role but will be hampered when confronted with scalability limitations. Observing the behavior under high flow variance or sudden bursts, and considering the number of devices and interfaces, makes clear the absolute significance of scalability in producing actionable insights and analytics. Without it, the ability to retain short-term and historical collections, which provide vital trackback information, would be nonexistent. To provide the visibility needed for the ever-growing number of tasks analysts and engineers must deal with daily, and to resolve issues to completion, NDR, NTA, and general Network Monitoring System (NMS) solutions must be able to scale at every level of consumption and retention.
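A back-of-the-envelope sketch shows why headline flows-per-second misleads without the retention rate; the rates and retention ratios below are assumed purely for illustration:

```python
def retained_per_minute(flows_per_second, retention_ratio):
    """Flows actually archived per minute, given an ingestion rate and the
    fraction of records the collector retains rather than drops."""
    return int(flows_per_second * 60 * retention_ratio)

# A collector advertising 50,000 flows/sec ingestion but retaining only 5%
# archives fewer records than one ingesting a tenth of that and keeping all.
headline = retained_per_minute(50_000, 0.05)  # 150,000 flows/min retained
modest = retained_per_minute(5_000, 1.00)     # 300,000 flows/min retained
```

The smaller headline number wins on the metric that actually determines forensic depth, which is why retention per minute, not ingestion per second, is the figure to ask for.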

How Should Monitoring Solutions Scale?

Flow-Based Network Detection and Response (NDR) / Network Traffic Analysis (NTA) software needs to scale in its collection of data in five ways:

Ingestion Capability – Also referred to as Collection: the number of flows that can be consumed by a single collector. This is a feat most monitoring solutions can accomplish; unfortunately, it is also the one they pride themselves on. It is an important ability, but only the first of several crucial capabilities that determine the quality of insights and intelligence of a monitoring system. Ingestion is only the ability to take data in; it does not mean “retention”, and on its own it can do very little.

Digestion Capability – Also referred to as Retention, means the number of flow records that can be retained by a single collector. The most overlooked and difficult step in the network monitoring world. Digestion / Flow retention rates are particularly critical to quantify as they dictate the level of granularity that allows a flow-based NMS to deliver the visibility required to achieve quality Predictive AI Baselining, Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis, and Data Retention compliance. Without retaining data, you cannot inspect it beyond the surface level, losing the value of network or cloud visibility.

Multitasking Processes – Pertains to the multitasking strength of a solution: its ability to scale and spread the load of collection processes across multiple CPUs on a single server. This seems like an obvious approach to collection, but many systems have taken a linear, serial approach to ingesting multiple streams of flow data that doesn’t allow their technologies to scale when new flow-generating devices, interfaces, or endpoints are added, forcing you to deploy multiple instances of a solution, which becomes ineffective and expensive.

Clustered Collection – Refers to the ability of a flow-based solution to run a single data warehouse that takes its input from a cluster of collectors as a single unit, as a means of load balancing. In a large environment, very large equipment sends massive amounts of data to collectors. To handle it all, you must distribute the load across a cluster of collectors on multiple machines rather than overloading a single machine. This ability enables organizations to scale up their use of data instead of dropping it as they attempt to collect it.

Hierarchical Correlation – The purpose of hierarchical correlation is to take information from multiple databases and aggregate it into a single Super SIEM. With the need to consume and retain huge amounts of data comes the need to manage and oversee that data in an intelligent way. Hierarchical correlation is designed to enable parallel analytics across distributed data warehouses and aggregate their results. In the field of network monitoring, getting overwhelmed with data to the point where you cannot find what you need is as useful as being given all the books in the world and asked a single question that is answered in only one of them.
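A toy sketch of the hierarchical correlation step (the per-collector aggregates are hypothetical; real deployments merge far richer data than top-talker byte counts):

```python
from collections import Counter

# Hypothetical partial results from two distributed collectors,
# each reporting top talkers as {ip: bytes}.
collector_a = {"10.0.0.5": 9_000_000, "10.0.0.8": 2_000_000}
collector_b = {"10.0.0.5": 4_000_000, "10.0.0.3": 6_000_000}

def correlate(*collector_results):
    """Merge partial per-collector aggregates into a single global view."""
    merged = Counter()
    for result in collector_results:
        merged.update(result)  # sums byte counts per IP across collectors
    return merged

top_talkers = correlate(collector_a, collector_b).most_common(1)
```

A host that is only a middling talker at each individual collector can be the top talker globally, which is exactly the insight that correlation across data warehouses is meant to surface.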

Network traffic visibility is considerably improved by reducing network blindspots and qualifying the sources and causes of communications that impair business continuity. The capacity to capture flow at a finer level allows for new Predictive AI Baselining and Machine Learning application analysis and risk mitigation.

There are many critical abilities that a network monitoring solution must offer its users, and all of them are affected by whether or not the solution can scale.

Visibility is a range, not binary: the question is not whether you have visibility, but whether you have enough of it to achieve your goals and keep your organization productive and safe.

How to Use a Network Behavior Analysis Tool to Your Advantage


Cybersecurity threats can come in many forms. They can easily slip through your network’s defenses if you let your guard down, even for a second. Protect your business by leveraging network behavior analysis (NBA). Implementing behavioral analysis tools helps organizations detect suspicious activities within their networks before damage is done and limit the impact when an attack does get through.

According to Accenture, improving network security is the top priority for most companies in 2021. In fact, the majority of them have increased their spending on network security by more than 25% in recent months.

With that, here are some ways to use network behavior anomaly detection tools to your advantage.

1. Leverage artificial intelligence

Nowadays, you can easily leverage artificial intelligence (AI) and machine learning (ML) in your network monitoring. In fact, various software systems utilize AI diagnostics to enhance the detection of anomalies within your network. Through dynamic machine learning, they can quickly learn to differentiate between normal and suspicious activities.

AI-powered NBA software can continuously adapt to new threats and discover outliers without much interference from you. This way, it can provide early warning on potential cyberattacks before they get serious. These can include DDoS attacks, advanced persistent threats, and anomalous traffic.
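One way such continuous adaptation can work is an exponentially weighted moving average baseline; this is an illustrative assumption, not any vendor’s actual algorithm:

```python
def ewma_alerts(samples, alpha=0.3, factor=3.0):
    """Adaptive baseline via an exponentially weighted moving average.

    The baseline drifts with normal traffic (so gradual change is absorbed),
    while a sample far above the current baseline raises an alert and is
    excluded from the baseline so it doesn't poison it.
    Returns the indices of alerting samples.
    """
    baseline = samples[0]
    alerts = []
    for i, x in enumerate(samples[1:], start=1):
        if x > factor * baseline:
            alerts.append(i)  # spike: alert, don't learn from it
        else:
            baseline = (1 - alpha) * baseline + alpha * x  # adapt
    return alerts
```

For a traffic series like `[100, 105, 98, 110, 500, 102]` the sudden burst is flagged while ordinary variation passes quietly, which is the early-warning behavior described above.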

Hence, you should consider AI diagnostics one of your key criteria when evaluating network behavior analysis tools.

2. Take advantage of its automation

One of the biggest benefits of a network anomaly detection program is helping you save time and labor in detecting and resolving network issues. It is constantly watching your network, collecting data, and analyzing activities within it. It will then notify you and your network administrators of any threats or anomalies within your network.

Moreover, it can automatically mitigate some security threats from rogue applications to prevent sudden downtime. It can also eliminate blind spots within your network security, strengthening your defenses and visibility. As a result, you or your administrators can passively qualify and detect risky network traffic.

3. Utilize NBA data and analytics

As more businesses become data-driven, big data gains momentum. It can aid your marketing teams in designing better campaigns or your sales team in increasing your business’ revenues. And through network behavior analysis, you can deep-mine large volumes of data from day-to-day operations.

For security engineers, big data analytics has become an effective defense against network attacks and vulnerabilities. It can give them deeper visibility into increasingly complex and larger network systems. 

Today’s advanced analytics platforms are designed to handle and process larger volumes of data. Furthermore, these platforms can learn and evolve from such data, resulting in stronger network behavior analytics and local threat detection.

4. Optimize network anomaly detection

A common issue with network monitoring solutions is their tendency to overburden network and security managers with false-positive readings. This is due to the lack of in-depth information to confirm the actual cause of a network issue. Hence, it is important to consistently optimize your network behavior analysis tool.

One way to do this is to use a flow-based analytics methodology for your network monitoring. You can do so with software like CySight, which uses artificial intelligence to analyze, segment, and learn from granular telemetry from your network infrastructure flows in real-time. It also enables you to configure and fine-tune your network behavior analysis for more accurate and in-depth monitoring.

5. Integrate with other security solutions

Enhance your experience with your network behavior analytics tool by integrating it with your existing security solutions, such as intrusion prevention systems (IPS), firewalls, and more.

Through integrations, you can cross-analyze data between security tools for better visibility and more in-depth insights into your network safety. Having several security systems working together means one can detect or mitigate behaviors that are undetectable by the other. This also ensures you cover all the bases and leave no room for vulnerabilities in your network.

Improving network security

As your business strives toward total digital transformation, you need to start investing in your network security. Threats can come in many forms, and once one slips past your guard, it might be too late.

Network behavior analysis can help fortify your network security. It constantly monitors your network and traffic and notifies you of any suspicious activities or changes. This way, you can immediately mitigate any potential issues before they can get out of hand. Check out CySight to know more about the benefits of network behavior analysis.

But, of course, a tool can only be as good as the people using it. Hence, you must make sure that you hire the right people for your network security team. Consider recruiting someone with an online software engineering master’s degree to help you strengthen your network.


Ref: Accenture Report

How NetFlow Solves for Mandatory Data Retention Compliance

Compliance in IT is not new, and laws regulating how organizations should manage their customer data already exist, such as HIPAA, PCI, and SCADA, and network transaction logging is beginning to be required of businesses. Insurance companies are gearing up to qualify businesses by the information they retain to protect their services and customer information. Government and industry regulations and enforcement are becoming increasingly stringent.

Most recently many countries have begun to implement Mandatory Data Retention laws for telecom service providers.

Governments require a mandatory data retention scheme because more and more crime is moving online from the physical world, while ISPs are keeping less data and retaining it for a shorter time. This negatively impacts the investigative capabilities of law enforcement and security agencies that need timely information to help save lives by spotting lone-wolf terrorists early or protecting vulnerable members of society from abuse by sexual predators, ransomware, or other online crimes.

Although there is no doubt as to the value of mandatory data retention schemes, they are not without justifiable privacy, human-rights, and cost concerns.

Implementation takes cash, time, and skills that many ISPs and companies simply cannot afford. Internet and managed service providers and large organizations must take proper precautions to remain in compliance. Heavy fines, license and certification issues, and other penalties can result from non-compliance with mandatory data retention requirements.

According to the Australian Attorney-General’s Department, Australian telecommunications companies must keep a limited set of metadata for two years. Metadata is information about a communication (the who, when, where and how)—not the content or substance of a communication (the what).

A commentator from the Sydney Morning Herald observed, "…Security, intelligence and law enforcement access to metadata which overrides personal privacy is now in contention worldwide…" and speculated that with the introduction of Australian metadata laws, "…this country's entire communications industry will be turned into a surveillance and monitoring arm of at least 21 agencies of executive government…".

In Australia, many smaller ISPs fear that failing to comply will put them out of business. Internet Australia's Laurie Patton said, "It's such a complicated and fundamentally flawed piece of legislation that there are hundreds of ISPs out there that are still struggling to understand what they've got to do".

As for the anticipated costs, a survey sent to ISPs by telecommunications industry lobby group Communications Alliance found that "There is a huge variance in estimates for the cost to business of implementing data retention – 58 per cent of ISPs say it will cost between $10,000 and $250,000; 24 per cent estimate it will cost over $250,000; 12 per cent think it will cost over $1,000,000; some estimates go as high as $10 million."

An important cost to consider in compliance is the ease of data reporting when government or corporate compliance teams request information for a specific IPv4 or IPv6 address. If the data is stored in a data warehouse that is difficult to filter, the service provider may incur penalties or be seen as non-compliant. Flexible filtering and automated reporting are therefore critical to produce the forensics required for compliance in a timely and cost-effective manner.
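As a sketch of the kind of filtering a compliance request demands, the following hypothetical Python snippet pulls every retained flow touching a requested address within a request's date window. The archive, field names and addresses are all illustrative; a real deployment would query the collector's data warehouse rather than an in-memory list.

```python
from datetime import datetime

# Hypothetical flow archive: in practice this would be a warehouse query,
# not an in-memory list.
flows = [
    {"ts": datetime(2023, 5, 1, 9, 15), "src": "203.0.113.7", "dst": "198.51.100.2", "bytes": 4096},
    {"ts": datetime(2023, 5, 2, 14, 0), "src": "192.0.2.10",  "dst": "203.0.113.7",  "bytes": 1200},
    {"ts": datetime(2023, 5, 9, 23, 5), "src": "192.0.2.10",  "dst": "198.51.100.9", "bytes": 800},
]

def compliance_report(flows, target_ip, start, end):
    """Return every retained flow touching target_ip in the request window."""
    return [
        f for f in flows
        if start <= f["ts"] <= end and target_ip in (f["src"], f["dst"])
    ]

hits = compliance_report(flows, "203.0.113.7",
                         datetime(2023, 5, 1), datetime(2023, 5, 7))
# Two of the three archived flows involve the requested address in the window.
```

The speed of this filter over months of data is exactly where warehouse design and indexing determine whether a provider can respond in time.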

Although different laws govern different countries, the main requirement of mandatory data retention laws at ISPs is to maintain sufficient information at a granular level to assist governments in finding bad actors such as terrorists, corporate spies, ransomware operators and pedophiles. In some countries this means telcos are required to keep data on the IP addresses users connect to for up to 10 weeks; in others, just the totals of subscriber usage for each IP used, for up to 2 years.

Although information remains local to each country and governed by relevant privacy laws, the benefits to law enforcement will eventually include the visibility to track relayed traffic, such as communications from Tor browsers, onion routers and Freenet, beyond their relay and exit nodes.

There is no doubt in my mind that, with heightened states of security and increasing online crime, there is a global need for governments to intervene with online surveillance: to protect children from exploitation, reduce terrorism and build defensible infrastructures. At the same time, data retention systems need the inbuilt smarts to balance compliance against privacy, rather than acting as a blanket catch-all. For the Internet communications component, a NetFlow-based solution already exists that helps ISPs comply quickly and at low cost, while allowing data retention rules to be implemented in a way that limits intrusion on an individual's privacy.

NetFlow solutions are cheap to deploy and, unlike packet analyzers, do not need to be deployed at every interface. They can use the existing router, switch or firewall investment to provide continuous network monitoring across the enterprise, giving the service provider or organization powerful tools for data retention compliance.

NetFlow technology, if sufficiently scalable, granular and flexible, can deliver the visibility, accountability and measurability required for data retention, because it can include features that:

  • Supply a real-time look at network and host-based activities down to the individual user and device;
  • Increase user accountability for introducing security risks that impact the entire network;
  • Track, measure and prioritize network risks to reduce Mean Time to Know (MTTK) and Mean Time to Repair or Resolve (MTTR);
  • Deliver the data IT staff needs to engage in in-depth forensic analysis related to security events and official requests;
  • Seamlessly extend network and security monitoring to virtual environments;
  • Assist IT departments in maintaining network up-time and performance, including mission critical applications and software necessary to business process integrity;
  • Assess and enhance the efficacy of traditional security controls already in place, including firewalls and intrusion detection systems;
  • Capture and archive flows for complete data retention compliance

Compared to other analysis solutions, NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can provide a comprehensive landscape of tools to help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows per second a flawed way to measure a netflow collector’s capability?

Flows-per-second is often considered the primary yardstick for measuring a NetFlow analyzer's flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means of providing network professionals with the data to make sense of the traffic on their network, without having to resort to expensive per-segment packet sniffing tools.

A flow record contains the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap or other network gateway. A typical flow record will contain at minimum: source IP, destination IP, source port, destination port, protocol, ToS, ingress interface and egress interface. Flow records are exported to a flow collector, where they are ingested and information relevant to the engineer's purposes is displayed.
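A minimal flow record can be sketched as a simple structure holding the fields listed above. The class and field names here are illustrative; real NetFlow v5/v9/IPFIX records carry additional fields such as timestamps, packet counts and TCP flags.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int        # e.g. 6 = TCP, 17 = UDP
    tos: int             # Type of Service byte
    ingress_if: int      # SNMP index of the input interface
    egress_if: int       # SNMP index of the output interface

# One HTTPS conversation seen at the exporter (illustrative values).
rec = FlowRecord("192.0.2.10", "198.51.100.5", 51514, 443, 6, 0, 2, 4)
```

It is this small key of fields that lets an exporter aggregate millions of packets into a far smaller stream of flow records.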

Measurement

Measurement has always been how the IT industry expresses power and competency. However, a formula used to reflect power and ability changes when a technology design undergoes a paradigm shift.

For example, to express how fast a computer was, we used to measure the CPU clock speed, believing that the higher the clock speed, the more powerful the computer. However, when multi-core chips were introduced, clock speeds dropped while CPUs in fact became more powerful. The primary clock-speed measurement became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading: it does not accurately reflect the real power and capability of a flow collector to capture and process flow data, and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure, and harder still to use to quantify a product's scalability. Various factors can dramatically impact the ability to collect flows, and to retain sufficient flows to perform higher-end diagnostics.

It's important to look not just at flows-per-second, but at the granularity retained per minute (flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; the scalability under high flow variance, sudden bursts, and growing numbers of devices and interfaces; the speed of reporting over time; the ability to retain short-term and historical collections; and the confluence of these factors as it pertains to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine as appropriate granularity is needed to deliver the visibility required to perform Anomaly Detection, Network Forensics, Root Cause Analysis, Billing substantiation, Peering Analysis and Data retention compliance.

The higher the flows-per-second and the flow-variance the more challenging it becomes to achieve a high flow-retention-rate to archive and retain flow records in a data warehouse.

A vendor's capability statement might reflect a high flows-per-second consumption ability, but many flow software tools have retention rate limitations by design.

It can mean that, irrespective of achieving a high flow collection rate, the NetFlow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the Top 10 bandwidth abusers. NetFlow products of this kind can be easily identified because they tend to offer benefits oriented primarily toward identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course a very important benefit of a NetFlow analyzer. However, it is of marginal benefit today, when a large amount of the abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar of many NetFlow analysis products. Many abuses like DDoS, p2p, botnets and hacker or insider data exfiltration continue to occur, and can at minimum impact the networking equipment and user experience. The inability to quantify and understand small flows creates great risk, leaving organizations exposed.
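The effect of a top-N retention design can be illustrated with a toy example (names and numbers invented): sorting by bytes and keeping only the biggest flows silently discards the many small flows where abuse often hides.

```python
# A collector that archives only the top 3 flows by bytes discards the many
# tiny flows that characterize botnet chatter or low-and-slow exfiltration,
# even though they dominate by flow count.
flows = [("bulk-transfer", 9_000_000), ("backup", 5_000_000), ("video", 2_000_000)]
flows += [(f"bot-{i}", 400) for i in range(50)]   # many tiny suspicious flows

TOP_N = 3
retained = sorted(flows, key=lambda f: f[1], reverse=True)[:TOP_N]
dropped = len(flows) - len(retained)

# All 50 small "bot" flows fall beneath the retention cut-off.
```

Here the archive looks complete from a bandwidth-reporting perspective, yet it contains zero evidence of the suspicious small-flow activity.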

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product's ability to collect and retain the critical information required in today's world, where copious data has created severe network blind spots.

To determine whether a tool is really fit for purpose, you need to know more about the flows-per-second collection formula the vendor provides, and some deeper investigation should be carried out to qualify the claims.

 

With this in mind here are 3 key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

  • Qualify whether the flows-per-second rate provided is a burst rate or a sustained rate.
  • Ask how the collection and retention rates might be affected if the flows have high flow variance (e.g. a DDoS attack).
  • Ask how collection, archiving and reporting are impacted when flow variance is increased by adding many devices, interfaces and distinct IPv4/IPv6 conversations, and test what degradation in speed you can expect after the system has been recording for some time.
  • Ask how the collection and retention rates might change if additional fields or measurements are added to the flow template (e.g. MPLS, MAC address, URL, latency).

  2. How many flow records can be retained per minute?

  • Ask how the actual number of records inserted into the data warehouse per minute can be verified for short-term and historical collection.
  • Ask what happens to the flows that were not retained.
  • Ask what the flow retention logic is (e.g. Top Bytes, First N).

  3. What information granularity is retained, both short-term and historically?

  • Does the data's time granularity degrade as the data ages (e.g. 1 day retained per minute, 2 days per hour, 5 days per quarter)?
  • Can you control the granularity, and if so, for how long?
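The granularity question above can be illustrated with a hypothetical roll-up: per-minute counters older than the short-term window are aggregated into per-hour buckets, trading detail for storage. The data and policy here are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Per-minute byte counters as they might sit in short-term storage.
minute_counters = {
    datetime(2023, 5, 1, 10, 1): 1000,
    datetime(2023, 5, 1, 10, 30): 2500,
    datetime(2023, 5, 1, 11, 5): 700,
}

def roll_up_to_hour(counters):
    """Aggregate per-minute counters into per-hour buckets (ageing policy)."""
    hourly = defaultdict(int)
    for ts, count in counters.items():
        hourly[ts.replace(minute=0, second=0, microsecond=0)] += count
    return dict(hourly)

hourly = roll_up_to_hour(minute_counters)
# The 10:00 bucket holds 3500 bytes; the 11:00 bucket holds 700.
```

Once rolled up, the minute-level detail is gone for good, which is exactly why you should ask whether, and for how long, you can control the retention granularity.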

 

Remember – Rate of collection does not translate to information retention.

Do you know what's really stored in the software's database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that information retention granularity that provides a flow product's benefits.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Big Data – A Global Approach To Local Threat Detection

From helping prevent loss of life in the event of a natural disaster, to aiding marketing teams in designing more targeted strategies to reach new customers, big data seems to be the chief talking point amongst a broad and diverse circle of professionals.

For Security Engineers, big data analytics is proving to be an effective defense against evolving network intrusions, thanks to the delivery of near real-time insights based on high volumes of diverse network data. This is largely due to technological advances that have made it possible to transmit, capture, store and analyze swathes of data through high-powered and relatively low-cost computing systems.

In this blog, we’ll take a look at how big data is bringing deeper visibility to security teams as environments increase in complexity and our reliance on pervading network systems intensifies.

Big data analysis is providing answers to the data deluge dilemma

Large environments generate gigabytes of raw user, application and device metrics by the minute, leaving security teams stranded in a deluge of data. Placing them further on the back foot is the need to sift through this data, which involves considerable resources that at best only provide a retrospective view on security breaches.

Big data offers a solution to the issue of “too much data too fast” through the rapid analysis of swathes of disparate metrics through advanced and evolving analytical platforms. The result is actionable security intelligence, based on comprehensive datasets, presented in an easy-to-consume format that not only provides historic views of network events, but enables security teams to better anticipate threats as they evolve.

In addition, big data’s ability to facilitate more accurate predictions on future events is a strong motivating factor for the adoption of the discipline within the context of information security.

Leveraging big data to build the secure networks of tomorrow

As new technologies arrive on the scene, they introduce businesses to new opportunities – and vulnerabilities. However, the application of Predictive AI Baselining analytics to network security in the context of the evolving network is helping to build the secure, stable and predictable networks of tomorrow. Detecting modern, more advanced threats requires big data capabilities from incumbent intrusion prevention and detection (IDS/IPS) solutions to distinguish normal traffic from potential threats.

By contextualizing diverse sets of data, Security Engineers can more effectively detect stealthily designed threats that traditional monitoring methodologies often fail to pick up. For example, Advanced Persistent Threats (APT) are notorious for their ability to go undetected by masking themselves as day-to-day network traffic. These low visibility attacks can occur over long periods of time and on separate devices, making them difficult to detect since no discernible patterns arise from their activities through the lens of traditional monitoring systems.

Big data Predictive AI Baselining analytics lifts the veil on threats that operate under the radar of traditional signature and log-based security solutions by contextualizing traffic and giving NOCs a deeper understanding of the data that traverses the wire.

Gartner states that, “Big data Predictive AI Baselining analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises.”  It also eliminates the siloed approach to security monitoring by converging network traffic and organizing it in a central data repository for analysis; resulting in much needed granularity for effective intrusion detection, prevention and security forensics.

In addition, Predictive AI Baselining analytics eliminates barriers to internal collaborations between Network, Security and Performance Engineers by further contextualizing network data that traditionally acted as separate pieces of a very large puzzle.

So is big data Predictive AI Baselining analytics the future of network monitoring?

In a way, NOC teams have been using big data long before the discipline went mainstream. Large networks have always produced high volumes of data at high speeds – only now, that influx has intensified exponentially.

Thankfully, with the rapid evolution of computing power at relatively low cost, the possibilities of what our data can tell us about our networks are becoming more apparent.

The timing couldn't have been more appropriate, since traditional perimeter-based IDS/IPS no longer meet the demands of modern networks that span vast geographical areas with multiple entry points.

In the age of cloud, mobility, ubiquitous Internet and the ever-expanding enterprise environment, big data capabilities will and should become an intrinsic part of virtually every security apparatus.


NetFlow for Usage-Based Billing and Peering Analysis

Usage-based billing refers to methods of calculating and passing back the costs of running a network to the consumers of the data that flows through it. Both Internet Service Providers (ISPs) and corporations need usage-based billing, with different billing models.

NetFlow is the ideal technology for usage-based billing because it captures all transactional information pertaining to usage, and some smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.

Advances in telecommunication technology have enabled ISPs to offer more convenient, streamlined billing options to customers based on bandwidth usage.

One billing model, used most commonly by ISPs in the USA, is known as the 95th percentile. Traffic is typically measured in five-minute samples over the course of a month; the ISP then disregards the highest 5% of samples and bills based on the highest remaining sample. This is an advantage to data consumers who have bursts of traffic, because they are not financially penalized for exceeding a traffic threshold for brief periods of time.
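The calculation itself is straightforward; this sketch (with invented sample values) shows how a month of five-minute samples reduces to a billable rate.

```python
import math

def percentile_95(samples_mbps):
    """Bill at the highest sample remaining after the top 5% are discarded."""
    ordered = sorted(samples_mbps)
    index = math.ceil(len(ordered) * 0.95) - 1
    return ordered[index]

# 100 five-minute samples: a steady 10 Mbps with five short 400 Mbps bursts.
samples = [10.0] * 95 + [400.0] * 5
billable = percentile_95(samples)
# The five burst samples are exactly the top 5%, so the customer is
# billed at 10 Mbps despite peaking at 400 Mbps.
```

In production the sample set covers roughly 8,640 five-minute intervals per month, but the percentile logic is the same.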

The disadvantage of the 95th percentile model is that it is not a sustainable business model as data continues to become a utility, like electricity.

A second approach is a utility-based, metered billing model that involves keeping a tally of all bytes consumed by a customer, with some knowledge of the data path to allow for premium or free traffic plans.

Metered Internet usage is used in countries like Australia and, most recently, Canada, which have nationally moved away from a 95th percentile model. This approach is also very popular in corporations whose business units share common network infrastructure and are unwilling to accept a "per user" cost, preferring a real consumption-based cost.
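A metered model with free zones can be sketched as a simple tally that skips bytes exchanged with unmetered prefixes. The prefix, addresses and volumes below are illustrative.

```python
import ipaddress

# Hypothetical unmetered "free zone" prefixes, e.g. on-net content servers.
FREE_ZONES = [ipaddress.ip_network("198.51.100.0/24")]

def billable_bytes(flows):
    """flows: iterable of (remote_ip, byte_count) tuples for one customer."""
    total = 0
    for remote_ip, nbytes in flows:
        addr = ipaddress.ip_address(remote_ip)
        if not any(addr in zone for zone in FREE_ZONES):
            total += nbytes
    return total

usage = billable_bytes([
    ("203.0.113.50", 5_000_000),   # off-net traffic: metered
    ("198.51.100.7", 90_000_000),  # free-zone video content: unmetered
])
# Only the 5 MB of off-net traffic counts toward the customer's bill.
```

This is where the flow record's knowledge of the data path matters: without per-conversation remote addresses, free-zone traffic cannot be separated from metered traffic.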

Benefits of usage-based billing are:

  • Improved transparency about the cost of services;
  • Costs feedback to the originator;
  • Raised cost sensitivity;
  • Good basis for active cost management;
  • The basis for Internal and external benchmarking;
  • Clear substantiation to increase bandwidth costs;
  • Shared infrastructure costs can also be based on consumption;
  • Network performance improvements.

For corporations, usage-based billing enables the IT department to become a shared service and viewed as a profit center rather than a cost center. It can become viewed as something that’s a benefit and a catalyst for business growth rather than a necessary but expensive line item in the budget.

For ISPs in the USA, there is no doubt that a utility-based cost-per-byte model will remain contentious as video and TV over the Internet increase usage. In other regions, new business models that package video into unmetered "free zone" services have become popular, shifting the cost of premium content provision onto the content provider and making utility billing viable in the USA.

NetFlow tools can include methods for building billing reports and offer a variety of usage-based billing model calculations.

Some NetFlow tools even include an API that allows the chart of accounts to be retained in, and driven from, traditional accounting systems, using the NetFlow system to focus on the tallying. Grouping algorithms should be flexible within the solution, allowing grouping by different variables such as interfaces, applications, Quality of Service (QoS), MAC addresses, MPLS, and IP groups. For ISPs and large corporations, Autonomous System Numbers (ASNs) also allow for analysis of data paths, enabling sensible negotiations with peering partners and content partners.

Look out for more discussion on peering in an upcoming blog…


How to counter-punch botnets, viruses, ToR & more with Netflow [Pt 1]

You can’t secure what you can’t see and you don’t know what you don’t know.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices, such as firewalls and intrusion detection systems. However, they quickly discover these devices' limitations: they are not designed to record and report on every transaction, lacking the granularity, scalability and historical data retention to do so. Network devices like routers, switches, Wi-Fi controllers or VMware servers also typically lack any sophisticated anti-virus software.

The mark of a well-constructed traffic analyzer is presenting information in a way that enables security teams to act quickly: simple views backed by deep contextual data supporting the summaries, so teams are not bogged down by detail unless required, plus elegant means of extracting forensics, with simple but powerful visuals that enable a quick grasp of the context and impact of a security event.

Using NetFlow Correlation to Detect intrusions  

Host Reputation is one of the best detection methods that can be used against Advanced Persistent Threats. There are many data sources to choose from and some are more comprehensive than others.

Today these blacklists are mostly IPv4- and domain-oriented, designed to be used primarily by firewalls, network intrusion systems and antivirus software.

They can also be used very successfully in NetFlow systems, as long as the selected flow technology can scale to support the thousands of known compromised end-points, frequently update the threat data, and record the full detail of every compromised flow and the subsequent conversations with compromised systems, in order to discover other related breaches that may have occurred or are occurring.

According to Mike Schiffman at Cisco,

“If a given IP address is known to be that of a spammer or a part of a botnet army it can be flagged in one of the ill repute databases … Since these databases are all keyed on IP address, NetFlow data can be correlated against them and subsequent malicious traffic patterns can be observed, blocked, or flagged for further action. This is NetFlow Correlation.“

The kind of data we can expect to find in reputation databases is IP addresses known to be acting in some malicious or negative manner, such as being seen by multiple global honeypots. Some have been identified as part of well-known botnets such as Palevo or Zeus, while other IPs are known to have been distributing malware or trojans. Many kinds of lists can be useful to correlate, such as known ToR exit points or relays, which have become particularly risky of late as a common means of introducing ransomware, and which should certainly not be seen conversing with any host in a corporate, government or other sensitive environment.
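At its core, NetFlow correlation is a membership test of each flow's endpoints against the reputation data. A minimal sketch follows; the IP addresses are invented, and a real deployment would stream flow records through the check rather than hold them in a list.

```python
# Reputation blacklist, e.g. known botnet C2 hosts or ToR exit nodes.
BLACKLIST = {"198.51.100.66", "203.0.113.99"}

# Flow endpoints as (src_ip, dst_ip) pairs from the collector.
flows = [
    ("192.0.2.10", "198.51.100.66"),   # internal host talking to a listed IP
    ("192.0.2.11", "192.0.2.20"),      # normal internal traffic
    ("203.0.113.99", "192.0.2.12"),    # listed IP probing inward
]

flagged = [f for f in flows if f[0] in BLACKLIST or f[1] in BLACKLIST]
# Two of the three flows touch a blacklisted address.
```

Because the blacklist is a hash set, each lookup is O(1), so the same approach scales to the hundreds of thousands of entries mentioned above, provided the flow technology can keep the threat data current.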

Using a tool like CySight's advanced End-Point Threat Detection allows NetFlow data to be correlated in real time against hundreds of thousands of IP addresses of questionable reputation, including ToR exits and relays, with comprehensive historical forensics, and can be deployed in a massively parallel architecture.

As a trusted source of deep network insights built on big data analysis capabilities, Netflow provides NOCs with an end-to-end security and performance monitoring and management solution. For more information on Netflow as a performance and security solution for large-scale environments, download our free Guide to Understanding Netflow.

Cutting-edge and innovative technologies like CySight deliver the deep end-to-end network visibility and security context required to assist in speedily impeding harmful attacks.


NetFlow for Advanced Threat Detection

Networks are vital assets to the business and require absolute protection against unauthorized access, malicious programs, and degradation of performance. It is no longer enough to use only anti-virus applications.

By the time malware is detected and its signatures added to the antiviral definitions, access has been obtained and havoc wreaked, or the malware has buried itself inside the network and is harvesting data and passwords for later exploitation.

An article by Drew Robb in eSecurity Planet on September 3, 2015 (https://www.esecurityplanet.com/network-security/advanced-threat-detection-buying-guide-1.html) cited the Verizon 2015 Data Breach Investigations Report where 70 respondents reported over 80,000 security incidents which led to more than 2000 serious breaches in one year.

The report noted that phishing is commonly used to gain access; the malware then accumulates passwords and account numbers and learns the security defenses before launching an attack. A telling remark was made: "It is abundantly clear that traditional security solutions are increasingly ineffectual and that vendor assurances are often empty promises," said Charles King, an analyst at Pund-IT. "Passive security practices like setting and maintaining defensive security perimeters simply don't work against highly aggressive and adaptable threat sources, including criminal organizations and rogue states."

So what can businesses do to protect themselves? How can they be proactive in addition to the passive perimeter defenses?

The very first line of defense is better education of users. In one test, an e-mail message was sent to the users, purportedly from the IT department, asking for their passwords in order to “upgrade security.” While 52 people asked the IT department if this was a real request, 110 mailed their passwords right back. In their attempts to be productive, over half of the recipients of phishing e-mails responded within an hour!

Another method of advanced threat protection is NetFlow Monitoring.

IT departments and managed service providers (MSPs) can use monitoring capabilities to detect, prevent, and report adverse effects on the network.

Traffic monitoring, for example, watches the flow of information and data traversing critical nodes and network links. Without using intrusive probes, this information helps decipher how applications are using the network and which ones are becoming bandwidth hogs. These are then investigated further to determine what is causing the problem and how best to manage the issue. Just adding more bandwidth is not the answer!

IT departments review this data to investigate which personnel are the power users of which applications, when the peak traffic times are and why, and similar information in addition to flagging and diving in-depth to review anomalies that indicate a potential problem.

If there are critical applications or services that clients rely on for key account revenue streams, IT can provide real-time monitoring and display of the health of the networks supporting those applications and services. It is this ability to observe, analyze, and report on network health and patterns of usage that provides the ability to make better decisions at the speed of business that CIOs crave.

CySight excels at network Predictive AI Baselining analytics solutions. It scales to collect, analyze, and report on NetFlow data streams of over one million flows per second. Their team of specialists has prepped, installed, and deployed over 1000 CySight performance monitoring solutions, including at over 50 Fortune 1000 companies and some of the largest ISPs/telcos in the world. A global leader, recognized with awards for Security and Business Intelligence at the World Congress on IT, CySight is also welcomed by Cisco as a Technology Development Partner.


Balancing Granularity Against Network Security Forensics

With the pace at which the social, mobile, analytics and cloud (SMAC) stack is evolving, IT departments must quickly adapt their security monitoring and prevention strategies to match the ever-changing networking landscape. By the same token, network monitoring solution (NMS) developers must walk a tightrope of their own: providing the detail and visibility their users need, without a cost to network performance. But much of security forensics depends on the ability to drill down into both live and historic data to identify how intrusions and attacks occur. This leads to the question: what is the right balance between collecting enough data to gain the front foot in network security management, and ensuring performance isn't compromised in the process?

Effectively identifying trends will largely depend on the data you collect

Trend and pattern data tell Security Operations Center (SOC) staff much about their environments, by allowing them to connect the dots in terms of how systems may have become compromised. However, collecting large portions of historic data requires the capacity to house it – something that can quickly become problematic for IT departments. NetFlow data analysis acts as a powerful counterweight to the problem of processing and storing chunks of data, since it collects compressed header information that is far less resource-intensive than capturing entire packets or investigating entire device log files. Also, log files are often hackers' first victims, deleted or corrupted as a means of disguising attacks or intrusions. With CySight's ability to collect vast quantities of uncompromised transaction data without exhausting device resources, SOCs are able to perform detailed analyses on flow information that could reveal security issues such as data leaks that occur over time. Taking into account that NetFlow security monitoring can easily be configured on most devices, pervasive security monitoring becomes relatively easy to achieve in large environments.

Netflow security monitoring can give SOCs real-time security metrics

Netflow, when retained at high granularity, can facilitate seamless detection of traffic anomalies as they occur and when coupled with smart network behavior anomaly detection (NBAD), can alert engineers when data traverses the wire in an abnormal way – allowing for both quick detection and containment of compromised devices or entire segments. Network intrusions are typically detected when data traverses the environment in an unusual way and compromised devices experience spikes in multiple network telemetry metrics. As malicious software attempts to siphon information from systems, the resultant increase in out-of-the-norm activity will trigger warnings that can bring SOC teams in the loop of what is happening. CySight employs machine learning that continuously compares multi-metric baselines against current network activity and quickly picks up on anomalies overlooked by other flow solutions, even before they constitute a system-wide threat. This type of behavioral analysis of network traffic places security teams on the front foot in the ongoing battle against malicious attacks on their systems.
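One simple way to picture multi-metric baselining (a toy z-score check, not CySight's actual algorithm) is to compare each current metric against its own history: a surge in distinct destinations, typical of scanning or exfiltration, is flagged even while flow and byte volumes look perfectly normal. All metric names and values below are invented.

```python
import statistics

# Historical per-minute baselines for several telemetry metrics.
baseline = {
    "flows_per_min": [980, 1010, 1005, 990, 1015, 995],
    "bytes_per_min": [5.1e6, 4.9e6, 5.0e6, 5.2e6, 4.8e6, 5.0e6],
    "dest_ip_count": [120, 118, 125, 119, 122, 121],
}
# Current interval: normal volumes, but a burst of distinct destinations.
current = {"flows_per_min": 1008, "bytes_per_min": 5.05e6, "dest_ip_count": 640}

def anomalies(baseline, current, z_threshold=3.0):
    """Flag any metric deviating > z_threshold std devs from its baseline."""
    flagged = []
    for metric, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev and abs(current[metric] - mean) / stdev > z_threshold:
            flagged.append(metric)
    return flagged

# Only the destination-count metric trips the alarm.
```

Single-metric threshold alerting would miss this interval entirely, which is the intuition behind comparing several baselines at once.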

Network metrics are being generated on a big data scale

Few things undermine a network’s performance – and raise its risk – more than a monitoring solution that strains to provide the visibility expected of it. Yet considering the increasing complexity of distributed connected assets and the ways and speed in which people and IoT devices are being plugged into networks today, pervasive and detailed monitoring is absolutely crucial. Take the bring-your-own-device (BYOD) phenomenon and the shift to the cloud, for example. Networking and security teams need visibility into where, when, and how mobile phones, tablets, smart watches, and IoT devices go on and offline, and how to better manage the flow of data to and from user devices. Mobile devices increasingly run their own versions of business applications, and with BYOD cultures somewhat undermining IT’s ability to dictate the type of software allowed to run on personal devices, the need to monitor traffic flow from such devices – from both a security and a performance perspective – becomes clear.

General NetFlow performance-analytics tools can inform NOC teams about how large IP traffic flows move between devices, with basic usage statistics at a device or segment level. However, when network metrics are generated at big-data scale, traffic anomalies that require SOC investigation get lost in the leaky-bucket sorting algorithms of basic tools. Detecting the real underlying reasons for traffic degradation, identifying risky communications such as ransomware, DDoS, slow-DoS, peer-to-peer (p2p) and dark-web (ToR) traffic, and having complete historical visibility to trace back undesirable applications are absolutely critical – and far less difficult with CySight’s ability to easily provide information on all of the traffic that traverses the environment.
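The “leaky bucket” problem is easy to demonstrate with a toy sketch: a collector that retains only the top N talkers per interval never sees a host trickling out a small, steady volume, while full retention makes the cumulative total obvious:

```python
from collections import Counter

# Simulated per-interval byte counts: 10 noisy heavy talkers plus one host
# trickling out a small, steady volume each interval (a slow leak).
intervals = []
for t in range(288):  # one day of 5-minute intervals
    sample = Counter({f"10.0.0.{i}": 10_000_000 + (i * t) % 500_000 for i in range(1, 11)})
    sample["10.0.0.99"] = 200_000  # leak: never in the top N of any interval
    intervals.append(sample)

TOP_N = 5
topn_total, full_total = Counter(), Counter()
for sample in intervals:
    for host, _ in sample.most_common(TOP_N):  # a top-N collector keeps only these
        topn_total[host] += sample[host]
    full_total.update(sample)                  # full retention keeps everything

print("10.0.0.99" in topn_total)  # False: invisible to the top-N view
print(full_total["10.0.0.99"])    # 57600000 bytes leaked over the day
```

The leaking host never ranks in any single interval, yet moves over 57 MB in a day – which is precisely why full-context retention matters for SOC investigation.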

NetFlow security monitoring evolves alongside technology organically

Thanks to NetFlow and the unique multi-metric design that CySight has implemented, systems evolving at an increasing rate doesn’t mean you need to re-invent your security apparatus every six months. CySight’s ubiquity, reliability, and flexibility give NOC and SOC teams deep visibility without the administrative overhead of getting it up and running, while still collecting and benefiting from big flow data’s deep insights. You can even fine-tune your monitoring to the granularity you need to keep your systems safe, secure, and predictable. The result is fewer network blind spots – the Achilles’ heel of modern security and network experts.

At the other end of the scale, NetFlow analyzers – with their varying feature sets – give NOCs some basic ability to collect, analyze, and detect from within the top bandwidth metrics, which some engineers may still believe is all their needs require. Once you’ve decided on the data you need today, while keeping an eye on what you’ll need tomorrow, it’s time to choose the collector that does the job best.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using that packet as the standard for the rest of the flow. It has two variants which are designed to allow for more flexibility when it comes to implementing NetFlow on a network.

NetFlow was originally developed by Cisco in 1996 as a packet-switching technology for Cisco routers and implemented in IOS 11.x.

The concept was that instead of having to inspect each packet in a “flow”, the device need only inspect the first packet and create a “NetFlow switching record”, alternatively named a “route cache record”.

After that record was created, further packets in the same flow did not need to be inspected; they could simply be forwarded based on the determination made for the first packet. While this idea was forward-thinking, it had drawbacks that made it unsuitable for larger internet backbone routers.
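The route-cache idea can be sketched in a few lines: the first packet of a 5-tuple creates the record, and subsequent packets in the same flow only bump counters, with no further inspection needed:

```python
flow_cache = {}

def process_packet(src, dst, sport, dport, proto, size):
    """First packet of a flow creates a cache record; later packets only update counters."""
    key = (src, dst, sport, dport, proto)
    rec = flow_cache.get(key)
    if rec is None:
        # "Inspect" the first packet: the forwarding decision and record are made here.
        rec = flow_cache[key] = {"packets": 0, "bytes": 0}
    rec["packets"] += 1
    rec["bytes"] += size
    return rec

# Four packets of one TCP session: one record, updated three more times.
for size in (60, 1500, 1500, 40):
    process_packet("192.0.2.1", "198.51.100.7", 51514, 443, "tcp", size)

key = ("192.0.2.1", "198.51.100.7", 51514, 443, "tcp")
print(flow_cache[key])  # {'packets': 4, 'bytes': 3100}
```

These per-flow counters are exactly the metadata that later became valuable as an export, even after Cisco moved the forwarding path itself to Cisco Express Forwarding.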

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about what IP addresses or application ports were “inside” the traffic was to deploy packet-sniffing systems which would sit inline (or connected to SPAN/mirror ports) and “sniff” the traffic. This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
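For the fixed-layout NetFlow v5, the collector side is simple enough to sketch: each UDP export datagram is a 24-byte header followed by 48-byte records, unpacked directly with `struct`. This is a minimal sketch only – newer v9/IPFIX exports are template-based, and a real collector would also track sequence numbers and sampling rates:

```python
import socket
import struct

V5_HEADER = struct.Struct("!HHIIIIBBH")                # 24 bytes
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # 48 bytes

def parse_v5(datagram):
    """Parse one NetFlow v5 export datagram into a list of flow dicts."""
    version, count, *_ = V5_HEADER.unpack_from(datagram, 0)
    assert version == 5, "not a v5 export"
    flows = []
    for i in range(count):
        f = V5_RECORD.unpack_from(datagram, V5_HEADER.size + i * V5_RECORD.size)
        flows.append({
            "src":   socket.inet_ntoa(f[0]),
            "dst":   socket.inet_ntoa(f[1]),
            "pkts":  f[5],
            "bytes": f[6],
            "sport": f[9],
            "dport": f[10],
            "proto": f[13],
        })
    return flows

# Build a one-record export the way a router would, then parse it back.
hdr = V5_HEADER.pack(5, 1, 0, 0, 0, 0, 0, 0, 0)
rec = V5_RECORD.pack(
    socket.inet_aton("10.0.0.1"), socket.inet_aton("8.8.8.8"), b"\x00" * 4,
    1, 2, 10, 8400, 0, 0, 12345, 53, 0, 0, 17, 0, 0, 0, 0, 0, 0)
print(parse_v5(hdr + rec)[0])
```

In production this loop would sit behind a UDP socket bound to the export port (commonly 2055), with each received datagram fed to `parse_v5`.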

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.


Why NetFlow is Perfect for Forensics and Compliance

NetFlow forensic investigations can produce report evidence usable in court, as flow data describes the movement of traffic even without necessarily describing its contents.

It’s therefore crucial that the NetFlow solution deployed can scale its archive to retain the full context of all flow data – not just the top of the data, or only the data relating to one tool’s idea of a security event.

The challenge with forensics and flow data is that achieving full compliance requires retaining a data warehouse that can grow to a huge number of flow records.

These records, retained in the data warehouse, may not seem important at the time of collection but become critical to uncover behavior that may have been occurring over a long period and to ascertain the damage done. I am talking broadly here, as there are so many different instances where the data suddenly becomes critically important that it’s hard to do it justice with one or two case studies. Remember: you don’t know what you don’t know, but when you discover what you didn’t know, you need the ability to quantify the loss – or the risk of loss.

How much flow data is enough to retain to satisfy compliance?

From our experience it is usually between 3 and 24 months, depending on the size of the environment and the legal compliance requirements relating to data protection or data retention. For most corporates we recommend 12 months as a best practice. Data retention rules for ISPs in some countries require the ability to analyze traffic for up to 2 years. Fortunately, disk today is cheap and flow is cost-effective to deploy across the organization. There is more information about this in our Performance and Security eBook.
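A back-of-envelope sizing calculation helps make the retention decision concrete. The stored bytes-per-record figure below is an assumption (a raw v5 record is 48 bytes; indexing adds overhead), so treat the result as an order-of-magnitude estimate:

```python
def retention_gb(flows_per_sec, months, bytes_per_record=60):
    """Rough flow-archive sizing; bytes_per_record is an assumed stored size
    (raw v5 is 48 bytes, plus indexing overhead), before compression."""
    seconds = months * 30 * 24 * 3600
    return flows_per_sec * seconds * bytes_per_record / 1e9

# A mid-size corporate edge averaging 2,000 flows/sec, kept for 12 months:
print(round(retention_gb(2000, 12)))  # 3732 (GB)
```

At roughly 3.7 TB per year for 2,000 flows/sec, even multi-year ISP-style retention stays within commodity disk budgets – supporting the point that flow retention is cheap relative to its forensic value.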

Once a security issue has been identified, the flow database can be used to quantify exactly which IPs accessed a system, the times it was accessed, and the impact on dependent systems that the host conversed with, directly or indirectly, on the network before and after the issue.

Trawling through a huge collection of flow data can be a lengthy task, so it’s necessary to be able to run automated, parallel Predictive AI Baselining analytics to gauge the damage from a long-term insider threat that could have been dribbling out your intellectual property slowly over a few months.
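One hedged sketch of such an automated pass: aggregate per-day outbound volume for each (internal, external) pair and flag pairs that move a modest amount on many distinct days – exactly the pattern a top-talkers report never surfaces. The thresholds are illustrative starting points, not recommendations:

```python
from collections import defaultdict

def slow_leak_candidates(flows, min_days=30, max_daily=5_000_000):
    """Flag (internal, external) pairs that move a modest volume on many distinct
    days: too small to top any daily report, too regular to be innocent."""
    days_seen = defaultdict(set)
    total = defaultdict(int)
    for day, src, dst, nbytes in flows:
        if nbytes <= max_daily:
            days_seen[(src, dst)].add(day)
            total[(src, dst)] += nbytes
    return {pair: (len(days_seen[pair]), total[pair])
            for pair in days_seen if len(days_seen[pair]) >= min_days}

# 90 days of a 2 MB/day trickle from one host, plus one-off large transfers elsewhere
flows = [(d, "10.1.1.5", "203.0.113.9", 2_000_000) for d in range(90)]
flows += [(d, "10.1.1.8", "198.51.100.2", 900_000_000) for d in (3, 40)]
print(slow_leak_candidates(flows))
```

The trickling pair surfaces with 90 active days and 180 MB of cumulative transfer, while the legitimate bulk transfers are ignored – the kind of long-horizon query that only works against a full flow archive.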

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Identifying ToR threats without De-Anonymizing

Part 3 in our series on How to counter-punch botnets, viruses, ToR and more with Netflow focuses on ToR threats to the enterprise.

ToR (aka onion routing) and anonymized p2p relay services such as Freenet are where we can expect to see many more attacks, as well as malevolent actors out to deny your service or steal your valuable data. It’s useful to recognize that flow Predictive AI Baselining analytics provides the best and cheapest means of de-anonymizing or profiling this traffic.

“The biggest threat to the Tor network, which exists by design, is its vulnerability to traffic confirmation or correlation attacks. This means that if an attacker gains control over many entry and exit relays, they can perform statistical traffic analysis to determine which users visited which websites.” (source)

In a paper entitled “On the Effectiveness of Traffic Analysis Against Anonymity Networks Using Flow Records”, Sambuddho Chakravarty, Marco V. Barbera, Georgios Portokalidis, Michalis Polychronakis, and Angelos D. Keromytis point out that, in the lab, “81 Percent of Tor Users Can Be Hacked with Traffic Analysis Attack”.

It continues to be a cat-and-mouse game that requires new, innovative approaches to find ToR weaknesses, coupled with correlation attacks to identify routing paths. Doing this in real life is becoming much simpler, but the real challenge is that it requires the cooperation and coordination of business, ISPs, and governments. Cheap, easy-to-deploy micro-taps that act as both a ToR relay and a flow exporter, combined with a NetFlow toolset that can scale hierarchically and run path analysis in parallel across a multitude of ToR relays, can make this task easy and cost-effective.

So what can we do about ToR today?

Even without de-anonymizing ToR traffic, there is a lot of intelligence to be gained simply by analyzing ToR exit and relay behavior. Using a flow tool that can change perspectives between flows, packets, bytes, counts, or TCP-flag counts allows you to qualify whether a ToR node is being used to download masses of data or to trickle data out.

Patterns of data can be very telling as to the nature of a transfer and, in conjunction with other information, become a useful indicator of risk. As for supposedly secured networks: I can’t think of any instance where ToR/onion routing – or, for that matter, any external VPN or proxy service – needs to be used from within what is supposed to be a locked-down environment. Once ToR traffic has been identified communicating in a sensitive environment, it is essential to immediately investigate and stop the IP addresses engaging in this suspicious behavior.

Using a tool like CySight’s advanced End-Point Threat Detection allows NetFlow data to be correlated against hundreds of thousands of IP addresses of questionable reputation including ToR exits and relays in real-time with comprehensive historical forensics that can be deployed in a massively parallel architecture.
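Mechanically, this kind of correlation is a set-membership test between flow endpoints and a reputation list. The exit IPs below are placeholders for illustration; a production list is refreshed from live feeds and holds many thousands of entries:

```python
tor_exits = {"185.220.101.4", "199.87.154.255"}  # placeholder entries for illustration

def flag_tor_conversations(flows, reputation_set):
    """Return flows where either endpoint appears on the reputation list."""
    return [f for f in flows
            if f["src"] in reputation_set or f["dst"] in reputation_set]

flows = [
    {"src": "10.2.0.7", "dst": "185.220.101.4", "bytes": 48_000_000},  # bulk via ToR exit
    {"src": "10.2.0.9", "dst": "93.184.216.34", "bytes": 3_000},
]
hits = flag_tor_conversations(flows, tor_exits)
print(hits[0]["src"])  # the internal host to investigate
```

Comparing byte and flow counts on the matched conversations then distinguishes a mass download through an exit from a slow trickle of data out – the behavioral profiling described above.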


How Traffic Accounting Keeps You One Step Ahead Of The Competition

IT has steadily evolved from a service and operational delivery mechanism to a strategic business investment. Suffice it to say that the business world and technology have become so intertwined that it’s unsurprising many leading companies within their respective industries attribute their success largely to their adoptive stance toward innovation.

Network Managers know that much of their company’s ability to outmaneuver the competition depends to a large extent on IT Ops’ ability to deliver world-class services. This brings traffic accounting into the conversation, since a realistic and measured view of your current and future traffic flows is central to building an environment in which all the facets involved in its growth, stability and performance are continually addressed.

In this blog, we’ll take a look at how traffic accounting places your network operations center (NOC) team on the front-foot in their objective to optimize the flow of your business’ most precious cargo – its data.

All roads lead to performance baselining 

Performance baselines lay the foundation for network-wide traffic accounting against predetermined environment thresholds. They also aid IT Ops teams in planning for network growth and expansion undertakings. Baseline information typically contains statistics on network utilization, traffic components, conversation and address statistics, packet information and key device metrics.

It serves as your network’s barometer, informing you when anomalies such as excessive bandwidth consumption and other causes of bottlenecks occur. Root causes of performance issues can easily creep into an environment unnoticed – for example, a recent update to a business-critical application that causes significant spikes in network utilization. Armed with a comprehensive set of baseline statistics that allow network performance and security specialists to measure, compare, and analyze network metrics, root causes such as these can be identified with far greater efficiency.
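A minimal baseline of this kind can be as simple as a median utilization per hour-of-week, with an alert when a sample exceeds its slot’s baseline by a tolerance factor. The 1.5× tolerance here is an assumed starting point, not a recommendation:

```python
from statistics import median

def hour_of_week_baseline(samples):
    """samples: list of (hour_of_week 0-167, utilization %) -> median per slot."""
    slots = {}
    for how, util in samples:
        slots.setdefault(how, []).append(util)
    return {how: median(vals) for how, vals in slots.items()}

def exceeds_baseline(baseline, how, util, tolerance=1.5):
    """True when a sample tops its hour-of-week baseline by the tolerance factor."""
    return util > baseline.get(how, 0) * tolerance

# Four weeks of history: Monday 09:00 (hour-of-week 9) normally runs ~40% utilization
history = [(9, u) for u in (38, 41, 40, 42)] + [(33, u) for u in (12, 14, 13, 11)]
base = hour_of_week_baseline(history)
print(exceeds_baseline(base, 9, 72))  # True: a spike worth correlating with recent changes
print(exceeds_baseline(base, 9, 45))  # False: within the normal Monday-morning band
```

Keying the baseline on hour-of-week rather than a flat average means a busy Monday morning isn’t compared against a quiet Sunday night.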

In broader applications, baselining gives network engineers a high-level view of their environments, allowing them to configure Quality of Service (QoS) parameters, plan for upgrades and expansions, detect and monitor trends, perform peering analysis, and a bevy of other functions.

Traffic accounting brings your future network into focus

With new-generation technologies such as the cloud, resource virtualization, as-a-service platforms, and mobility revolutionizing the networks of yesteryear, capacity planning has taken on a new level of significance. Network monitoring systems (NMS) need to meet the demands of the complex, hybrid systems that are now the order of the day. Thankfully, technologies such as NetFlow have evolved steadily over the years to address the monitoring demands of modern networks. NetFlow accounting is a reliable way to peer through the wire and get deeper insight into the traffic that traverses your environment. Many network engineers and security specialists will agree that their understanding of their environments hinges on the level of insight they glean from their monitoring solutions.

This makes NetFlow an ideal traffic-accounting medium, since it easily collects and exports data from virtually any connected device for analysis by a collector such as CySight. The technology’s standing in the industry has made it the “go-to” solution for curating detailed, insightful, and actionable metrics that move IT organizations from a reactive to a proactive stance on network optimization.

Traffic accounting’s influence on business productivity and performance

As organizations become increasingly technology-centric in their business strategies, their reliance on networks that consistently perform at peak will increase accordingly. This places new pressure on network performance and security teams to conduct iterative performance and capacity testing to contextualize their environment’s ability to perform when it matters most. NetFlow’s ability to provide contextual insights based on live and historic data means network operations centers (NOCs) are able to react to immediate performance hindrances and also predict, with a fair level of accuracy, what the challenges of tomorrow may hold. And this is worth gold in the context of the ever-changing and expanding networking landscape.


Integrated Cyber Network Intelligence: Your Network has been infiltrated. How do you know where and what else is impacted?

Why would you need Granular Network Intelligence?

“Advanced targeted attacks are set to render prevention-centric security strategies obsolete and that information must become the focal point for our information security strategies.” (Gartner)

In this webinar we take a look at the internal and external threat networks pervasive in today’s enterprise and explore why organizations need granular network intelligence.

Webinar Transcription:

I’m one of the senior engineers here with CySight. I’ll be taking you through the webinar today. It should take about 30 to 40 minutes, I would say and then we will get to some questions towards the end. So let’s get started.

So the first big question here is, “Why would you need something like this? Why would you need Granular Network Intelligence?” And the answer, if not obvious already, is that, really, in today’s connected world, every incident response includes a communications component. What we mean by that is in a managed environment, whether it’s traditional network management or security management, anytime that there’s an alert or some sort of incident that needs to be responded to, a part of that response is always going to be communications, who’s talking to who, what did they do, how much bandwidth did they use, who did they talk to?

And in a security particular environment, we need to be looking at things like whether external threats or internal threats, was there a data breach, can I look at the historical behavior or patterns, can I put this traffic into context as per the sort of baseline of that traffic? So that insight into how systems have communicated is critical.

Just some background industry kind of information. According to Gartner, targeted attacks are set to render prevention-centric security strategies obsolete by 2020. Basically, what that means is there’s going to be a shift. They believe there’s going to be a shift to information and end-user-centric security focused on an infrastructure’s end-points and away from sort of the blocking and tackling of firewalls. They believe there’ll be three big trends. The first is continuous compromise, meaning an increased level of advanced, targeted attacks. It’s not going to stop. You’re never going to feel safe that someone won’t be potentially trying to attack you.

And most of those attacks will become financially motivated attacks, attempts to steal information and attempts to gather credit card data, if you have that, intellectual property, ransomware-type attacks. So this is not necessarily, “Hey, I’m just going to try and bring down your website or something,” in a traditional world where maybe people are playing around a little bit. This is more organized attacks specifically designed to either elicit a ransom or a reward or just steal information that could be turned into money out in a black market and it’s going to be more and more difficult for IT to have control over those end-user’s devices.

Again, very few organizations just have people sitting at their desks with desktop computers anymore. Everybody’s got laptops. Everybody’s got a phone or other tablet that’s moving around. People work from home. They work from the road. They’re connecting in to network resources from anywhere in the world at any time, and it becomes more and more challenging for IT to sort of control those pathways of communications. So if you can’t control it, then you have to certainly be able to monitor it and react to it, and the reaction really comes in three major ways: determining the origin of the attack, the nature of the attack, and the damage incurred.

So we’re certainly assuming that there are going to be attacks, and we need to know where they’re coming from, what they’re trying to do, and whether they’ve been able to get there. You know, have we caught it in time, or has something already been infected, or has information been taken out of the network? And that really leads us into this little graphic that we have about not being in denial. Understanding that, unfortunately, many people, in terms of their real visibility into the network, are somewhere in the blind or limited-type area. They don’t know what they don’t know, they think they should know but they don’t know, and so on.

But where they really need to be is at, “There’s nothing they don’t know.” And they need tools to be able to move them from wherever they are into this upper left-hand quadrant and certainly, that’s what our product is designed to do. So just kind of looking at the entire landscape of information flow from outside and inside and really understanding that there are new kinds of attacks, crawlers, botnets, ransomware, ToR, DoS and DDoS attacks that have been around for a while.

Your network may be used to download or host illicit material, leak intellectual property, be part of an attack, you know, something that’s command and controlled from somewhere else and your internal assets have become zombies and are being controlled by outside. There are lots of different threats. They’re all coming at you from all over the place. They’re all trying to get inside your network to do bad things and those attacks or that communication needs to be tracked.

Gartner also believes that 60% of enterprise security budgets will be allocated for rapid detection and response by 2020, up from less than 10% just a few years ago. What they believe is that too much of the spending has gone into prevention and not enough has gone into monitoring and response. So the prevention is that traditional firewalling, intrusion detection or intrusion prevention, things like that, which certainly is important. I’m not saying that those things aren’t useful or needed. But what we believe and what other industry analysts certainly believe is that that’s not enough, basically. There needs to be more than the simple sort of “Put up a wall around it and no one will be able to get in” kind of situation. If that were the case, then there would be no incidents anywhere because everybody’s got a firewall; large companies, small companies. Everybody’s got that today, and yet, you certainly don’t go more than a couple of days without hearing about new hacks, new incidents.

Here in the United States, we just came through an election where they’re still talking about people from other countries hacking into one party or another’s servers to try and change the election results. You know, on the enterprise side, there are lots and lots of businesses. Yahoo recently, in the last couple of months, certainly had a major attack that they had to come clean about, and of course organizations like Yahoo have those standard intrusion prevention and firewall-type systems, but obviously, they aren’t enough.

So when you are breached, you need to be able to look and see what happened, “What can I still identify, what can I still control, and how do I get visibility as to what happened.” So for us, we believe that the information about the communication is the most important focal point for a security strategy and we can look at a few different ways to do that without a signature-based mechanism. So there’s ways to look at normal traffic and be able to very rapidly identify deviation from normal traffic. There’s ways to find outliers and repeat offenders. There’s ways to find nefarious traffic by correlating real-time threat feeds with current flows and we’re going to be talking about all of these today so that a security team can identify what was targeted, what was potentially compromised, what information may have left the building, so to speak.

There’s a lot of challenges faced by existing firewalls, SIEM, and loosely-coupled toolsets. The level of sophistication, it’s going up and up again. It’s becoming more organized. It’s an international crime syndicate with very, very intelligent people using these tactics to try and gain money. As we’ve talked about, blocking attacks and layering end-point solutions are just not enough anymore and of course, there’s a huge cost in trying to deploy, trying to maintain multiple solutions.

So being able to try and have some tools that aren’t incredibly expensive, that do give you valuable information, really can become the best way to go. If you look at, say, what we’re calling sensors: packet captures, DPI-type systems. They, certainly, can do quite a lot, but they’re incredibly expensive to deploy across a large organization. If you’re trying to do packet capture, it’s very, very prohibitive. You can get a lot of detail, but trying to put those sensors everywhere, unless you’ve got an unlimited budget, and very few people do, becomes a really difficult proposition to swallow.

But that doesn’t mean NetFlow can’t still use that kind of information. What we have found and what’s really been a major trend over the last couple of years is that existing vendors, on their devices, Check Point, Cisco, Palo Alto, packet brokers like Ixia, or all of the different people that you see up here, and more and more all the time, are actually adding that DPI information into their flow data. So it’s not separate from flow data. It’s these devices that have the packets going through them that can look at them all the way to layer seven and then include that information in the NetFlow export out to a product like ours that can collect it and display that.

So you can look into payload and classify according to payload content, identifying traffic on port 80 or what have you. You can connect the dots between inside and outside when there’s NAT, and read the URLs to quickly analyze where they’re going and what they’re being used for. You can get specialized information like MAC addresses, denial or AAA information if it’s a firewall, SSID information if it’s a wireless LAN controller, and other kinds of things that can be very useful in tracking down where people were talking.

So different types of systems are adding different kinds of information to the exports, but all of them, together, really effectively give you that same capability as if you had those sniffing products all over the place or packet capture products all over the place. But you can do it right in the devices, right from the manufacturer, send it through NetFlow, to us, and still get that quality information without having to spend so much money to do it.

The SANS organization, if you’re not familiar with them, great organization, provide a lot of good information and whitepapers and things like that. They have, very often, said that NetFlow might be the single most valuable source of evidence in network investigations of all sorts, security investigations, performance investigations, whatever it may be.

The NetFlow data can give you very high-value intelligence about the communications, but the key is in understanding how to get it and how to use it. One other benefit of NetFlow over packet capture is the lack of huge storage requirements. Compared to traditional packet capture, NetFlow is much skinnier, and you can store much longer-term information than you could if you had to keep all of the packets. The cost, we’ve talked about.

And there are some interesting things like legal issues that are mitigated. If you are actually capturing all packets, then you may run into compliance issues for things like PCI or HIPAA, and certain countries and jurisdictions around the world have very strict regulations about retaining and keeping the end-data. With NetFlow, you don’t have that. It’s metadata – data about the data – even with the new fields that we talked about a couple of slides ago. It’s not the actual end information. So even without that content, NetFlow still provides an excellent means of guiding investigations, especially in an attack scenario.

So here, if you bundle everything that we’ve talked about so far into one kind of view and relate it to what we do here at CySight, you would see it on this screen. There are the end-users of people/content and things today, the Internet of Things. So you’ve got data coming from security cameras and Internet-connected vehicles and refrigerators. It could be just about anything, environmental-type information. It’s all producing data. That data is traversing the network through multiple different types of platforms – routers, switches, servers, wireless LAN controllers, cloud-based systems and so forth – all of which can provide correlation of the information and data. We call that the correlation API.

We then take that data into CySight. We combine it with outside big data, we’re going to talk about that in a minute, so not only the data of the connections but actual third-party information that we have related to known bad actors in the world and then we can use that information to provide you, the user, multiple benefits, whether it’s anomaly detection, threat intelligence, security performance, network accounting, all of the sort of standard things that you would do with NetFlow data.

And then lastly, integrate that data out to other third-party systems, whether it’s your managed service provider or security service provider. It could be upstream event collectors, trappers, log systems, SOAPA ecosystems, whether that’s on-premise or in the cloud or hybrid cloud. All of that is available via our product. So it starts at the traffic level. It goes through everything. It provides the data inside our product and as well as integrates out to third-party systems.

So let’s actually look into this a little more deeply. So the threat intelligence information is one of the two major components of our cyber security areas. One, the way this works is that threat data is derived from a large number of sources. So we maintain a list, effectively, a database of known bad IP addresses, known bad actors in the world. We collect that data through honeypots, and threat feeds, and crowd sources, and active crawlers, and our own internal user cyber feedback from our customers and all of that information combined allows us to maintain a very robust list of known bads, basically. Then we can combine that cyber intelligence data with the connection data, the flow data, the session data, inside and outside of your network, you know, the communications that you’re having, and compare the two.

So we have the big data threats. We can process that data along with what’s happening locally in your network to provide extreme visibility, to find who’s talking to who, what conversations are your users having with bad actors, ransomware, botnets, ToR, hacking, malware, whatever it may be and we then provide, of course, that information to you directly in the product. So we’re constantly monitoring for that communication and then we can help you identify it and remediate it as soon as possible.
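The core matching step described here can be sketched in a few lines. This is a hypothetical illustration only: the field names, feed entries, and function are invented for the example and are not CySight’s actual schema or feed format.

```python
# Hypothetical sketch of matching flow records against a feed of
# known-bad IPs. Field names and feed entries are illustrative only.
THREAT_FEED = {
    "203.0.113.7": ("ransomware", "high"),
    "198.51.100.23": ("botnet", "medium"),
}

def match_threats(flows, feed):
    """Return flows whose destination IP appears in the threat feed,
    annotated with the feed's category and severity."""
    hits = []
    for flow in flows:
        threat = feed.get(flow["dst_ip"])
        if threat:
            category, severity = threat
            hits.append({**flow, "category": category, "severity": severity})
    return hits

flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "bytes": 307_000_000},
    {"src_ip": "10.0.0.8", "dst_ip": "93.184.216.34", "bytes": 1_200},
]
print(match_threats(flows, THREAT_FEED))
```

A production system would of course match on both directions of the conversation and keep the feed continuously updated, but the principle is the same: a join between connection data and a reputation list.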

As we zoom in here a little bit, you can see that that threat information can be seen in summary or in detail. We have it categorized by different threat levels, types, severities, countries of origin, affected IPs, and threat IPs. As anyone who’s used our product in the past knows, we always provide an extreme amount of flexibility to really slice and dice the data and give you a view into it in any way that is best consumed by you. So you can look at things by type, or by affected IP, or by threat IP, or by threat level, or whatever it may be, and of course, no matter where you start, you can always drill in, you can filter, you can re-display things to show a different view.

Here’s an example of identifying some threat. These are ransomware threats, known ransomware IPs out there. I can very easily just right-click on that and say, “Show me the affected IP.” So I see that there’s ransomware. Who’s affected by that? Who is actually talking to that? And it’s going to drill right down into that affected IP or maybe multiple affected IPs that are known to be talking to those ransomware systems outside. You could see when it happened. You can see how much traffic.

Certainly, in this example our top affected IP has a tremendous amount of data, 307 megs over that time period, much more than the next ones below it, so that’s clearly one that needs to be identified and responded to very quickly. It can be useful to look at it this way, to see, “Hey, is this one system that’s been infiltrated, or is it now starting to spread? Are there multiple systems? Where is it starting? Where is it going, and how can I stem that tide?” It’s very easy to get that kind of information.

Here’s another example showing all ransomware attack traffic traversing a large ISP over a day. So whether you’re an end-user or a service provider, and we have many, many service provider customers that use this to monitor their customers’ traffic, this could be something you look at to say, “Across all of my ISP, where is that ransomware traffic going? Maybe it’s not affecting me, but it’s affecting one of my customers.” Then we’re able to drill into that, to alert and alarm on it, and potentially block it right away as extra help to my customers.

Ransomware is certainly one of the scariest major threats that’s out there now. It’s happening every day. There are reports of police stations having to pay ransom to get their data back, hospitals having to pay ransom to get their data back. It’s kind of interesting that, to our knowledge, there has never been a case where the ransomers, the bad guys out there, have failed to release the information back to their “customers” and supply the decryption key. Because they want the money, and they want people to know, “Hey, if you pay us, we will give you your data back,” which is really, really frightening, actually. It’s happening all the time and needs to be monitored very, very carefully. This is certainly one of the major threats that exist today.

But there are other threats as well; peer-to-peer traffic, ToR traffic, things like that. Here’s an example of looking at a single affected IP that is talking to multiple different threat IPs that are known to have been hosting illicit content over this time period. You could see that, clearly, it’s doing something. You know, if there is one host that is talking to one outside illicit threat IP, okay, maybe that’s a coincidence or maybe it’s not an indication of something crazy going on. But when you can see that, in this case, there’s one internal IP talking to 89 known bad threat IPs who have been known to host illicit traffic, okay, that’s not a coincidence anymore. We know that something’s happening here. We can see when it happened. We know that they’re doing something. Let’s go investigate that. So that’s just another way of kind of giving you that first step to identify what’s happening and when it’s happening.
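The “one host talking to 89 known bad IPs” signal described above is essentially a distinct count of threat IPs per internal host. A minimal sketch of that aggregation (the record layout and field names are hypothetical, invented for illustration):

```python
from collections import defaultdict

def threat_ip_counts(threat_hits):
    """Count distinct threat IPs contacted by each internal host.
    A high count is a far stronger signal than a single contact."""
    contacted = defaultdict(set)
    for hit in threat_hits:
        contacted[hit["src_ip"]].add(hit["dst_ip"])
    return {src: len(ips) for src, ips in contacted.items()}

hits = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7"},
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.23"},
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7"},   # repeat contact
    {"src_ip": "10.0.0.9", "dst_ip": "192.0.2.99"},
]
print(threat_ip_counts(hits))
```

Sorting hosts by this count is one simple way to separate coincidence (one contact) from a genuine incident (dozens of distinct bad peers).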

You know, sometimes illicit traffic may just look like some obscured peer-to-peer content, but our product, Auditor, allows you to see it for full forensic evidence. You can see what countries it’s talking to, what kind of traffic it is, what threat level it is. It really gives you the full, detailed data about what’s happening.

Here’s another example of a ToR threat. So people who are trying to use ToR to anonymize their data or get around any kind of traffic analysis-type system will use ToR to try and obfuscate that data. But we have, as part of our threat data, a list of ToR exits and relays and proxies, and we can look at that and tell you, again, who’s sending data into this sort of the ToR world out there, which may be an indication of ransomware and other malware because they often use ToR to try and anonymize that data. But it, also, could be somebody inside the organization that’s trying to do something they shouldn’t be doing, get data out which could be very nefarious. You never want to think the worst of people but it does happen. It happens every day out there. So again, that’s another way that we can give you some information about threats.

We also can help you visualize the threats. Sometimes it’s easier to understand by looking at a nice graphical depiction. So we can show you where the traffic is moving, the volume of traffic, and how it’s hopping around, in this case, through a ToR endpoint. ToR is tricky: the whole point of ToR is that it’s very difficult to trace an endpoint from another single endpoint. But being able to visualize it together actually allows you to get a handle on where that traffic may be going.

Really large service providers, and certainly anyone interested in tracking this stuff down, need a product that can scale. We have a very strong massive-scalability story. We can use a hierarchical system, we can add additional collectors, and we can do a lot of different things to handle a huge volume of traffic, even for Tier 1-type service providers, and still provide all of the data and detail that we’ve shown so far.

A couple other examples, we just have a number of them here, of different ways that you can look at the traffic and slice and dice it. Here’s an example of top conversations. So looking for that spike in traffic, we could see that there was this big spike here, suddenly. Almost 200 gig in one hour, that’s very unusual and can be identified very, very quickly and then you can try and say, “Okay, what were you doing during that time period? How could it possibly be that that much information was being sent out the door in such a short period of time?”

We also have port usage. So we can look at individual ports that are known threats over whatever time period you’re interested in. We could see this is port 80 traffic but it’s actually connecting to known ToR exits. So that is not just web surfing. You can visualize changes over time, you can see how things are increasing over time, and you can identify who is doing that to you.

Here’s another example of botnet forensics: understanding a conversation with a known botnet command-and-control server. Many times, those come through initially as a phishing email. They’ll just send millions of spam emails out there hoping for somebody to click on one. When someone does click, it downloads the command-and-control software and away it goes. So you can actually see the low-level continual spam happening, and then all of a sudden, when there’s a spike, the botnet command-and-control communication starts up, and from there all kinds of bad things can happen.

So identifying impacted systems that have more than one infection is a great way to really sort of prioritize who you should be looking at. We can give you that data. I could see this IP has got all kinds of different threats that it’s been communicating to and with. You know, that is certainly someone that you want to take a look at very quickly.

I talked about visualization, some. Here are a few more examples of visualizations in the product. Many of our customers use this. It’s kind of the first way that they look at the data and then drill into the actual number part of the data, sort of after the visualization. Because you could see, from a high-level, where things are going and then say, “Okay, let me check that out.”

Another thing that we do as part of our cyber bundle, if you will, is anomaly detection and what we call “Two-phased Anomaly Detection.” Most of what I’ve talked about so far has been related to threat detection, matching up those known bads to conversations or communications into and out of your network. But there are other ways to try and identify security problems as well. One of those is anomaly detection.

So anomaly detection is an ability of our product to baseline traffic in your network, lots of different metrics on the traffic. So it’s counts, and flows, and packets, and bytes, and bits per second, and so forth, TCP flags, all happening all the time. So we’re baselining all the time, hour over hour, day over day and week over week to understand what is normal and then use our sophisticated behavior-based anomaly detection, our machine learning ability to identify when things are outside the norm.

So phase one is we baseline so that we know what is normal, and alert or identify when something is outside the norm. Phase two is running a diagnostic process on those events: understanding what that event was, when it happened, what kind of traffic was involved, what IPs and ports were involved, what interfaces the traffic went through, what it might portend. Was it a DDoS-type attack? Was it a port sweeper or crawler-type attack? What was it? The result of that is our alert diagnostic screen, as you can see in the background.
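The two-phase idea, baseline first and then flag deviations, can be illustrated with a simple statistical sketch. CySight’s actual machine-learning behavior analysis is far more sophisticated; this just shows the shape of the approach, with invented traffic numbers:

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Phase one in miniature: derive a baseline from history, then
    flag any sample more than `threshold` standard deviations out."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Hourly byte counts for an interface (invented baseline data).
baseline = [980, 1010, 1005, 995, 1020, 990, 1000, 1015]
print(is_anomalous(baseline, 1008))  # a value within the usual range
print(is_anomalous(baseline, 5000))  # a value far outside the norm
```

A real system baselines many metrics at once (flows, packets, bytes, TCP flags) and across hourly, daily, and weekly cycles, then hands any flagged event to the phase-two diagnostic step.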

So it qualifies the cause and impact for each offending behavior. It gives you the KPI information. It generates a ticket. It allows you to integrate with other third-party SNMP trap receivers, so we can send our alerts and diagnostic information out as a trap to another system, and everything can be rolled up into a manager-of-managers-type system, if you wish. You can intelligently whitelist traffic that is not really offensive traffic that we may have identified as an anomaly. So of course, you want to reduce the amount of false positives out there, and we can help you do that.

So to kind of summarize…I think we’re just about at the end of the presentation now. To summarize, what can CySight do in our cyber intelligence? It really comes down to forensics, anomaly detection, and that threat intelligence. We can record and analyze, on a very granular level, network data even in extremely complex, large, and challenging environments. We can evaluate what is normal versus what is abnormal. We can continually monitor and benchmark your network and assets. We can intelligently baseline your network to detect activity that deviates from those baselines. We can continuously monitor for communication with IPs of poor reputation and remediate it ASAP to reduce the probability of infection and we can help you store and compile that flow information to use as evidence in the future.

You’re going to end up with, then, extreme visibility into what’s happening. You’re going to have three-phase detection. You have full alerting and reporting. So any time any of these things do happen, you can get an alert. That alert can be an email. It can be a trap out to another system as I mentioned earlier. Things can be scheduled. They’re running in the background 24/7 keeping our software’s eyes on your network all the time and then give you that forensics drill-down capability to quickly identify what’s happened, what’s been impacted, and how you can stop its spread.

The last thing we just want to say is that everything that we’ve shown today is the result of a large development effort over the last number of years. We’ve been in business for over 10 years, delivering NetFlow-based Predictive AI Baselining analytics. We’ve really taken a very heavy development exercise into security over the last few years and we are constantly innovating. We’re constantly improving. We’re constantly listening to what our customers want and need and building that into future releases of the product.

So if you are an existing customer listening to this, we’d love to hear your feedback on what we can do better. If you are potentially a new customer on this webinar, we’d love your ideas from what you’ve seen as to if that fits with what you need or if there’s other things that you would like to see in the product. We really do listen to our customers quite extensively and because of that, we have a great reputation with our customers.

We have a list of customers up here. We’ve got some great quotes from our customers. We really do play across an entire enterprise. We play across service providers and we love our customers and we think that they know that and that’s why they continue to stay with us year after year and continue to work with us to make the product even better.

So we want to thank everybody for joining the webinar today. We’ll end on this note: we believe our products offer the most cost-effective approach to detecting threats and quantifying network traffic ubiquitously across everything that you might need in the security and cyber network intelligence arena. If you have any interest in talking to us, seeing a live demo of the product, or getting a 30-day evaluation of the product, we’re very happy to talk to you. Just contact us.

If you’ve got a salesperson and you want to get threat intelligence, we’re happy to enable it on your existing platform. If you are new to us, please visit our website at cysight.ai and fill out the form for a trial; somebody will get back to you immediately, and we’ll get you up and running in the system very quickly to see if we can help you identify any of these security threats that you may have. So with that, we appreciate your time and look forward to seeing you at a future webinar. Bye.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

The Strategic Value of Advanced Netflow for Enterprise Network Security

With thousands of devices going online for the first time each minute, and the data influx continuing unabated, it’s fair to say that we’re in the throes of an always-on culture.

As the network becomes arguably the most valuable asset of the 21st century business, IT departments will be looked at to provide not just operational functions, but, more importantly, strategic value.

Today’s network infrastructures contain hundreds of key business devices across a complex array of data centers, virtualized environments and services. This means Performance and Security Specialists are demanding far more visibility from their monitoring systems than they did only a few years ago.

The growing complexity of modern IT infrastructure is the major challenge faced by existing network monitoring systems (NMS) and security tools.

Expanding networks, dynamic enterprise boundaries, network virtualization, new applications and processes, growing compliance and regulatory mandates along with rising levels of sophistication in cyber-crime, malware and data breaches, are some of the major factors necessitating more granular and robust monitoring solutions.

Insight-based and data-driven monitoring systems must provide the deep visibility and early warning detection needed by Network Operations Centre (NOC) teams and Security professionals to manage networks today and to keep the organization safe.

For over two decades now, NetFlow has been a trusted technology which provides the data needed to enable the performance management of medium to large environments.

Over the years, NetFlow analysis technology has evolved alongside the networks it helps optimize to provide information-rich analyses, detailed reporting and data-driven network management insights to IT departments.

From traffic accounting, to performance management and security forensics, NetFlow brings together both high-level and detailed insights by aggregating network data and exporting it to a flow collector for analysis. Using a push-model makes NetFlow less resource-intensive than other proprietary solutions as it places very little demand on network devices for the collection and analysis of data.
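The collector side of this push model boils down to aggregating the exported flow records. As a rough sketch (a hypothetical record layout, not an actual NetFlow packet parser), a basic traffic-accounting step might total bytes per conversation:

```python
from collections import Counter

def top_talkers(flow_records, n=3):
    """Aggregate bytes per (src, dst) conversation and return the
    heaviest n conversations: a basic traffic-accounting view."""
    totals = Counter()
    for record in flow_records:
        totals[(record["src_ip"], record["dst_ip"])] += record["bytes"]
    return totals.most_common(n)

records = [
    {"src_ip": "10.0.0.5", "dst_ip": "192.0.2.1", "bytes": 4_000},
    {"src_ip": "10.0.0.5", "dst_ip": "192.0.2.1", "bytes": 6_000},
    {"src_ip": "10.0.0.8", "dst_ip": "192.0.2.2", "bytes": 1_500},
]
print(top_talkers(records))
```

Because the exporting device only summarizes and pushes records, the heavy lifting of aggregation and analysis happens at the collector, which is why the model places so little demand on the network devices themselves.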

NetFlow gives NOCs the information they need for pervasive deep network visibility and flexible Predictive AI Baselining analytics, which substantially reduces management complexity. Performance and Security Specialists enjoy unmatched flexibility and scalability in their endeavors to keep systems safe, secure, reliable and performing at their peak.

Although the NetFlow protocol promises a great deal of detail that could be leveraged to the benefit of NOC and Security teams, many NetFlow solutions to date have failed to provide the contextual depth and flexibility required to keep up with the evolving network and related systems. Many flow solutions simply cannot scale to archive the amount of granular network traffic needed to gain the visibility required today. Due to the limited amount of usable data they can physically retain, these flow solutions are used only for basic performance traffic analysis or top-talker detection and cannot physically scale to report on the needed Predictive AI Baselining analytics, making them only marginally more useful than an SNMP/RMON solution.

The newest generation of NetFlow tools must combine the granular capability of a real-time forensics engine with long-term capacity planning and data mining abilities.

Modern NetFlow applications should also be able to process the ever-expanding vendor-specific flexible NetFlow templates, which can provide unique data points not found in any other technology.

Lastly, the system needs to offer machine-learning intelligent analysis which can detect and alert on security events happening in the network before the threat gets to the point that a human would notice what has happened.

When all of the above capabilities are available and put into production, a NetFlow system becomes an irreplaceable application in an IT department’s performance and security toolbox.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Benefits of a NetFlow Performance Deployment in Complex Environments

Since no two environments are identical and no network remains stagnant in Network Monitoring today, the only thing we can expect is the unexpected!

The network has become a living dynamic and complex environment that requires a flexible approach to monitor and analyze. Network and Security teams are under pressure to go beyond simple monitoring techniques to quickly identify the root causes of issues, de-risk hidden threats and to monitor network-connected things.

A solution’s flexibility refers to not only its interface but also the overall design.

From a user interface perspective, flexibility refers to the ability to perform analysis on any combination of data fields with multiple options to view, sort, cut and count the analysis.

From a deployment perspective, flexibility means options for deployment on Linux or Windows environments and the ability to digest all traffic or scale collection with tuning techniques that don’t fully obfuscate the data.

Acquiring flexible tools is a superb investment, as they enrich and facilitate local knowledge retention. They enable multiple network-centric teams to benefit from a shared toolset, and the business begins to leverage the power of big data Predictive AI Baselining analytics that, over time, grows and extends beyond the tool’s original requirements as new information becomes visible.

What makes a Network Management System (NMS) truly scalable is its ability to analyze all the far reaches of the enterprise using a single interface with all layers of complexity to the data abstracted.

NetFlow, sFlow, IPFIX and their variants are all about abstracting routers, switches, firewalls or taps from multiple vendors into a single searchable network intelligence.
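In practice, that abstraction means normalizing vendor- and protocol-specific field names onto one common schema before analysis. A toy sketch of the idea (the field maps here are illustrative, not complete or authoritative renderings of the NetFlow v5 or sFlow formats):

```python
# Illustrative field maps; real exporters carry many more fields.
FIELD_MAPS = {
    "netflow_v5": {"srcaddr": "src_ip", "dstaddr": "dst_ip", "dOctets": "bytes"},
    "sflow": {"src": "src_ip", "dst": "dst_ip", "octets": "bytes"},
}

def normalize(record, source):
    """Map a source-specific flow record onto a common searchable schema."""
    return {common: record[raw] for raw, common in FIELD_MAPS[source].items()}

print(normalize({"srcaddr": "10.0.0.5", "dstaddr": "192.0.2.1", "dOctets": 500},
                "netflow_v5"))
```

Once every exporter’s records land in the same schema, a single query interface can search across routers, switches, firewalls, and taps from multiple vendors.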

It is critical to ensure that abstraction layers are independently scalable to enable efficient collection and be sufficiently flexible to enable multiple deployment architectures to provide low-impact, cost-effective solutions that are simple to deploy and manage.

To simplify deployment and management, it has to work out of the box and be self-configuring and self-healing. Many flow monitoring systems require a lot of time to configure or maintain, making them expensive to deploy and hard to use.

A flow-based NMS needs to meet various alerting, Predictive AI Baselining analytics, and architectural deployment demands. It needs to adapt to rapid change, pressure on enterprise infrastructure and possess the agility needed to adapt at short notice.

Agility in provisioning services, rectifying issues, customizing and delivering alerts and reports and facilitating template creation, early threat detection and effective risk mitigation, all assist in propelling the business forward and are the hallmarks of a flexible network management methodology.

Here are some examples that require a flexible approach to network monitoring:

  • DDoS attack behavior changes randomly
  • Analyze Interface usage by Device by Datacenter by Region
  • A new unknown social networking application suddenly becomes popular
  • Compliance drives need to discover Insider threats and data leakages occurring under the radar
  • Companies grow and move offices and functions
  • Laws change requiring data retention suitable for legal compliance
  • New processes create new unplanned pressures
  • New applications cause unexpected data surges
  • A vetted application creates unanticipated denials of service
  • Systems and services become infected with new kinds of malicious agents
  • Virtualization demands abruptly increase
  • Services and resources require a bit tax or 95th percentile billing model
  • Analyzing flexible NetFlow fields supported by different device vendors such as IPv6, MPLS, MAC, BGP, VPN, NAT paths, DNS, URL, Latency etc.
  • Internet of Things (IoT) become part of the network ecosystem and require ongoing visibility to manage
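One of the bullets above mentions 95th percentile billing. The standard nearest-rank calculation, sort the interval samples, discard the top 5%, and bill at the highest remaining sample, is short enough to sketch:

```python
import math

def percentile_95(samples_mbps):
    """Nearest-rank 95th percentile over usage samples (e.g. 5-minute
    Mbps readings); the billable rate under this model."""
    ordered = sorted(samples_mbps)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# A billing month has ~8640 five-minute samples; 100 invented ones here.
samples = list(range(1, 101))
print(percentile_95(samples))
```

The appeal of the model is that brief bursts (the top 5% of intervals) don’t inflate the bill, while sustained usage does.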


How to Achieve Security and Data Retention Compliance Obligations with Predictive AI Cyber Flow Analytics

Information retention, protection and data compliance demands are an important concern for modern organizations. And with data being generated at staggering rates and new entry points to networks (mobile devices, wireless networks, etc.) adding their own levels of complexity, adherence to compliance obligations can prove challenging. In addition, when considering high-profile network hacks such as the Sony, Dropbox and Target intrusions, it quickly becomes clear that no organization is immune to the possibility of having their systems compromised. This backdrop demonstrates the importance of finding a suitable network monitoring solution that can walk the tightrope of meeting regulatory requirements without placing too much strain on hardware resources.

In this blog we’ll touch on two of these areas: the Health Insurance Portability and Accountability Act (HIPAA), a regulatory standard, and Supervisory Control and Data Acquisition (SCADA), a class of industrial control systems with its own security and compliance concerns, and look at how Network Specialists can leverage NetFlow’s ability to provide insightful metrics that aid in building a water-tight security apparatus.

NetFlow and HIPAA

Few have greater concerns around information privacy than the health care industry. If compromised, medical records containing patients’ sensitive information can lead to disaster for both health care organizations and individuals. The Privacy Rule, as stipulated by HIPAA, addresses the data retention compliance and protection measures expected of health care organizations to ensure critical patient records remain safe, uncompromised and reliable.

One of these protection measures is the continuous monitoring of information systems to prevent security breaches or unintended exposure of information to the wrong people. NetFlow is ideal for monitoring and enforcing security by giving detailed insight into local, inbound, and outbound traffic. It also allows you to easily identify the nature of the traffic and see how traffic flows between devices as it traverses your environment.

NetFlow’s ability to detect and report on anomalies through analysis by a NetFlow analyzer can give health care organizations unmatched network visibility and data granularity. Its availability on most networking devices makes it ideal for deployment in and monitoring of large-scale environments such as hospitals and other health care facilities. Also, flow exports to NetFlow analyzers are comparatively lightweight, which makes it possible for organizations to collect and store network audit data for extended periods of time.

NetFlow and SCADA

SCADA is a control-system architecture that facilitates communication channels with remote equipment as a means to control its functions. Examples of SCADA at work are remote management of Heating Ventilation and Air Conditioning (HVAC) systems, industrial equipment and Closed Circuit Television systems. SCADA is a type of industrial control system (ICS). Security around SCADA-enabled systems is paramount to human safety, as typical uses of SCADA include sewerage systems, power plants and water treatment facilities. Also, these management systems typically communicate via the Internet, making them vulnerable to hackers who may seek to use them as entry points into corporate networks.

NetFlow is well suited to monitoring SCADA environments and facilitates real-time visibility into communication between remote devices, making it possible to take corrective action on the fly if need be. It also enables users to make operational decisions based on both real-time and historic data that gives context to anomalies and events as they occur. Users are also able to perform functions remotely without visiting sites to perform updates and other maintenance tasks. By providing detailed and up-to-date information on business-critical systems, NetFlow enables businesses to be more proactive in the monitoring, management and maintenance of remote devices and systems.

Employing the right NetFlow reporting tool is key to managing compliance obligations

The missing link in leveraging the power of NetFlow in data retention and protection efforts is a powerful, comprehensive and robust NetFlow reporting tool. When considering your regulatory obligations, ensure that your choice of NetFlow reporting tool gives you the detailed, granular and contextual information you need to make insightful, data-driven decisions around the security, integrity and stability of your information assets.
