Archives

Category Archive for ‘Predictive AI Baselining’

Cyberwar Defense using Predictive AI Baselining

The world is bracing for a worldwide cyberwar as a result of current political events. Cyberattacks can be carried out by governments and hackers in an effort to destabilize economies and undermine democracy. Rather than launching attacks immediately, state-funded cyberwarfare teams have spent years studying vulnerabilities.

An important transition has occurred: the emergence of bad actors from unfriendly countries must now be taken seriously. The most dangerous criminals in this new cyberwarfare campaign are no longer hiding. Experts now believe that a country could conduct more sophisticated cyberattacks on national and commercial networks. Many countries are capable of conducting cyberattacks against other countries, and all parties appear to be prepared for cyber clashes.

So, how would cyberwarfare play out, and how can organizations defend against it?

The first step is to presume that your network has been penetrated or will be compromised soon, and that several attack routes will be employed to disrupt business continuity or vital infrastructure.

Denial-of-service (DoS/DDoS) attacks are capable of spreading widespread panic by overloading network infrastructures and network assets, rendering them inoperable, whether they are servers, communication lines, or other critical technologies in a region.

In 2021, ransomware became the most popular criminal tactic, but in 2022 state cyberwarfare teams are keen to use it for first strikes, propaganda, and military fundraising. It is only a matter of time before it escalates. Ransomware tactics are used in politically motivated attacks to encrypt computers and render them inoperable. Despite using publicly accessible ransomware code, this is now considered weaponized malware because there is little to no possibility that a decryption key will be released. Ransomware assaults by financially motivated criminals have a different objective, which must be identified before they cause financial and social damage, as detailed in a recent RANSOMWARE PAPER.

To win the cyberwar against either cyber extortion or cyberwarfare attacks, you must first have a complete 360-degree view of your network, with deep transparency and intelligent context to detect dangers within your data.

Given what we already know and the fact that more is continually being discovered, it makes sense to evaluate our one-of-a-kind integrated Predictive AI Baselining and Cyber Detection solution.

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention.

This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence. From a forensics perspective, you can only analyze the data you retain, and with large and growing network and cloud data flows, most tools (regardless of their marketing claims) cannot actually scale in retention and choose to drop records in favor of what they believe is salient data.

Imputed outcome data leads to misleading results and missing data causes high risk and loss!


So how exactly do you go about defending your organization's network and connected assets?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques, thereby solving the issue of archiving huge flow feeds with the smallest footprint and the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property.

Additionally, ransomware and other cyber-attacks continue to impact businesses, so you need both machine learning and end-point threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple yet potentially the most powerful tool for finding ransomware and other network and cloud issues. The footprints of all communications are carried in the flow data, and given the right tools you can retain all the evidence of an attack, infiltration, or exfiltration.

However, not all flow analytic solutions are created equal, and due to the inability to scale in retention the NetFlow ideal becomes unattainable. For a recently discovered ransomware or Trojan, such as "WannaCry", it is helpful to see whether it has been active in the past and when it started.

Another important aspect is having the context to analyze all the related traffic to identify concurrent exfiltration of an organization's Intellectual Property and to quantify and mitigate the risk. Threat hunting for RANSOMWARE requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. The hacker who has control of your system will likely install multiple back-doors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight Predictive AI Baselining analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight identifies your systems conversing with Bad Actors and allows you to backtrack through historical data to see how long it has been going on.
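As a rough illustration of what that backtracking looks like conceptually, here is a minimal Python sketch (not CySight's implementation; the threat list and flow records are invented) that scans retained flow records for conversations with known bad actors and reports the first and last time each was seen:

```python
from datetime import datetime

# Hypothetical threat-intelligence list of known bad-actor IPs (illustrative only).
BAD_ACTORS = {"203.0.113.7", "198.51.100.23"}

# Each retained flow record: (timestamp, source IP, destination IP, bytes).
retained_flows = [
    (datetime(2022, 1, 3, 2, 15), "10.0.0.12", "203.0.113.7", 18_400),
    (datetime(2022, 2, 19, 23, 41), "10.0.0.12", "203.0.113.7", 2_112_000),
    (datetime(2022, 3, 1, 9, 5), "10.0.0.44", "192.0.2.10", 512),
]

def backtrack(flows, bad_actors):
    """Return the first and last time each internal host spoke to a bad actor."""
    history = {}
    for ts, src, dst, _ in flows:
        peer = dst if dst in bad_actors else (src if src in bad_actors else None)
        if peer is None:
            continue
        host = src if peer == dst else dst
        first, last = history.get((host, peer), (ts, ts))
        history[(host, peer)] = (min(first, ts), max(last, ts))
    return history

for (host, peer), (first, last) in backtrack(retained_flows, BAD_ACTORS).items():
    print(f"{host} talked to bad actor {peer} from {first} to {last}")
```

The point of the sketch is that this answer is only available if the flow records were retained in the first place; drop the small flows and the history disappears with them.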

Summary

IdeaData’s CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs.

CySight’s Predictive AI Baselining, Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making threat intelligence, anomaly detection, forensics, compliance, performance analytics and IP accounting a breeze!

Let us help you today. Please schedule a time to meet https://calendly.com/cysight/

Advanced Predictive AI leveraging Granular Flow-Based Network Analytics.

IT’S WHAT YOU DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS.

Existing network management and network security point solutions are facing a major challenge due to the increasing complexity of the IT infrastructure.

The main issue is a lack of visibility into all aspects of physical network and cloud network usage, as well as increasing compliance, service level management, regulatory mandates, a rising level of sophistication in cybercrime, and increasing server virtualization.

With appropriate visibility and context, a variety of network issues can be resolved and handled by understanding the causes of network slowdowns and outages, detecting cyber-attacks and risky traffic, determining their origin and nature, and assessing their impact.

It’s clear that in today’s work-at-home, cyberwar, ransomware world, having adequate network visibility in an organization is critical, but defining how much visibility is considered “right” visibility is becoming more difficult, and more often than not even well-seasoned professionals make incorrect assumptions about the visibility they think they have. These misperceptions and malformed assumptions are much more common than you would expect and you would be forgiven for thinking you have everything under control.

When it comes to resolving IT incidents and security risks and assessing the business impact, every minute counts. The primary goal of Predictive AI Baselining coupled with deep contextual Network Forensics is to improve the visibility of Network Traffic by removing network blindspots and identifying the sources and causes of high-impact traffic.

Inadequate solutions (even the most well-known) lull you into a false sense of comfort, but because they tend to retain only the top 2% or 5% of network communications, they frequently cause false positives and red herrings. Cyber threats can come from a variety of sources. These could be the result of new types of crawlers or botnets, infiltration, and ultimately exfiltration that can destroy a business.

Networks are becoming more complex. Negligence, such as failing to update and patch security holes, can open the door to malicious outsiders. Your network could be used to download or host illegal materials, or it could be used entirely or partially to launch an attack. Ransomware attacks are still on the rise, and new ways to infiltrate organizations are being discovered. Denial of Service (DoS) and distributed denial of service (DDoS) attacks continue unabated, posing a significant risk to your organization. Insider threats can also occur as a result of internal hacking or a breach of trust, and your intellectual property may be slowly leaked as a result of negligence, hacking, or being compromised by disgruntled employees.

Whether you are buying a phone, a laptop, or a cyber security visibility solution, the same rule applies: marketers are out to get your hard-earned cash by flooding you with specifications and solutions whose abilities are radically overstated. Machine Learning (ML) and Artificial Intelligence (AI) are two of the most recent additions to the acronym list. The only thing you can know for sure, dear cyber and network professional reader, is that they hold a lot of promise.

One thing I can tell you from many years of experience in building flow analytics, threat intelligence, and cyber security detection solutions is that without adequate data your results become skewed and misleading. Machine Learning and AI enable high-speed detection and mitigation but without Granular Analytics (aka Big Data) you won’t know what you don’t know and neither will your AI!

In our current Covid world we have all come to appreciate, in some way, how quickly big data, ML, and AI, properly applied, can help mitigate a global health crisis. We only have to look back a few years, when drug companies did not have access to granular data, to see the severe impact that poor data had on people's lives. Thalidomide is one example. In the same way, when cyber and network visibility solutions only surface-scrape data, the resulting information will be incorrect and misleading and could seriously impact your network and the livelihoods of the people you work for and with.

The Red Pill or The Blue Pill?

The concept of flow or packet-based analytics is straightforward, yet they have the potential to be the most powerful tools for detecting ransomware and other network and cloud-related concerns. All communications leave a trail in the flow data, and with the correct tools, you can recover all evidence of an assault, penetration, or exfiltration.

Not all analytic systems are made equal, and the flow/packet ideal becomes unattainable for other tools because of their inability to scale in retention. Even well-known tools have serious flaws and are limited in their ability to retain complete records, which is often overlooked. They do not actually deliver the blindspot visibility they claim.

As already pointed out, over 95% of network and deep packet inspection (DPI) solutions struggle to retain even 2% to 5% of all data captured in medium to large networks, resulting in missed diagnoses and significantly misleading analytics that lead to misdiagnosis and risk!

It is critical to have the context and visibility necessary to assess all relevant traffic to discover concurrent intellectual property exfiltration and to quantify and mitigate the risk. It is essential to determine whether a newly found Trojan or ransomware has been active in the past, when it entered, and which systems are still at risk.

Threat hunting demands multi-focal analysis at a granular level that sampling and surface flow analytics methods simply cannot provide. It is ineffective to be alerted to a potential threat without the context and consequence. The hacker who has gained control of your system is likely to install many backdoors on various interconnected systems to re-enter when you are unaware. As ransomware progresses, it will continue to exploit weaknesses in infrastructures.

Often those most vulnerable are those who believe they have the visibility to detect.

Network Matrix of Knowledge

Post-mortem analysis of incidents is required, as is the ability to analyze historical behaviors, investigate intrusion scenarios and potential data breaches, qualify internal threats from employee misuse, and quantify external threats from bad actors.

The ability to perform network forensics at a granular level enables an organization to discover issues and high-risk communications happening in real-time, or those that occur over a prolonged period such as data leaks. While standard security devices such as firewalls, intrusion detection systems, packet brokers or packet recorders may already be in place, they lack the ability to record and report on every network traffic transfer over the long term.

According to industry analysts, enterprise IT security necessitates a shift away from prevention-centric security strategies and toward information- and end-user-centric strategies focused on an infrastructure's endpoints, as advanced targeted attacks are poised to render prevention-centric security strategies obsolete, especially now that cyberwar is a reality that will impact business and government alike.

As every incident response action in today’s connected world includes a communications component, using an integrated cyber and network intelligence approach provides a superior and cost-effective way to significantly reduce the Mean Time To Know (MTTK) for a wide range of network issues or risky traffic, reducing wasted effort and associated direct and indirect costs.

Understanding the Shift Towards Flow-Based Metadata for Network and Cloud Cyber-Intelligence

  • The IT infrastructure is continually growing in complexity.
  • Deploying packet capture across an organization is costly and prohibitive especially when distributed or per segment.
  • “Blocking & tackling” (Prevention) has become the least effective measure.
  • Advanced targeted attacks are rendering prevention‑centric security strategies obsolete.
  • There is a Trend towards information and end‑user centric security strategies focused on an infrastructure’s end‑points.
  • Without making use of collective sharing of threat and attacker intelligence you will not be able to defend your business.

So what now?

If prevention isn’t working, what can IT still do about it?

  • In most cases, information must become the focal point of our information security strategies. IT can no longer impose invasive controls on users' devices or the services they utilize.

Is there a way for organizations to gain a clear picture of what transpired after a security breach?

  • Detailed monitoring and recording of interactions with content and systems. Predictive AI Baselining, Granular Forensics, Anomaly Detection and Threat Intelligence ability is needed to quickly identify what other users were targeted, what systems were potentially compromised and what information was exfiltrated.

How do you identify attacks without signature-based mechanisms?

  • Pervasive monitoring enables you to identify meaningful deviations from normal behavior to infer malicious intent. Nefarious traffic can be identified by correlating real-time threat feeds with current flows. Machine learning can be used to discover outliers and repeat offenders.
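As a minimal illustration of the outlier idea in that last point (assuming flow records reduced to source/destination pairs; the addresses and counts are invented), the sketch below flags hosts whose fan-out of distinct destinations deviates sharply from the rest of the population, a simple stand-in for machine-learning-based outlier discovery:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical current flows as (source IP, destination IP) pairs.
flows = [("10.0.0.5", f"198.51.100.{i}") for i in range(1, 120)]   # one noisy host
flows += [(f"10.0.0.{h}", "192.0.2.10") for h in range(10, 30)]    # twenty quiet hosts

# Count distinct destinations (fan-out) per source host.
fan_out = defaultdict(set)
for src, dst in flows:
    fan_out[src].add(dst)
counts = {src: len(dsts) for src, dsts in fan_out.items()}

# Flag hosts more than three standard deviations above the mean fan-out.
mu, sigma = mean(counts.values()), pstdev(counts.values())
outliers = [src for src, c in counts.items() if sigma and (c - mu) / sigma > 3]
print("Possible scanners or beaconing hosts:", outliers)
```

A real system would baseline many more dimensions (bytes, packets, ports, time of day) and correlate them with threat feeds, but the principle of inferring intent from deviation is the same.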

Summing up

Network security and network monitoring have come a long way and jumped through all kinds of hoops to reach the point they have today. Unfortunately, over the years cyber marketing has outpaced cyber solutions, and we now have misconceptions that can do considerable damage to an organization.

The biggest threat is always the one you cannot see, and it hits you the hardest once it has established itself slowly and comfortably in a network undetected. Complete visibility can only be achieved through 100% collection and retention of all data traversing a network; otherwise, even a single blindspot will affect the entire organization as if it were never protected to begin with. Just as a chain fails at its weakest link, cybercriminals will find the perfect access point for penetration.

Inadequate solutions that only retain the top 2% or 5% of network communications frequently cause false positives and red herrings. You need to have 100% access to your comms data for Full Visibility, but how can you be sure that you will?

You need free access to Full Visibility to unlock all your data, and an intelligent Predictive AI technology that can autonomously and quickly identify what is not normal at both the macro and micro level of your network, cloud, servers, IoT devices, and other network-connected assets.

Get complete visibility with CySight now –>>>

5 Ways Flow Based Network Monitoring Solutions Need to Scale

Partial Truth Only Results in Assumptions

A common gripe for Network Engineers is that their current network monitoring solution doesn't provide the depth of information needed to quickly ascertain the true cause of a network issue. Imagine reading a book that is missing four out of every six words: understanding the context would be hopeless and the book would have almost no value. Many engineers have already over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons or by relying on disparate systems that often don't interface well with each other. There is also an often-mistaken belief that the network monitoring solutions they have invested in will suddenly give them the depth needed for the visibility required to manage complex networks.

A best-value approach to network monitoring is to use a flow-based analytics methodology such as NetFlow, sFlow or IPFIX.

The Misconception & What Really Matters

In this market, it's common for the industry to express a flow software's scaling capability in flows-per-second. Using flows-per-second as a guide to scalability is misleading, as it is often used to hide a flow collector's inability to archive flow data by overstating its collection capability, and it lets vendors present a larger number simply because they quote seconds instead of minutes. It's important to look not only at flows-per-second but to understand the picture created once all the elements are used together. Much like a painting of a detailed landscape, the finer the brush and the more colors used, the more complete and truly detailed the final picture will be.

Granularity is the prime factor to focus on, specifically the granularity retained per minute (the flow retention rate). Naturally, speed is a significant and critical factor to be aware of as well. The speed and flexibility of alerting, reporting, forensic depth, and diagnostics all play a strategic role but will be hampered when confronted with scalability limitations. Observing a solution's behavior under high flow variance or sudden bursts, and considering the number of devices and interfaces, makes clear the absolute significance of scalability in producing actionable insights and analytics. Without it, the ability to retain short-term and historical collections, which provide vital trackback information, would be nonexistent. To provide the visibility needed to accomplish the ever-growing number of tasks analysts and engineers deal with daily, and to resolve issues to completion, a Network Monitoring System (NMS) must be able to scale at every level of consumption and retention.
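To make the flows-per-second versus retention-per-minute distinction concrete, here is a small back-of-the-envelope sketch; the figures are illustrative only, not benchmarks of any particular product:

```python
# Illustrative figures only: a collector advertising 50,000 flows-per-second ingestion
# but physically archiving only the top 500 flow records per minute.
ingested_per_second = 50_000
retained_per_minute = 500

ingested_per_minute = ingested_per_second * 60
retention_ratio = retained_per_minute / ingested_per_minute

print(f"Ingested per minute : {ingested_per_minute:,}")
print(f"Retained per minute : {retained_per_minute:,}")
print(f"Retention ratio     : {retention_ratio:.4%}")  # ~0.0167% of flows survive for forensics
```

A headline ingestion number can therefore coexist with a retention rate that keeps well under one percent of the traffic available for later analysis.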

How Should Monitoring Solutions Scale?

A Flow-Based network monitoring software needs to scale in its collection of data in five ways:

Ingestion Capability – Also referred to as Collection, this is the number of flows that can be consumed by a single collector. It is a feat that most monitoring solutions are able to accomplish; unfortunately, it is also the one they pride themselves on. It is an important ability, but it is only the first of several crucial capabilities that determine the quality of insights and intelligence of a monitoring system. Ingestion is only the ability to take in data; it does not mean "retention", and therefore does very little on its own.

Digestion Capability – Also referred to as Retention, means the number of flow records that can be retained by a single collector. The most overlooked and difficult step in the network monitoring world. Digestion / Flow retention rates are particularly critical to quantify as they dictate the level of granularity that allows a flow-based NMS to deliver the visibility required to achieve quality Predictive AI Baselining, Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis, and Data Retention compliance. Without retaining data, you cannot inspect it beyond the surface level, losing the value of network or cloud visibility.

Multitasking Processes – Pertains to the multitasking strength of a solution and its ability to scale and spread the load of collection processes across multiple CPUs on a single server. This seems like an obvious approach to collection, but many systems have taken a linear, serial approach to handling and ingesting multiple streams of flow data. Such designs do not allow the technology to scale when new flow-generating devices, interfaces, or endpoints are added, forcing you to deploy multiple instances of the solution, which becomes ineffective and expensive.

Clustered Collection – Refers to the ability of a flow-based solution to run a single data warehouse that takes its input from a cluster of collectors acting as a single unit, as a means to load balance. In a large environment, you typically have very large equipment that sends massive amounts of data to collectors. To handle all that data, you must distribute the load among a cluster of collectors running on multiple machines, rather than overloading a single machine. This ability enables organizations to scale up their use of data instead of dropping it as they attempt to collect it.

Hierarchical Correlation – The purpose of hierarchical correlation is to take information from multiple databases and aggregate it into a single super SIEM. With the need to consume and retain huge amounts of data comes the need to manage and oversee that data in an intelligent way. Hierarchical correlation is designed to enable parallel analytics across distributed data warehouses and to aggregate their results. In the field of network monitoring, being overwhelmed with data to the point where you cannot find what you need is as useful as being given all the books in the world and asked a single question that is answered in only one of them.
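A minimal sketch of the idea behind hierarchical correlation, assuming each collector can already answer a "top talkers" query for its own data warehouse; the aggregation step merges those partial answers into one ranked view (the names and structures here are illustrative, not any vendor's API):

```python
from collections import Counter

# Hypothetical partial results: bytes per source host reported by each collector.
collector_a = {"10.0.0.5": 9_800_000, "10.0.0.7": 1_200_000}
collector_b = {"10.0.0.5": 4_500_000, "10.1.0.9": 7_300_000}
collector_c = {"10.1.0.9": 2_000_000, "10.2.0.3": 600_000}

def hierarchical_top_talkers(partials, n=3):
    """Merge per-collector aggregates into a single ranked view."""
    combined = Counter()
    for partial in partials:
        combined.update(partial)   # Counter.update adds byte counts per host
    return combined.most_common(n)

for host, total_bytes in hierarchical_top_talkers([collector_a, collector_b, collector_c]):
    print(f"{host}: {total_bytes:,} bytes across all collectors")
```

Each collector does its own heavy lifting in parallel; only the small, already-aggregated answers travel up the hierarchy.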

Network traffic visibility is considerably improved by reducing network blindspots and providing qualified sources and causes of the communications that impair business continuity. The capacity to capture flow at a finer level allows for new Predictive AI Baselining and Machine Learning application analysis and risk mitigation.

There are many critical abilities that a network monitoring solution must provide its users, and all of them are affected by whether or not the solution can scale.

Visibility is a range, not binary; the question is not whether you have visibility, but whether you have enough of it to achieve your goals and keep your organization productive and safe.

How to Use a Network Behavior Analysis Tool to Your Advantage

Cybersecurity threats can come in many forms. They can easily slip through your network’s defenses if you let your guard down, even for a second. Protect your business by leveraging network behavior analysis (NBA). Implementing behavioral analysis tools helps organizations detect and stop suspicious activities within their networks before they happen and limit the damage if they do happen.

According to Accenture, improving network security is the top priority for most companies in 2021. In fact, the majority of them have increased their spending on network security by more than 25% in recent months.

With that, here are some ways to use network behavior anomaly detection tools to your advantage.

1. Leverage artificial intelligence

Nowadays, you can easily leverage artificial intelligence (AI) and machine learning (ML) in your network monitoring. In fact, various software systems utilize  AI diagnostics to enhance the detection of any anomalies within your network. Through its dynamic machine learning, it can quickly learn how to differentiate between normal and suspicious activities.

AI-powered NBA software can continuously adapt to new threats and discover outliers without much interference from you. This way, it can provide early warning on potential cyberattacks before they can get serious. This can include DDoS, Advanced Persistent Threats, and Anomalous traffic.

Hence, you should consider AI diagnostics as one of the key criteria when evaluating network behavior analysis tools.

2. Take advantage of its automation

One of the biggest benefits of a network anomaly detection program is helping you save time and labor in detecting and resolving network issues. It is constantly watching your network, collecting data, and analyzing activities within it. It will then notify you and your network administrators of any threats or anomalies within your network.

Moreover, it can automatically mitigate some security threats from rogue applications to prevent sudden downtimes. It can also eliminate blind spots within your network security, fortifying your defenses and visibility. As a result, you or your administrators can qualify and detect network traffic passively.

3. Utilize NBA data and analytics

As more businesses become data-driven, big data gains momentum. It can aid your marketing teams in designing better campaigns or your sales team in increasing your business’ revenues. And through network behavior analysis, you can deep-mine large volumes of data from day-to-day operations.

For security engineers, big data analytics has become an effective defense against network attacks and vulnerabilities. It can give them deeper visibility into increasingly complex and larger network systems. 

Today’s advanced analytics platforms are designed to handle and process larger volumes of data. Furthermore, these platforms can learn and evolve from such data, resulting in stronger network behavior analytics and local threat detection.

4. Optimize network anomaly detection

A common issue with network monitoring solutions is their tendency to overburden network and security managers with false-positive readings. This is due to the lack of in-depth information to confirm the actual cause of a network issue. Hence, it is important to consistently optimize your network behavior analysis tool.

One way to do this is to use a flow-based analytics methodology for your network monitoring. You can do so with software like CySight, which uses artificial intelligence to analyze, segment, and learn from granular telemetry from your network infrastructure flows in real-time. It also enables you to configure and fine-tune your network behavior analysis for more accurate and in-depth monitoring.

5. Integrate with other security solutions

Enhance your experience with your network behavior analytics tool by integrating it with your existing security solutions, such as intrusion prevention systems (IPS), firewalls, and more.

Through integrations, you can cross-analyze data between security tools for better visibility and more in-depth insights on your network safety. Having several security systems working together at once means one can detect or mitigate certain behaviors that are undetectable for the other. This also ensures you cover all the bases and leave no room for vulnerabilities in your network.

Improving network security

As your business strives towards total digital transformation, you need to start investing in your network security. Threats can come in many forms. And once it slips past your guard, it might just be too late.

Network behavior analysis can help fortify your network security. It constantly monitors your network and traffic and notifies you of any suspicious activities or changes. This way, you can immediately mitigate any potential issues before they can get out of hand. Check out CySight to know more about the benefits of network behavior analysis.

But, of course, a tool can only be as good as the people using it. Hence, you must make sure that you hire the right people for your network security team. Consider recruiting someone with an online software engineering master's degree to help you strengthen your network.


Ref: Accenture Report

Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows per second a flawed way to measure a netflow collector’s capability?

Flows-per-second is often considered the primary yardstick to measure the capability of a netflow analyzer's flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means to provide network professionals the data to make sense of the traffic on their network without having to resort to expensive per segment based packet sniffing tools.

A flow record contains at minimum the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap, or other network gateway. A typical flow record will contain at minimum: Source IP, Destination IP, Source Port, Destination Port, Protocol, ToS, Ingress Interface, and Egress Interface. Flow records are exported to a flow collector where they are ingested and information oriented to the engineer's purposes is displayed.
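For readers who have not worked with flow data before, here is a minimal sketch of such a record as a data structure; the field names are generic and real templates vary by exporter and flow version:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Minimal fields typically present in a NetFlow/IPFIX-style flow record."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int          # e.g. 6 = TCP, 17 = UDP
    tos: int               # type-of-service byte
    ingress_ifindex: int   # index of the input interface
    egress_ifindex: int    # index of the output interface
    packets: int = 0
    bytes: int = 0

# Example: a single HTTPS transfer observed through a router (values invented).
record = FlowRecord("10.0.0.12", "93.184.216.34", 51514, 443, 6, 0, 2, 5,
                    packets=42, bytes=61_337)
print(record)
```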

Measurement

Measurement has always been how the IT industry expresses power and competency. However, a formula used to reflect power and ability changes when a technology design undergoes a paradigm shift.

For example, when expressing how fast a computer is, we used to measure the CPU clock speed. We believed that the higher the clock speed, the more powerful the computer. However, when multi-core chips were introduced, clock speeds dropped but CPUs in fact became more powerful. The primary clock-speed measurement became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading as it incorrectly reflects the actual power and capability of a flow collector to capture and process flow data and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure, and so is quantifying a product's scalability. There are various factors that can dramatically impact the ability to collect flows and to retain sufficient flows to perform higher-end diagnostics.

It's important to look not just at flows-per-second but at the granularity retained per minute (the flow retention rate), the speed and flexibility of alerting, reporting, forensic depth, and diagnostics, the scalability when impacted by high flow variance, sudden bursts, and the number of devices and interfaces, the speed of reporting over time, the ability to retain short-term and historical collections, and the confluence of these factors as it pertains to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine as appropriate granularity is needed to deliver the visibility required to perform Anomaly Detection, Network Forensics, Root Cause Analysis, Billing substantiation, Peering Analysis and Data retention compliance.

The higher the flows-per-second and the flow-variance the more challenging it becomes to achieve a high flow-retention-rate to archive and retain flow records in a data warehouse.

A vendor's capability statement might reflect a high flows-per-second consumption ability, but many flow software tools have retention-rate limitations by design.

It can mean that, irrespective of achieving a high flow collection rate, the netflow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the Top 10 bandwidth abusers. Netflow products of this kind can be easily identified because they often tend to offer benefits oriented primarily to identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course a very important benefit of a netflow analyzer. However, it has a marginal benefit today where a large amount of the abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar screen of many netflow analysis products.  Many abuses like DDoS, p2p, botnets and hacker or insider data exfiltration continue to occur and can at minimum impact the networking equipment and user experience. Lack of ability to quantify and understand small flows creates great risk leaving organizations exposed.
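To see why "top by bytes" retention misses exactly this kind of abuse, here is a small sketch with an invented traffic mix; the numbers are arbitrary, but the effect is general:

```python
# Illustrative traffic mix: many sizeable "normal" transfers plus thousands of tiny
# flows of the kind generated by DDoS floods, scans, or slow exfiltration.
normal_flows = [("10.0.0.5", "backup-server", 1_000_000 + i) for i in range(1_000)]
small_flows  = [("10.0.0.66", f"203.0.113.{i % 250}", 400) for i in range(5_000)]
all_flows = normal_flows + small_flows

# "Top N by bytes" retention, as used by many flow products.
TOP_N = 500
retained = sorted(all_flows, key=lambda f: f[2], reverse=True)[:TOP_N]

suspicious_kept = sum(1 for src, _, _ in retained if src == "10.0.0.66")
print(f"Small suspicious flows generated: {len(small_flows)}")
print(f"Small suspicious flows retained : {suspicious_kept}")
```

In this toy mix the abusive host generates thousands of flows, yet none of them survive the top-by-bytes cut, so nothing is left to analyze after the fact.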

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product's ability to collect and retain critical information required in today's world, where copious data has created severe network blind spots.

To qualify if a tool is really suitable for the purpose, you need to know more about the flows-per-second collection formula being provided by the vendor and some deeper investigation should be carried out to qualify the claims.

 

With this in mind here are 3 key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

  • Qualify whether the flows-per-second rate provided is a burst rate or a sustained rate.
  • Ask how the collection and retention rates might be affected if the flows have high flow variance (e.g. a DDoS attack).
  • Ask how collection, archiving, and reporting are impacted when flow variance is increased by adding many devices, interfaces, and distinct IPv4/IPv6 conversations, and test what degradation in speed you can expect after it has been recording for some time.
  • Ask how the collection and retention rates might change if additional fields or measurements are added to the flow template (e.g. MPLS, MAC address, URL, latency).

  2. How many flow records can be retained per minute?

  • Ask how the actual number of records inserted into the data warehouse per minute can be verified for short-term and historical collection.
  • Ask what happens to the flows that were not retained.
  • Ask what the flow retention logic is (e.g. Top Bytes, First N).

  3. What information granularity is retained, both short-term and historically?

  • Does the data's time granularity degrade as the data ages (e.g. 1 day of data retained per minute, 2 days retained per hour, 5 days retained per quarter)? A small roll-up sketch follows this list.
  • Can you control the granularity and, if so, for how long?
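As a rough sketch of what such time-granularity aging means in practice, the roll-up below mirrors the per-minute-to-hourly example in the last question; the data and intervals are illustrative only:

```python
from collections import defaultdict

def roll_up(samples, bucket_seconds):
    """Aggregate (timestamp_seconds, bytes) samples into coarser time buckets."""
    buckets = defaultdict(int)
    for ts, nbytes in samples:
        buckets[ts - ts % bucket_seconds] += nbytes
    return dict(sorted(buckets.items()))

# One hour of per-minute byte measurements (invented values).
per_minute = [(minute * 60, 1_000 + minute) for minute in range(60)]

recent = roll_up(per_minute, 60)      # day 1: per-minute granularity preserved
aged   = roll_up(per_minute, 3_600)   # later: only hourly totals survive the aging policy
print(len(recent), "data points at per-minute granularity")
print(len(aged), "data point(s) after hourly roll-up - the per-minute detail is gone")
```

Once the fine-grained points have been rolled up, no amount of clever reporting can recover the detail, which is why the aging policy matters as much as the headline collection rate.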

 

Remember – Rate of collection does not translate to information retention.

Do you know what's really stored in the software's database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that information retention granularity that provides a flow product's benefits.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Big Data – A Global Approach To Local Threat Detection

From helping prevent loss of life in the event of a natural disaster, to aiding marketing teams in designing more targeted strategies to reach new customers, big data seems to be the chief talking point amongst a broad and diverse circle of professionals.

For Security Engineers, big data analytics is proving to be an effective defense against evolving network intrusions thanks to the delivery of near real-time insights based on high volumes of diverse network data. This is largely thanks to technological advances that have resulted in the capacity to transmit, capture, store, and analyze swathes of data through high-powered and relatively low-cost computing systems.

In this blog, we’ll take a look at how big data is bringing deeper visibility to security teams as environments increase in complexity and our reliance on pervading network systems intensifies.

Big data analysis is providing answers to the data deluge dilemma

Large environments generate gigabytes of raw user, application and device metrics by the minute, leaving security teams stranded in a deluge of data. Placing them further on the back foot is the need to sift through this data, which involves considerable resources that at best only provide a retrospective view on security breaches.

Big data offers a solution to the issue of “too much data too fast” through the rapid analysis of swathes of disparate metrics through advanced and evolving analytical platforms. The result is actionable security intelligence, based on comprehensive datasets, presented in an easy-to-consume format that not only provides historic views of network events, but enables security teams to better anticipate threats as they evolve.

In addition, big data’s ability to facilitate more accurate predictions on future events is a strong motivating factor for the adoption of the discipline within the context of information security.

Leveraging big data to build the secure networks of tomorrow

As new technologies arrive on the scene, they introduce businesses to new opportunities – and vulnerabilities. However, the application of Predictive AI Baselining analytics to network security in the context of the evolving network is helping to build the secure, stable and predictable networks of tomorrow. Detecting modern, more advanced threats requires big data capabilities from incumbent intrusion prevention and detection (IDS\IPS) solutions to distinguish normal traffic from potential threats.

By contextualizing diverse sets of data, Security Engineers can more effectively detect stealthily designed threats that traditional monitoring methodologies often fail to pick up. For example, Advanced Persistent Threats (APT) are notorious for their ability to go undetected by masking themselves as day-to-day network traffic. These low visibility attacks can occur over long periods of time and on separate devices, making them difficult to detect since no discernible patterns arise from their activities through the lens of traditional monitoring systems.

Big data Predictive AI Baselining analytics lifts the veil on threats that operate under the radar of traditional signature and log-based security solutions by contextualizing traffic and giving NOCs a deeper understanding of the data that traverses the wire.

Gartner states that, “Big data Predictive AI Baselining analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises.”  It also eliminates the siloed approach to security monitoring by converging network traffic and organizing it in a central data repository for analysis; resulting in much needed granularity for effective intrusion detection, prevention and security forensics.

In addition, Predictive AI Baselining analytics eliminates barriers to internal collaborations between Network, Security and Performance Engineers by further contextualizing network data that traditionally acted as separate pieces of a very large puzzle.

So is big data Predictive AI Baselining analytics the future of network monitoring?

In a way, NOC teams have been using big data long before the discipline went mainstream. Large networks have always produced high volumes of data at high speeds – only now, that influx has intensified exponentially.

Thankfully, with the rapid evolution of computing power at relatively low cost, the possibilities of what our data can tell us about our networks are becoming more apparent.

The timing couldn’t have been more appropriate since traditional perimeter-based IDS\IPS no longer meet the demands of modern networks that span vast geographical areas with multiple entry points.

In the age of cloud, mobility, ubiquitous Internet and the ever-expanding enterprise environment, big data capabilities will and should become an intrinsic part of virtually every security apparatus.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Microsoft Nobelium Hack

SolarWinds Hackers Strike Again

Another painful round of cyber-attacks has been carried out by what Microsoft discovered to be a Russian state-sponsored hacking group called Nobelium, this time attacking a Microsoft support agent's computer and exposing customers' subscription information.

The activity tracked by Microsoft led to Nobelium, the same group that executed the SolarWinds Orion hack in December 2020. The attack was first discovered when Microsoft itself detected information-stealing malware on one of its customer support agents' machines. Infiltration occurred using password spraying and brute-force attacks in an attempt to gain access to Microsoft accounts.

Microsoft said Nobelium had targeted over 150 organizations worldwide in the last week, including government agencies, think tanks, consultants, and nongovernmental organizations, reaching over 3000 email accounts mostly in the USA but also present in at least 24 other countries. This event is said to be an “active incident”, meaning this attack is very much Live and more has yet to be discovered. Microsoft is attempting to notify all who are affected.

The attack was carried out through an email marketing account belonging to the U.S. Agency for International Development. Recipients received a phishing email that looked authentic but contained a malicious file hidden behind a link. Once the file was downloaded, the machine was compromised and a back door created, enabling the bad actor to steal data and infect other machines on the network.

In April this year, the Biden administration pointed the finger at the Russian Foreign Intelligence Service (SVR) as being responsible for the SolarWinds attack, exposing the Nobelium group. It appears that this exposure led the group to drop the stealth approach they had been using for months, and on May 25 they ran a "spear phishing" campaign exploiting a zero-day vulnerability.

Nobelium Phishing Attack

Staying in Control of your Network

IdeaData’s Marketing Manager, Tomare Curran, stated on the matter, “These kinds of threats can hide and go unnoticed for years until the botnet master decides to activate the malware. Therefore, it’s imperative to maintain flow metadata records of every transaction so that when a threat finally comes to light you can set Netflow Auditor’s HindSight Threat Analyzer to search back and help you find out if or when you were compromised and what else could have been impacted.”

NetFlow Auditor constantly keeps its eyes on your Network and provides total visibility to quickly identify and alert on who is doing What, Where, When, with Whom and for How Long right now or months ago. It baselines your network to discover unusual network behaviors and using machine learning and A.I. diagnostics will provide early warning on anomalous communications.

Cyber security experts at IdeaData do not believe the group will stop their operations due to being exposed. IdeaData is offering Netflow Auditor’s Integrated Cyber Threat Intelligence solution free for 60 days to allow companies to help cleanse their network from newly identified threats.

Have any questions?

Contact us at:  tomare.curran@netflowauditor.com

NetFlow for Advanced Threat Detection

Networks are vital assets to the business and require absolute protection against unauthorized access, malicious programs, and degradation of network performance. It is no longer enough to rely only on anti-virus applications.

By the time malware is detected and its signatures added to the antiviral definitions, access has been obtained and havoc wreaked, or the malware has buried itself inside the network and is gathering data and passwords for later exploitation.

An article by Drew Robb in eSecurity Planet on September 3, 2015 (https://www.esecurityplanet.com/network-security/advanced-threat-detection-buying-guide-1.html) cited the Verizon 2015 Data Breach Investigations Report where 70 respondents reported over 80,000 security incidents which led to more than 2000 serious breaches in one year.

The report noted that phishing is commonly used to gain access and the malware  then accumulates passwords and account numbers and learns the security defenses before launching an attack.  A telling remark was made, “It is abundantly clear that traditional security solutions are increasingly ineffectual and that vendor assurances are often empty promises,” said Charles King, an analyst at Pund-IT. “Passive security practices like setting and maintaining defensive security perimeters simply don’t work against highly aggressive and adaptable threat sources, including criminal organizations and rogue states.”

So what can businesses do to protect themselves? How can they be proactive in addition to the passive perimeter defenses?

The very first line of defense is better education of users. In one test, an e-mail message was sent to the users, purportedly from the IT department, asking for their passwords in order to “upgrade security.” While 52 people asked the IT department if this was a real request, 110 mailed their passwords right back. In their attempts to be productive, over half of the recipients of phishing e-mails responded within an hour!

Another method of advanced threat protection is NetFlow Monitoring.

IT departments and managed service providers (MSPs) can use monitoring capabilities to detect, prevent, and report adverse effects on the network.

Traffic monitoring, for example, watches the flow of information and data traversing critical nodes and network links. Without using intrusive probes, this information helps decipher how applications are using the network and which ones are becoming bandwidth hogs. These are then investigated further to determine what is causing the problem and how best to manage the issue. Just adding more bandwidth is not the answer!

IT departments review this data to investigate which personnel are the power users of which applications, when the peak traffic times are and why, and similar information in addition to flagging and diving in-depth to review anomalies that indicate a potential problem.

If there are critical applications or services that clients rely on for key account revenue streams, IT can provide real-time monitoring and display of the health of the networks supporting those applications and services. It is this ability to observe, analyze, and report on network health and patterns of usage that provides the ability to make better decisions at the speed of business that CIOs crave.

CySight excels at network Predictive AI Baselining analytics solutions. It scales to collect, analyze, and report on NetFlow datastreams of over one million flows per second. Its team of specialists has prepped, installed, and deployed over 1000 CySight performance monitoring solutions, including at over 50 Fortune 1000 companies and some of the largest ISPs/telcos in the world. A global leader recognized with awards for Security and Business Intelligence at the World Congress of IT, IdeaData is also welcomed by Cisco as a Technology Development Partner.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

3 Ways Anomaly Detection Enhances Network Monitoring

With the increasing abstraction of IT services beyond the traditional server room, computing environments have evolved to be more efficient but also far more complex. Virtualization, mobile device technology, hosted infrastructure, Internet ubiquity, and a host of other technologies are redefining the IT landscape.

From a cybersecurity standpoint, the question is how best to manage the growing complexity of environments and the changes in network behavior that come with every introduction of new technology.

In this blog, we’ll take a look at how anomaly detection-based systems are adding an invaluable weapon to Security Analysts’ arsenal in the battle against known – and unknown – security risks that threaten the stability of today’s complex enterprise environments.

Put your network traffic behavior into perspective

By continually analyzing traffic patterns at various intersections and time frames, performance and security baselines can be established, against which potential malicious activity is monitored and managed. But with large swathes of data traversing the average enterprise environment at any given moment, detecting abnormal network behavior can be difficult.

Through filtering techniques and algorithms based on live and historical data analysis, anomaly detection systems are capable of detecting even the most subtly crafted malicious software that may pose as normal network behavior. Also, anomaly-based systems employ machine-learning capabilities to learn about new traffic as it is introduced and provide greater context to how data traverses the wire, thus increasing its ability to identify security threats as they are introduced.

NetFlow is a popular source of network traffic data for building accurate performance and cybersecurity baselines with which to distinguish normal network activity patterns from potentially alarming network behavior.
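As a toy illustration of such a baseline (not a production detector), the sketch below learns a mean and standard deviation for bytes observed per interval from historical NetFlow totals and flags intervals that deviate sharply from it; all figures are invented:

```python
from statistics import mean, pstdev

# Hypothetical history: bytes observed on a link in successive 5-minute intervals.
history = [48_000, 52_000, 50_500, 47_800, 51_200, 49_900, 50_300, 48_700]
baseline_mean, baseline_std = mean(history), pstdev(history)

def is_anomalous(observed_bytes, threshold=3.0):
    """Flag an interval whose volume deviates more than `threshold` sigmas from baseline."""
    if baseline_std == 0:
        return False
    return abs(observed_bytes - baseline_mean) / baseline_std > threshold

for observed in (50_100, 183_000):   # a normal interval and a suspicious spike
    print(observed, "anomalous" if is_anomalous(observed) else "within baseline")
```

Real anomaly detection baselines many metrics per host, application, and time of day, and keeps re-learning as traffic evolves, but the underlying comparison against learned normal behavior is the same.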

Anomaly detection places Security Analysts on the front foot

An anomaly is defined as an action or event that is outside of the norm. But when a definition of what is normal is absent, loopholes can easily be exploited. This is often the case with signature-based detection systems that rely on a database of pre-determined virus signatures that are based on known threats. In the event of a new and yet unknown security threat, signature-based systems are only as effective as their ability to respond to, analyze and neutralize such new threats.

Since signatures do work well against known attacks, signature-based systems are by no means useless in defending your network. However, they lack the flexibility of anomaly-based systems in the sense that they are incapable of detecting new threats. This is one of the reasons signature-based systems are typically complemented by some iteration of a flow-based anomaly detection system.

Anomaly based systems are designed to grow alongside your network

The chief strength behind anomaly detection systems is that they allow Network Operation Centers (NOCs) to adapt their security apparatus according to the demands of the day. With threats growing in number and sophistication, detection systems that can discover, learn about and provide preventative methodologies  are the ideal tools with which to combat the cybersecurity threats of tomorrow. NetFlow Anomaly detection with automated diagnostics does exactly this by employing machine learning techniques to network threat detection and in so doing, automating much of the detection aspect of security management while allowing Security Analysts to focus on the prevention aspect in their ongoing endeavors to secure their information and technological investments.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

Identifying ToR threats without De-Anonymizing

Part 3 in our series on How to counter-punch botnets, viruses, ToR and more with Netflow focuses on ToR threats to the enterprise.

ToR (aka onion routing) and anonymized p2p relay services such as Freenet are where we can expect to see many more attacks, as well as malevolent actors who are out to deny your service or steal your valuable data. It's useful to recognize that flow Predictive AI Baselining analytics provides the best and cheapest means of de-anonymizing or profiling this traffic.

“The biggest threat to the Tor network, which exists by design, is its vulnerability to traffic confirmation or correlation attacks. This means that if an attacker gains control over many entry and exit relays, they can perform statistical traffic analysis to determine which users visited which websites.” (source)

In a paper entitled "On the Effectiveness of Traffic Analysis Against Anonymity Networks Using Flow Records", Sambuddho Chakravarty, Marco V. Barbera, Georgios Portokalidis, Michalis Polychronakis, and Angelos D. Keromytis point out that, in the lab, "81 Percent of Tor Users Can Be Hacked with Traffic Analysis Attack".

It continues to be a cat-and-mouse game that requires both new, innovative approaches to find ToR weaknesses and correlation attacks to identify routing paths. Doing this in real life is becoming much simpler, but the real challenge is that it requires the cooperation and coordination of businesses, ISPs, and governments. Deploying cheap, easy-to-install micro-taps that can act concurrently as a ToR relay and a flow exporter, combined with a NetFlow toolset that can scale hierarchically to analyze flow data with path analysis at each point in parallel across a multitude of ToR relays, can make this task easy and cost-effective.

So what can we do about ToR today?

Even without de-anonymizing ToR traffic there is a lot of intelligence that can be gained simply by analyzing ToR Exit and relay behavior. Using a flow tool that can change perspectives between flows, packets, bytes, counts or tcp flag counts allows you to qualify if a ToR node is being used to download masses of data or is trickling out data.

Patterns of data can be very telling as to the nature of the data transfer and can be used in conjunction with other information as a useful indicator of risk. As for supposedly secured networks, I can't think of any instance where ToR/onion routing, or for that matter any external VPN or proxy service, needs to be used from within what is supposed to be a locked environment. Once ToR traffic has been identified communicating in a sensitive environment, it is essential to immediately investigate and stop the IP addresses engaging in this suspicious behavior.
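Here is a minimal sketch of that kind of profiling, assuming you already have a list of known ToR exit/relay addresses (such lists are published) and flow records with directional byte counts; the addresses, counts, and thresholds below are illustrative placeholders:

```python
# Hypothetical ToR exit/relay addresses (illustrative; real lists are published regularly).
TOR_NODES = {"171.25.193.9", "185.220.101.4"}

# Flow records: (internal host, external peer, bytes sent out, bytes received).
flows = [
    ("10.0.0.21", "185.220.101.4", 950_000_000, 3_200_000),   # mostly outbound
    ("10.0.0.33", "171.25.193.9", 40_000, 2_600_000_000),     # mostly inbound
    ("10.0.0.40", "93.184.216.34", 10_000, 250_000),          # not a ToR node
]

for host, peer, bytes_out, bytes_in in flows:
    if peer not in TOR_NODES:
        continue
    ratio = bytes_out / max(bytes_in, 1)
    if ratio > 10:
        verdict = "heavily outbound - possible exfiltration via ToR"
    elif ratio < 0.1:
        verdict = "heavily inbound - bulk download via ToR"
    else:
        verdict = "balanced ToR traffic - investigate further"
    print(f"{host} <-> ToR node {peer}: {verdict}")
```

Switching the same analysis between bytes, packets, flow counts, or TCP flag counts, as described above, gives further clues to the nature of the transfer without ever needing to de-anonymize the traffic itself.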

Using a tool like CySight’s advanced End-Point Threat Detection allows NetFlow data to be correlated against hundreds of thousands of IP addresses of questionable reputation including ToR exits and relays in real-time with comprehensive historical forensics that can be deployed in a massively parallel architecture.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility