When small organizations grow into enterprises, they also grow their branches. Literally. Well, at least as remote sites, branch offices and DR centers. With cost saving now a primary factor in growth, is it affordable to have IT staff at every remote location? Having IT staff monitor the traffic at DR centers and major branches is justified, but not at sites with just a couple of switching and routing devices.

The best option that comes to the forefront is NetFlow. NetFlow technology can give highly granular reports, and with almost all major vendors and device series supporting NetFlow or similar flow formats, there is no need to add hardware at extra cost, which again saves money. All you need is software that can collect the flow packets and generate the reports. But this raises other questions. How can you collect flows from devices in branch offices spread globally? If you already have a NetFlow tool deployed, will it scale to handle thousands of interfaces and flow rates of 40,000 to 60,000 flows per second? Along with detailed reports for remote locations, branched networks also need features that cater to their specifics, such as a time zone based view. Can the existing tool provide this?

Now, even if your existing application can do all this, questions arise on the feasibility of sending a large volume of data over valuable Internet links. The priority is always to save the available Internet bandwidth for business-critical applications. To make monitoring easier, enterprises even try deploying separate instances of the same tool at the branches. But this does not help. Logging in to separate installations to check the status of multiple links, then generating reports for each interface and consolidating them, is a daunting task.

In such a scenario, NetFlow Analyzer Enterprise edition, with its distributed flow collectors and central server, is the best-suited solution. The Enterprise edition of NetFlow Analyzer has flow collectors which can be deployed at various branches or geographic locations. The devices at a branch or site send flows to the collectors. The collectors collect the flows, compress them, and then send them over HTTPS (yes, security for valuable data!) to the central server.

The central server is where all the reporting and analysis take place. It collects data from the collectors, processes it and stores it in the database from which reports are generated. You get real-time visibility into the usage statistics of links from globally spread branches in a single console.

Distributed architecture

The distributed flow collection and reporting engine gives the Enterprise edition the capability to monitor up to 20,000 interfaces and flow rates of up to 60,000 flows per second. This rules out the scalability and performance issues that might otherwise come up with an integrated application trying to handle a large number of interfaces and a high flow rate. The features available in this edition are also exactly what a distributed setup needs.

Tree view for devices helps group devices based on their locations (or your preferred criteria) for easier selection by users. This way, users do not have to search through the complete list of devices to find the one for which bandwidth metrics are needed. Time zone based view lets users see reports in the time zone the device is in, rather than the time zone where the product is installed. Administrators can also create multiple user accounts, assign devices or IP Groups to them, and set the time zone in which each user views reports. Do visit here to view the complete list of features available in the Enterprise edition.

You can also leave behind your worries about exported NetFlow packets consuming a large share of your Internet bandwidth. The NetFlow data is compressed using Java technology before being sent from the collector to the central server. This brings the volume of exported NetFlow data down to less than 20% of its actual size and helps save your valuable Internet bandwidth for critical applications. Moreover, since the data is sent over an HTTPS connection, the NetFlow data is secure, and the GUIs of both the collector and the central server have HTTPS enabled by default.
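To get a feel for why compressing flow data before export pays off, here is a minimal Python sketch. This is not the product's actual Java implementation, and the record fields are made up for illustration; it simply gzip-compresses a batch of flow-like records and reports the size ratio:

```python
import gzip
import json

# Hypothetical flow records (illustrative fields only; real NetFlow
# export is a binary format, not JSON).
flows = [
    {"src": "10.0.0.%d" % (i % 254 + 1), "dst": "192.168.1.10",
     "src_port": 40000 + i, "dst_port": 443, "proto": "tcp",
     "bytes": 1500 * i, "packets": i}
    for i in range(1000)
]

raw = json.dumps(flows).encode("utf-8")
compressed = gzip.compress(raw)

ratio = len(compressed) / len(raw)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes "
      f"({ratio:.0%} of original)")
```

Flow records are highly repetitive (shared addresses, ports and protocols), which is exactly the kind of data general-purpose compression shrinks dramatically before it ever touches the WAN link.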

Now, with the central console, reports from geographically spread branches and DR sites are at hand. There is no need to log in to different installations and generate reports from each one separately. You can also select which interfaces are displayed in the dashboard, so at a single glance the network team sees the status of highly utilized or critical links.

All enterprises prefer uninterrupted monitoring and reporting of critical links, applications and servers. But what can be done when the central server has to be shut down for maintenance, or when it goes down unexpectedly? Failover is the feature for exactly this. The data stored in the central server is replicated to a secondary central server, and any time the primary server goes down, the secondary is automatically activated after a fixed time. Failover thus gives you automatic backup and redundancy of data.
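The "activated after a fixed time" behavior can be pictured as a heartbeat check: the secondary promotes itself only after the primary has been unreachable for a set number of consecutive checks. The sketch below is an illustrative simulation, not the product's actual mechanism; the interval and threshold values are assumptions:

```python
# Illustrative failover sketch: the secondary takes over after the
# primary misses a fixed number of consecutive heartbeats.

HEARTBEAT_INTERVAL = 1.0    # seconds between checks (assumed value)
MISSES_BEFORE_FAILOVER = 3  # the "fixed time", expressed in missed checks

def monitor(primary_alive):
    """primary_alive: callable returning True while the primary responds."""
    misses = 0
    while True:
        if primary_alive():
            misses = 0  # primary is healthy; reset the counter
        else:
            misses += 1
            if misses >= MISSES_BEFORE_FAILOVER:
                return "secondary activated"
        # A real monitor would sleep HEARTBEAT_INTERVAL here;
        # omitted so the sketch runs instantly.

# Simulate a primary that answers twice and then goes down for good.
responses = iter([True, True, False, False, False])
print(monitor(lambda: next(responses)))  # secondary activated
```

Requiring several consecutive misses, rather than failing over on the first missed heartbeat, avoids spurious switchovers caused by a single dropped packet or a momentary load spike.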

With all these features and its scalability, NetFlow Analyzer Enterprise edition is the best-suited solution for bandwidth monitoring and traffic analysis. Do download the central server and collector from here and start your 30-day evaluation with free technical support from our team.

Regards,
Don Thomas Jacob