Wednesday, 4 November 2015

5 Tips To Choose The Right Bandwidth Management Solutions

www.xsinfosol.com
Scalability

Scalability is the biggest challenge in bandwidth management: can a bandwidth management box handle multiple gigabit or 10 Gb/s links with full QoS rule sets? Few bandwidth management vendors have 10G solutions. These days almost any switch or router can handle 10 Gb/s, so what is special about bandwidth management? Switching involves very little logic, so it can be siliconized on an ASIC chip; bandwidth management involves very complex logic, and you cannot simply take complex software and siliconize it.

Most bandwidth management vendors build their solutions on *nix, which must be substantially reworked from the ground up for performance: SMP handling, NIC drivers, and the network stack all need to exploit multi-core platforms and avoid locking as much as possible. You cannot expect stock open-source components to scale well. Have you ever seen dummynet hold up under a 400K pps load? Definitely not.

For Internet network service providers, it is critical to have a scalable bandwidth management solution as their uplinks grow rapidly. Most bandwidth management vendors cannot even handle a 500 Mb/s link with QoS rule sets applied, yet they advertise 1 Gb/s or more.

The conclusion: test before you purchase. Put the bandwidth management box on your live network with your QoS rule sets and check whether it introduces latency or packet loss, and whether its CPU usage exceeds 50%.
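As a sketch of what such a live test might report, assuming you have already collected round-trip-time samples with and without the box in the path (the sample values below are illustrative, with lost probes recorded as None):

```python
# Sketch: summarize a before/after latency test for a bandwidth
# management box. The RTT samples are hypothetical illustrations.

def summarize(rtts_ms):
    """rtts_ms: RTT samples in milliseconds, None for lost probes."""
    ok = sorted(r for r in rtts_ms if r is not None)
    loss = 1.0 - len(ok) / len(rtts_ms)
    p95 = ok[int(0.95 * (len(ok) - 1))]
    return {"p95_ms": p95, "loss": loss}

baseline = [2.1, 2.3, 2.2, 2.4, 2.2, 2.3, 2.1, 2.5, 2.2, 2.3]
with_box = [2.4, 2.6, 9.8, 2.7, None, 2.8, 2.6, 10.2, 2.7, 2.9]

before, after = summarize(baseline), summarize(with_box)
added_p95 = after["p95_ms"] - before["p95_ms"]
print(before, after, added_p95)
```

If the box adds several milliseconds at the 95th percentile or drops probes under load, as in this made-up run, it fails the test.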

Bandwidth management philosophy

Over the past few years we have seen a massive explosion in the types of traffic and applications that traverse IP networks. There are sound reasons for this, as globally we take full advantage of the technologies at hand in our everyday lives. However, as can be expected, this has some negative side effects, especially pressure on bandwidth capacity and the resulting degradation in quality.

This is where bandwidth management solutions come in, so the question is how to manage bandwidth.

Deep Packet Inspection


Deep packet inspection (DPI) is application-based traffic optimization, which uses the properties of each network protocol to provide the minimum bandwidth that guarantees acceptable quality. Bulk file transfer applications are given the lowest priority since they are typically non-interactive and long-lived. For example, a one-way bulk application such as a file download would get the lowest priority, one-way streaming media such as YouTube® might be next, and an interactive application such as VoIP would have the highest priority. As the network becomes heavily congested, this prioritization becomes important, since every application degrades if nothing is prioritized.

Internet standards anticipated that 'differentiated services' would be offered, where applications 'mark' their packets into the appropriate class based on priority need. For example, VoIP marks itself as high priority given its real-time bandwidth needs, while a file download marks itself at a lower priority. This gives real-time applications priority and prevents bulk applications from dominating the network. This method, however, is flawed when used in a consumer access setting. Broadband access networks (DOCSIS, DSL) do not support differentiated services due to technological limitations. Additionally, differentiated services raise a fairness issue between subscribers and create an incentive to 'cheat', i.e. theft of QoS. Application writers sometimes marked their application's packets as the highest priority, and the honor system failed.
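For concreteness, the self-marking the standards anticipated looks like this on an ordinary socket: the application sets the IP TOS/DSCP byte itself, and nothing stops it from claiming the top class. A sketch using Python's standard socket API on Linux; 0xB8 is the conventional TOS byte for DSCP 'expedited forwarding', the class a VoIP application would claim:

```python
import socket

# Sketch: an application "marking" its own packets, as differentiated
# services anticipated. 0xB8 is the TOS byte for DSCP EF (expedited
# forwarding), the class a VoIP app would claim -- and nothing prevents
# a bulk downloader from claiming it too, which is why the honor
# system failed.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

# The kernel will now stamp this socket's outgoing packets with the TOS.
marked = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(marked))
sock.close()
```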

Service providers have resorted to marking traffic on behalf of the user, automatically choosing the guarantees that are needed. This application optimization delivers excellent overall quality and subscriber satisfaction.

However, DPI is fundamentally flawed for Internet network service providers:

Controlling user activity requires many rules and DPI-based application recognition. But policies that depend on explicitly identifying the application are problematic: there will always be unidentified traffic as signatures change or, worse still, as traffic becomes encrypted. That traffic is then thrown into an "all other" classification and managed under a single umbrella rule. It also implies endless maintenance and application-signature upgrade costs.

Multiple traffic types, some good and some bad, end up competing for restricted bandwidth. There are also many legal forms of P2P downloading that get restricted by these general catch-all shaping rules.

The protocol method fails because it ignores the one component of bandwidth management that matters most: volume. The reason P2P protocols are considered abusive is that they are automated. What most people do not realize is that most of the traffic generated by P2P applications is Web and ICMP traffic: directory contents are exchanged over the Web, and servers are discovered with ICMP. The abuse comes not from the file downloads themselves but from automation; the application generates traffic with a volume equivalent to hundreds of users. A protocol method that defines Web traffic as a "good" protocol will not work as expected, because these applications drive the network's volume of Web traffic so high that you either have congestion or have to limit users who are innocently surfing the web. The protocol method is a losing battle that fails to solve the problem of network congestion.

The biggest problem with DPI is that it is easily defeated. The first way to defeat it is to make your protocols complicated and change them regularly; the P2P people do this with fervor. A way to absolutely defeat it is encryption. How can you inspect a packet when you cannot determine its contents? The truth is, you can't. You don't even need encryption; you can just scramble your headers or use variable codes. Bandwidth management boxes on high-speed networks do not have the CPU capacity to attempt decrypting thousands of packets per second. And you don't have to be an evil genius to defeat DPI; it can happen accidentally. For example, IPsec traffic cannot be managed with DPI or the protocol method. P2P applications can easily launch encrypted tunnels to defeat any control attempt by an upstream bandwidth management box.

Per-user management


Most ISPs and universities are interested in providing fair access to bandwidth for their customers and users. The way to provide per-user fairness is to manage by user. The power of per-user management is that you do not care what they're doing. You do not have to know about every protocol ever conceived. And you do not have to restrict access to some protocols altogether, since any customer running abusive protocols will only consume their own bandwidth. You do not need to upgrade every time something changes, and you do not need to buy expensive support. Per-user controls also cannot be defeated. Since you are controlling by address or range of addresses, tunneling, encryption, and header scrambling cannot be used to get around your controls. The customer has no choice but to use their assigned address, so you can always identify their traffic and manage its volume as a single, simple, easily manageable entity.
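A per-user fairness policy of this kind reduces to something like one token bucket per subscriber address; a minimal sketch, with illustrative rates and addresses:

```python
class TokenBucket:
    """Sketch: per-address rate limiter. rate_bps is the user's fair
    share; burst_bytes allows short bursts above the steady rate."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # bytes per second
        self.burst = float(burst_bytes)
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# One bucket per subscriber IP -- the protocol inside doesn't matter.
buckets = {"203.0.113.7": TokenBucket(rate_bps=1_000_000, burst_bytes=50_000)}

b = buckets["203.0.113.7"]
r1 = b.allow(40_000, now=0.0)   # within the burst allowance
r2 = b.allow(40_000, now=0.1)   # burst spent, refill too small yet
r3 = b.allow(40_000, now=1.0)   # enough time passed to refill
print(r1, r2, r3)  # True False True
```

Abusive or encrypted traffic simply drains that user's own bucket; no signature database is needed.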

The most productive and profitable way for service providers to generate revenue is to sell raw bandwidth with the highest possible efficiency. When service providers start trying to micro-manage users' traffic, they open a Pandora's box of problems. Large service providers cannot recruit enough talent to manage these services, educate customers, and deal with customers whose expectations are well beyond what the provider can deliver. Selling raw, tiered service allows service providers to streamline their operations and minimize day-to-day involvement in customer problems. It gives them easy-to-understand services that are easy to provide with minimal staff, and it pushes the responsibility of micro-management to the end user, where it is easier to do and where dedicated staff becomes more cost-effective.

There are legal concerns about providers dictating what customers can do on the internet, and even if your controls pass the legal test, there is public outcry about providers claiming to sell raw internet access and then not allowing "certain" kinds of traffic to pass. Using a per-user approach makes your controls transparent, as there is no limitation on what a customer can do, as long as they do not exceed their fair share of bandwidth.

The conclusion is that while Deep Packet Inspection presentations include nifty graphs and seemingly exciting possibilities, it is only effective in streamlining small, very predictable networks. The basic concept is fundamentally flawed. The problem with large networks is not that bandwidth needs to be shifted from "bad" protocols to "good" protocols; the problem is volume. Volume must be managed in a way that maintains the strategic goals of the network administration, and this can almost always be achieved with a macro approach of allocating a fair share to each entity that uses the network. Any attempt to micro-manage a large network usually makes it worse, or at best simply shifts bottlenecks from one place to another.

Underlying Technologies

Understanding how technologies in bandwidth management work is extremely important when selecting a product for your network.

To begin, let us assume we have one box that sits between an internet connection and some network, and that the purpose of the box is to somehow affect the flow of traffic to and from the internet. The box has two ports: one going to an internet router, the other connected to a switch that services any number of networks within the "intranet". All traffic must flow through the box to get from the internet to the intranet or vice versa, so every data frame can potentially be affected by the box.

Normally, without any sort of bandwidth management in place, data frames are passed through the box as quickly as possible. Data frames come in from the Internet and are passed as quickly as possible to the port connected to the intranet, and vice versa. This is how your typical router or switch functions.

Now let us compare how some bandwidth management technologies running on the box work.

Queuing Algorithms

Normally, all data to be sent is put into a "queue". Since the connection on one side of the box may be faster than the connection on the other, data frames may arrive on one side faster than they can leave the other. You do not want the box to throw data away, so it is held in a queue until the slower interface can process it. The data in the queue is then processed sequentially on a first-come, first-served basis. Typically it is sent out as fast as the target medium allows, which gives the best possible throughput with the lowest possible latency.

CBQ, the most popular of these techniques and the one used in most low-end bandwidth management products (such as MikroTik), stands for "class-based queuing". It is a fairly simple technique: data is categorized into user-defined "classes", and a queue is maintained for each class. The queues are processed at specific intervals and in order of priority, so data frames are effectively reordered according to the specified priorities. This ensures that higher-priority data frames are always sent out before lower-priority frames; high-priority data therefore cannot be bottlenecked by anything with a lower priority.
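The class-and-queue mechanics described above can be sketched in a few lines: frames are sorted into per-class queues, and the dispatcher always drains higher-priority classes first (the class names and priorities are illustrative, not from any particular product):

```python
from collections import deque

# Sketch of CBQ-style scheduling: one queue per user-defined class,
# dispatched in strict priority order. Lower number = higher priority.
classes = {"voip": 0, "web": 1, "bulk": 2}
queues = {name: deque() for name in classes}

def enqueue(frame, cls):
    queues[cls].append(frame)

def dequeue():
    # Always serve the highest-priority non-empty class first, so
    # high-priority frames are never bottlenecked behind lower ones.
    for cls in sorted(classes, key=classes.get):
        if queues[cls]:
            return queues[cls].popleft()
    return None

enqueue("big-download-1", "bulk")
enqueue("page-load-1", "web")
enqueue("voice-pkt-1", "voip")

order = [dequeue(), dequeue(), dequeue()]
print(order)  # voip frame first, bulk frame last
```

Note how the cost of `dequeue` grows with the number of classes, which hints at why hundreds of per-host classes scale poorly.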

The negatives of CBQ are that it introduces latency (delays) into virtually all managed traffic, and that it is mono-directional: only outgoing traffic can be controlled. It is also impractical to maintain a very large number of classes (queues) due to excessive overhead and loss of precision, so controlling hundreds of streams (or hosts, as an ISP might want to) will not work well. CBQ processes the defined class queues in round-robin fashion, so as the number of defined classes increases, the efficiency of the management decreases.

CBQ works best in a corporate environment where the user has control of both ends of the link, and where there are a few identifiable types of traffic that need to be managed.

HTB, the latest "craze" in the Linux camp, is yet another queuing technique with the same problems: it does not scale well, and it is still a queue-based model. HTB addresses some of the precision issues of earlier queuing algorithms, but it is simply not a technique you can count on in the long run to manage a large network.

TCP Rate Limiting and Window Manipulation

TCP rate limiting is a technique that "paces" traffic by "faking out" the transmitter: it artificially changes the TCP window and paces ACKs, effectively throttling the traffic.

TCP rate limiting is effective in shaping traffic and reducing the amount of traffic that needs to be queued and managed. It reduces flows and improves overall performance through your network.

This is the only natural way to reduce the number of packets on the network at any given time, and therefore reduce congestion, allowing higher-priority traffic free passage through the network.
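The mechanics are simple in principle: the advertised window sits at a fixed offset in the TCP header, and the box computes a window that caps the flow at its target rate (window ≈ rate × RTT). A sketch on a raw header, ignoring window scaling and leaving the checksum update as a comment; all field values are illustrative:

```python
import struct

def clamp_tcp_window(tcp_header, rate_bps, rtt_s):
    """Sketch: rewrite the 16-bit advertised window (bytes 14-15 of the
    TCP header) so the sender cannot keep more than rate * RTT bytes in
    flight. Ignores the window-scale option; a real box must also
    update the TCP checksum after patching the field."""
    target = min(0xFFFF, int(rate_bps / 8 * rtt_s))   # bytes in flight
    current = struct.unpack_from("!H", tcp_header, 14)[0]
    new_win = min(current, target)
    patched = bytearray(tcp_header)
    struct.pack_into("!H", patched, 14, new_win)
    return bytes(patched), new_win

# A 20-byte TCP header advertising a 64 KB window (other fields zeroed
# purely for illustration).
hdr = bytearray(20)
struct.pack_into("!H", hdr, 14, 65535)

patched, win = clamp_tcp_window(bytes(hdr), rate_bps=2_000_000, rtt_s=0.05)
print(win)  # 2 Mb/s * 50 ms = 12500 bytes in flight
```

Because the sender honors the receiver's advertised window, clamping it throttles the flow at the source instead of queuing or dropping packets downstream.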

The "big picture" goal of any good bandwidth management strategy is to change the way traffic flows through your network in such a way that congestion is eliminated. Think of the situation when you have more bandwidth than you need. Suppose you have a 100Mb/s pipe and a PC that can only pull down 10Mb/s. Traffic flows freely through your router, it gets sent out as soon as it arrives. You don't need bandwidth management. You do not need to "prioritize" anything, because there is no backup. There is nothing to re-order.

Much of today's congestion is due to the larger TCP windows used in client systems. The larger the window, the more congestion you have; the more sessions, the more congestion. The more congestion you have, the more delays you have, and the more difficult your bandwidth is to manage. So the most important function a bandwidth management device must perform is TCP window shaping. Without window shaping you CANNOT reduce the amount of traffic in your router's queues, so the best you can do is shift delays from one user to another. Most products on the market do not properly window-shape to reduce congestion.

The conclusion is that although TCP rate limiting has its own fundamental drawbacks, virtually no bandwidth management solution that lacks TCP window manipulation can truly reduce queue depth and optimize the traffic flow within your network.

Traffic Shaping Features

A bandwidth management box is just a tool. You still need an effective strategy to do bandwidth management right; the box merely helps you implement that strategy.

Strategy

No matter what you use your network for, the real goal is almost always the same: to make your network run smoothly without too many restrictions. Consider the case when you have "more than enough" bandwidth to do anything you need without any problems. Things work great. You do not have to examine your usage constantly to find "the problem". You do not have to "catch" anyone doing something you did not anticipate. You do not have to run into the office in the middle of the night because your network is so slow that your customers cannot even get their mail. This IS the primary goal of bandwidth management: if you have no congestion, you have no problems.

Features
At minimum you will need a bandwidth management solution that has the following features:

• Can handle your traffic levels with a policy for every one of your customers/users

• Implements window shaping in order to reduce the overall amount of traffic that needs to be managed

• Has flexible bursting controls so that end user performance can be maximized and bandwidth can be dynamically shared

The conclusion is that you need the core features a bandwidth management solution is supposed to have; everything else is just an add-on.

Integration

In most cases, a bandwidth management box is deployed in bridge mode, working as a transparent MAC-layer bridge. This has many implications for integration.

The following checklist covers the points to consider:

Uplink bandwidth counting consistency


For Internet network service providers, Ethernet technology has evolved into Fast Ethernet and Gigabit Ethernet and continues to evolve; in modern networks, more and more operators prefer Fast/Gigabit Ethernet as the technology for WAN connections.

So what if your uplink ISP counts bandwidth including both the data and the Ethernet header (14 bytes), while your bandwidth management box counts only the data portion? The counts at your uplink ISP and at your bandwidth management box will disagree, which is a disaster for billing.
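The size of the discrepancy depends on packet size: for small packets the 14-byte Ethernet header is a large fraction of the total. A quick calculation with an illustrative packet mix:

```python
# Sketch: billing discrepancy when the uplink counts Ethernet headers
# (14 bytes each) but the bandwidth management box counts only the
# IP-layer bytes. The packet mixes below are illustrative.
ETH_HEADER = 14

def discrepancy(packet_sizes):
    """packet_sizes: IP-layer packet sizes in bytes."""
    box_count = sum(packet_sizes)
    isp_count = sum(s + ETH_HEADER for s in packet_sizes)
    return box_count, isp_count, (isp_count - box_count) / box_count

# A VoIP-heavy mix of small packets vs. a bulk-transfer mix of
# full-size frames.
voip = discrepancy([60] * 1000)
bulk = discrepancy([1500] * 1000)
print(voip[2], bulk[2])  # ~23% vs ~0.9% overcount
```

A 23% undercount on small-packet traffic is exactly the kind of inconsistency that wrecks usage-based billing.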

MPLS

Multi Protocol Label Switching (MPLS) originated from "Tag Switching", a proprietary Cisco development. The technology was originally developed as a mechanism to improve the performance of core routers. Today the efficiency gains in core router performance have been negated by vastly improved hardware, but the benefits of MPLS as a service remain.

Why do organizations elect to implement an MPLS wide area network? In ninety percent of cases it comes down to one thing alone: Quality of Service (QoS). MPLS enables the consolidation of applications onto a single network while providing a mechanism to prioritize the latency of individual applications within application classes. Organizations can optimize their wide area network usage based on the types of applications communicating across it. The number of application classes varies with the service provider's implementation but is typically three. Each class has a different priority: high priority for traffic that requires the lowest latency, such as VoIP; medium priority for business-critical applications that are less latency-sensitive; and low priority for unclassified traffic.

Organizations purchase an MPLS service as a base rental cost with supplements proportional to the bandwidth specified for each application class. In return, the service provider offers a performance SLA for each application class.

When a bandwidth management box is deployed inside an MPLS path, at the very least it should support inspection of the IP addresses inside MPLS-encapsulated IP packets. Otherwise bandwidth management in an MPLS path is impossible: traffic simply passes through without adhering to the QoS rules.
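Finding those IP addresses means walking the MPLS label stack first: each label entry is 4 bytes, and the "bottom of stack" bit in the third byte marks the last entry before the IP header. A sketch on a hand-built packet (the labels and addresses are illustrative, and IPv4 is assumed under the stack):

```python
import socket
import struct

def ip_addrs_in_mpls(payload):
    """Sketch: skip the MPLS label stack (4-byte entries; bit 0 of the
    third byte is the bottom-of-stack flag), then read the IPv4
    source and destination addresses from the inner header."""
    i = 0
    while True:
        bottom = payload[i + 2] & 0x01
        i += 4
        if bottom:
            break
    src = socket.inet_ntoa(payload[i + 12:i + 16])
    dst = socket.inet_ntoa(payload[i + 16:i + 20])
    return src, dst

# A two-label stack followed by a minimal IPv4 header. Each 32-bit
# entry is label(20) | TC(3) | S(1) | TTL(8).
label1 = struct.pack("!I", (100 << 12) | (0 << 8) | 64)   # S=0, not bottom
label2 = struct.pack("!I", (200 << 12) | (1 << 8) | 64)   # S=1, bottom
ip = bytearray(20)
ip[0] = 0x45                                  # IPv4, 20-byte header
ip[12:16] = socket.inet_aton("192.0.2.1")     # source address
ip[16:20] = socket.inet_aton("198.51.100.9")  # destination address

src, dst = ip_addrs_in_mpls(label1 + label2 + bytes(ip))
print(src, dst)
```

A box that cannot perform this small amount of parsing sees only opaque MPLS frames and cannot apply per-address QoS rules at all.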

Further, it is better still if the bandwidth management box can add granularity to the bursting process, allowing you to choose which applications can dynamically burst, in order of priority, into remaining unused bandwidth in other classes, since some MPLS providers do support dynamic bursting between classes.

Integration with other software


This includes billing software, monitoring software, Web proxy servers, and so on. It implies that the bandwidth management solution must support data exchange with that software through a database, an API, SNMP, etc.

The conclusion is that integration is an important factor given the nature of a bandwidth management solution, so it is a good idea to get to the bottom of the integration mechanisms a solution provides before purchase.

Conclusion

Bandwidth management is very complex, but no job is too hard when you have the right tool. For Internet network service providers, choosing the right bandwidth management solution will determine how well you do.

Evaluating the five factors in this paper should help you choose the right bandwidth management solution.

Source page: http://www.evancarmichael.com/library/delbert-terry/5-tips-to-choose-the-right-bandwidth-management-solutions.html