
Unit 10: Network Layer in the Internet




10.3.2 QoS Concepts

Congestion Management: The bursty nature of data traffic sometimes pushes the amount of traffic beyond the speed of a link. In such a situation, QoS enables a router to place packets into different queues and to service certain queues more often based on priority, rather than buffering all traffic in a single queue and letting the first packet in be the first packet out. Congestion-management tools handle such issues; they include priority queuing, custom queuing, weighted fair queuing, and so on, as sketched below.
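
As a rough illustration of priority queuing, the following Python sketch (the class and queue names are hypothetical, not taken from any router operating system) contrasts strict priority service with a single first-in, first-out queue: the scheduler always drains the highest-priority non-empty queue first.

```python
from collections import deque

class PriorityQueuing:
    """Hypothetical strict-priority scheduler: packets are classified into
    one of several queues, and the highest-priority non-empty queue is
    always serviced first (unlike a single FIFO queue)."""

    def __init__(self, levels=("high", "medium", "normal", "low")):
        self.levels = levels
        self.queues = {level: deque() for level in levels}

    def enqueue(self, packet, level="normal"):
        self.queues[level].append(packet)

    def dequeue(self):
        # A packet in a lower-priority queue waits until every
        # higher-priority queue has been emptied.
        for level in self.levels:
            if self.queues[level]:
                return self.queues[level].popleft()
        return None

scheduler = PriorityQueuing()
scheduler.enqueue("bulk-transfer-1", "low")
scheduler.enqueue("voice-1", "high")
print(scheduler.dequeue())   # prints "voice-1": served before the bulk packet
```
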
Queue Management: The queues in a buffer may fill and overflow. If a queue is full, an arriving packet is dropped, and the router cannot prevent this even if it is a high-priority packet. This is referred to as tail drop. It can be avoided either by ensuring that the queue does not fill, so that room remains for high-priority packets, or by applying a rule that drops lower-priority packets before higher-priority packets. A mechanism called weighted random early detection (WRED) performs both of these functions.
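
The sketch below illustrates the idea behind weighted random early detection: once the average queue depth exceeds a minimum threshold, packets are dropped with a probability that grows toward a maximum as the queue approaches a maximum threshold, and lower-priority traffic can be given more aggressive thresholds so that it is dropped first. The threshold and probability values here are illustrative only, not defaults of any particular router.

```python
import random

def wred_should_drop(avg_queue_depth, min_thresh, max_thresh, max_drop_prob):
    """Probabilistically drop packets before the queue overflows,
    avoiding tail drop (illustrative parameters)."""
    if avg_queue_depth < min_thresh:
        return False                 # queue is short: never drop
    if avg_queue_depth >= max_thresh:
        return True                  # queue is effectively full: always drop
    # Drop probability rises linearly between the two thresholds.
    drop_prob = max_drop_prob * (avg_queue_depth - min_thresh) / (max_thresh - min_thresh)
    return random.random() < drop_prob

# Lower-priority traffic could use min_thresh=10 while higher-priority
# traffic uses min_thresh=30, so low-priority packets are discarded earlier.
print(wred_should_drop(avg_queue_depth=25, min_thresh=10, max_thresh=40, max_drop_prob=0.1))
```
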
Link Efficiency: Low-speed links can become bottlenecks for small packets. The serialization delay caused by a large packet forces the smaller packets behind it to wait longer. Serialization delay is the time taken to put a packet onto the link; for example, the serialization delay for a 2400-byte packet on a 56-kbps link is about 343 milliseconds. A voice packet queued behind such a packet is therefore delayed considerably before it leaves the router, which is undesirable for voice traffic. The link fragmentation and interleaving process segments large packets into smaller fragments and interleaves the voice packets among them.
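
The serialization-delay figure quoted above follows from a one-line calculation, packet size in bits divided by the link rate in bits per second; the snippet below reproduces the 343-millisecond value for a 2400-byte packet on a 56-kbps link.

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """Time needed to clock a packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# 2400-byte packet on a 56-kbps link: roughly 343 ms
print(round(serialization_delay_ms(2400, 56_000)))   # -> 343
```
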

Elimination of overhead bits: Efficiency can also be improved by eliminating excessive overhead bits. For example, a voice packet may carry 40 bytes of IP/UDP/RTP headers with a payload of as little as 20 bytes; in such a case, the overhead is twice the payload. A compression technique can be applied to reduce the header to a more manageable size.
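
A quick calculation shows why this overhead matters; the compressed header size used below is only an assumed value for illustration, since the exact result depends on the compression technique.

```python
def overhead_ratio(header_bytes, payload_bytes):
    """Header overhead relative to the payload it carries."""
    return header_bytes / payload_bytes

# Uncompressed: 40 bytes of headers on a 20-byte voice payload.
print(overhead_ratio(40, 20))   # -> 2.0, i.e. overhead is twice the payload

# With header compression the header shrinks to a few bytes (assume 4 here).
print(overhead_ratio(4, 20))    # -> 0.2
```
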
Traffic shaping and policing: Shaping prevents buffer overflow by holding traffic below the full bandwidth potential of an application. In many network topologies, a high-bandwidth link is connected to a low-bandwidth link at a remote site, and traffic arriving from the high-bandwidth link can overflow the low-bandwidth link. Shaping therefore smooths the traffic flow from the high-bandwidth link down toward the rate of the low-bandwidth link to avoid overflowing it. Policing, by contrast, discards the traffic that exceeds the configured rate, whereas shaping buffers it.
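
The following sketch contrasts the two behaviours with a simple token-bucket model: a policer discards a packet that arrives when too few tokens are available, while a shaper buffers it until enough tokens have accumulated. The rate and burst values are arbitrary example figures, not recommended settings.

```python
class TokenBucket:
    """Minimal token bucket: tokens accumulate at `rate` bytes per second,
    up to a maximum of `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last_time = burst, 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def police(bucket, size, now):
    # Policing: non-conforming traffic is discarded outright.
    return "forward" if bucket.conforms(size, now) else "drop"

def shape(bucket, backlog, size, now):
    # Shaping: non-conforming traffic is buffered and sent later.
    if bucket.conforms(size, now):
        return "forward"
    backlog.append(size)
    return "buffer"

bucket = TokenBucket(rate=64_000, burst=8_000)   # 64 kB/s with an 8 kB burst
print(police(bucket, 1_500, now=0.0))            # -> forward
```
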

          End-to-end QoS Levels

End-to-end QoS refers to the capability of a network to deliver the service needed by specific network traffic from end to end, or edge to edge, under network constraints such as bandwidth, delay, jitter and loss characteristics. These factors describe how tightly the end-to-end service performs. QoS involves a policy framework, a set of rules that designates an action; the policy framework provides a particular service to a particular client, application and schedule. Three basic levels of end-to-end QoS can be provided across a heterogeneous network: the integrated service, differentiated service and inbound admission service types.
Performance Limits: Performance limits consider the token bucket limits and bandwidth limits together to guarantee packet delivery in outbound bandwidth policies for the integrated and differentiated services.
Token Bucket Size: When an application sends information faster than the server can send the data out onto the network, the buffer fills up. To avoid such situations, the token bucket size is applied to determine the amount of information a server can process at any given time. A packet




