
Performance of a Network

The performance of a network refers to the quality of service of a network as perceived by the user. There are different ways to measure it, depending upon the nature and design of the network. Network performance depends on both the quality of the service and the capacity of the network.

Parameters for Measuring Network Performance

BANDWIDTH

One of the most essential factors in a website's performance is the amount of bandwidth allocated to the network. Bandwidth determines how rapidly the web server can upload the requested information. While there are several factors to consider with respect to a site's performance, bandwidth is often the limiting factor.



Bandwidth is defined as the amount of data or information that can be transmitted in a fixed amount of time. The term is used in two different contexts with two distinct measuring values. For digital devices, bandwidth is measured in bits per second (bps) or bytes per second. For analog devices, bandwidth is measured in cycles per second, or Hertz (Hz).

Bandwidth is only one component of what a user perceives as the speed of a network. People frequently confuse bandwidth with internet speed because Internet Service Providers (ISPs) tend to claim a fast "40 Mbps connection" in their advertising campaigns. True internet speed is the amount of data you actually receive every second, and that has a lot to do with latency too. "Bandwidth" means "capacity", while "speed" means "transfer rate".



More bandwidth does not mean more speed. Consider a tap pipe: if we double its width but the water flows at the same rate as before, there is no improvement in speed. When we consider WAN links, we mostly mean bandwidth, but when we consider a LAN, we mostly mean speed. This is because over a WAN we are generally constrained by expensive cable bandwidth, rather than by hardware and interface data-transfer rates (speed) as on a LAN.

Note: There exists an explicit relationship between the bandwidth in hertz and the bandwidth in bits per second. An increase in bandwidth in hertz means an increase in bandwidth in bits per second. The relationship depends upon whether we have baseband transmission or transmission with modulation. 
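As a sketch of this relationship, the Nyquist formula for a noiseless baseband channel relates bandwidth in hertz to bit rate in bits per second; the channel bandwidth and number of signal levels below are assumed values chosen purely for illustration.

```python
import math

# Nyquist bit rate for a noiseless baseband channel:
# bit rate = 2 * bandwidth(Hz) * log2(L), where L is the number of signal levels.
def nyquist_bit_rate(bandwidth_hz, levels):
    """Maximum bit rate (bps) for a noiseless channel of the given bandwidth."""
    return 2 * bandwidth_hz * math.log2(levels)

# Assumed example: a 3000 Hz channel with binary (2-level) signalling.
print(nyquist_bit_rate(3000, 2))  # → 6000.0 bps
```

Doubling the bandwidth in hertz (or squaring the number of levels) doubles the bit rate, which is the relationship the note above describes.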

LATENCY 

In a network, during the process of data communication, latency(also known as delay) is defined as the total time taken for a complete message to arrive at the destination, starting with the time when the first bit of the message is sent out from the source and ending with the time when the last bit of the message is delivered at the destination. The network connections where small delays occur are called “Low-Latency-Networks” and the network connections which suffer from long delays are known as “High-Latency-Networks”. 

High latency creates bottlenecks in network communication. It prevents the data from taking full advantage of the network pipe and effectively decreases the usable bandwidth of the network. The effect of latency on a network's bandwidth can be temporary or persistent, depending on the source of the delays. Latency is also known as the ping rate and is measured in milliseconds (ms).

Latency = Propagation Time + Transmission Time + Queuing Time + Processing Delay
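The formula above can be sketched as a trivial helper; the sample component values below are made up for illustration, not taken from the article.

```python
def total_latency_ms(propagation, transmission, queuing, processing):
    """Total latency (delay) is the sum of its four components, all in ms."""
    return propagation + transmission + queuing + processing

# Assumed sample values: 50 ms propagation, 0.02 ms transmission,
# 3 ms queuing, 1 ms processing delay.
print(round(total_latency_ms(50, 0.02, 3, 1), 2))  # → 54.02
```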

Propagation Time

It is the time required for a bit to travel from the source to the destination. Propagation time can be calculated as the ratio between the link length (distance) and the propagation speed over the communicating medium. For example, for an electric signal, propagation time is the time taken for the signal to travel through a wire.  

Propagation time = Distance / Propagation speed

Example:  

Input: What will be the propagation time when the distance between two points is 12,000 km?
       Assume the propagation speed to be 2.4 * 10^8 m/s in cable.

Output: We can calculate the propagation time as-
        Propagation time = (12000 * 1000) / (2.4 * 10^8) = 50 ms
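The worked example above can be checked with a short helper; the function name and unit conversions are mine, not from the original.

```python
def propagation_time_ms(distance_km, speed_m_per_s):
    """Propagation time = distance / propagation speed, returned in ms."""
    distance_m = distance_km * 1000            # km -> m
    return distance_m / speed_m_per_s * 1000   # s -> ms

# 12,000 km through a cable with a propagation speed of 2.4 * 10^8 m/s:
print(propagation_time_ms(12_000, 2.4e8))  # → 50.0
```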

Transmission Time

Transmission time is the time it takes to push all the bits of a message onto the transmission line. It also accounts for per-message costs such as the training signals a sender usually puts at the front of a packet to help the receiver synchronize its clock. The transmission time of a message depends on the size of the message and the bandwidth of the channel.

Transmission time = Message size / Bandwidth

Example:  

Input: What will be the propagation time and the transmission time for a 2.5-kbyte
       message when the bandwidth of the network is 1 Gbps? Assume the distance between
       sender and receiver is 12,000 km and the propagation speed is 2.4 * 10^8 m/s.

Output: We can calculate the propagation and transmission time as-
        Propagation time = (12000 * 1000) / (2.4 * 10^8) = 50 ms
        Transmission time = (2560 * 8) / 10^9 = 0.020 ms

Note: Since the message is short and the bandwidth is high, the dominant factor is the
      propagation time and not the transmission time(which can be ignored).
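The transmission-time calculation can be sketched in code; the helper name is mine, and 2.5 kbytes is taken as 2560 bytes, as in the example above.

```python
def transmission_time_ms(message_bits, bandwidth_bps):
    """Transmission time = message size / bandwidth, returned in ms."""
    return message_bits / bandwidth_bps * 1000

# A 2.5-kbyte message (2560 bytes) over a 1 Gbps link:
bits = 2560 * 8
print(round(transmission_time_ms(bits, 1e9), 3))  # → 0.02 ms, dwarfed by the 50 ms propagation time
```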

Queuing Time

Queuing time is the time a packet spends waiting in a router before it can be transmitted. Quite frequently the wire is busy, so we cannot transmit a packet immediately. Queuing time is usually not a fixed factor; it changes with the load on the network. In such cases, the packet sits in a queue, ready to go. These delays are predominantly characterized by the amount of traffic on the system: the more the traffic, the more likely a packet is stuck in the queue, sitting in memory, waiting.

Processing Delay

Processing delay is the time the router takes to figure out where to send the packet; as soon as it does, it queues the packet for transmission. These costs depend mainly on the complexity of the protocol: the router must decipher enough of the packet to work out which queue to put it in. Typically, the lower layers of the stack have simpler protocols. If a router does not know which physical port to send the packet to, it sends it out of all the ports, queuing the packet in many queues at once. At a higher level, such as the IP layer, processing may include making an ARP request to find the physical address of the destination before queuing the packet for transmission; this, too, counts as processing delay.

BANDWIDTH – DELAY PRODUCT 

Bandwidth and Delay are two performance measurements of a link. However, what is significant in data communications is the product of the two, the bandwidth-delay product. Let us take two hypothetical cases as examples. 

Case 1: Assume a link has a bandwidth of 1 bps and a delay of 5 s. Let us find the bandwidth-delay product in this case. From the image, we can see that the product 1 x 5 is the maximum number of bits that can fill the link: there can be at most 5 bits on the link at any time.

Bandwidth Delay Product

Case 2: Assume a link has a bandwidth of 3 bps and the same 5 s delay. From the image, we can see that there can be a maximum of 3 x 5 = 15 bits on the line, because at each second there are 3 bits on the line and the duration of each bit is 0.33 s.

Bandwidth Delay

For both examples, the product of bandwidth and delay is the number of bits that can fill the link. This estimate matters when we have to send data in bursts and wait for the acknowledgment of each burst before sending the next one. To utilize the maximum capability of a full-duplex link, the burst size should be twice the bandwidth-delay product: the sender should send a burst of (2 * bandwidth * delay) bits and then wait for the receiver's acknowledgment of part of the burst before sending the next one. The quantity 2 * bandwidth * delay is the number of bits that can be in transit at any time.
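The two cases above can be reproduced with a one-line helper; the function and variable names are mine, chosen for illustration.

```python
def bandwidth_delay_product(bandwidth_bps, delay_s):
    """Maximum number of bits that can fill the link at any instant."""
    return bandwidth_bps * delay_s

# Case 2 from the text: a 3 bps link with a 5 s delay.
bdp = bandwidth_delay_product(3, 5)
print(bdp)      # → 15 bits can fill the link
print(2 * bdp)  # → 30 bits: burst size needed to fill the full-duplex channel
```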

THROUGHPUT 

Throughput is the number of messages successfully delivered per unit time. It is controlled by the available bandwidth, the signal-to-noise ratio, and hardware limitations. The maximum throughput of a network is consequently often higher than the actual throughput achieved in everyday use. The terms 'throughput' and 'bandwidth' are often treated as the same, yet they differ: bandwidth is the potential measurement of a link, whereas throughput is an actual measurement of how fast we can send data.

Throughput is measured by tabulating the amount of data transferred between multiple locations during a specific period of time, usually expressed in bits per second (bps), or in larger units such as kilobits per second (Kbps), megabits per second (Mbps), and gigabits per second (Gbps). Throughput may be affected by numerous factors, such as limitations of the underlying analog physical medium, the available processing power of the system components, and end-user behavior. When the various protocol overheads are taken into account, the useful rate of the transferred data can be significantly lower than the maximum achievable throughput.

Let us consider a highway with a capacity of moving, say, 200 vehicles at a time. At a random moment, however, someone observes only, say, 150 vehicles moving through it due to congestion on the road. Here the capacity is 200 vehicles per unit time, while the throughput is 150 vehicles per unit time.

Example:

Input: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames
       per minute, where each frame carries an average of 10,000 bits. What will be the
       throughput for this network?

Output: We can calculate the throughput as-
        Throughput = (12,000 x 10,000) / 60 = 2 Mbps
        The throughput is equal to one-fifth of the bandwidth in this case.
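The example above can be verified with a short helper; the function name is mine, chosen for illustration.

```python
def throughput_bps(frames_per_minute, bits_per_frame):
    """Average throughput in bits per second."""
    return frames_per_minute * bits_per_frame / 60

t = throughput_bps(12_000, 10_000)
print(t / 1e6)  # → 2.0 Mbps, one-fifth of the 10 Mbps bandwidth
```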


JITTER 

Jitter is another performance issue related to delay. In technical terms, jitter is "packet delay variation": it becomes a problem when different packets of data face different delays in a network and the data at the receiving application is time-sensitive, i.e. audio or video data. Jitter is measured in milliseconds (ms) and is defined as a disturbance in the normal order of arriving data packets. For example, if the delay for the first packet is 10 ms, for the second 35 ms, and for the third 50 ms, then the real-time destination application that uses the packets experiences jitter.
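Using the delays from the example, jitter can be sketched as the average variation between consecutive packet delays. This is a simplified definition of my own for illustration; real-time protocols such as RTP use a smoothed interarrival-jitter estimator instead.

```python
def mean_jitter_ms(delays_ms):
    """Average absolute difference between consecutive packet delays (ms)."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Delays from the example in the text: 10 ms, 35 ms, 50 ms.
print(mean_jitter_ms([10, 35, 50]))  # → 20.0
```

A constant delay, however large, gives zero jitter: it is the variation between packets, not the delay itself, that disturbs real-time applications.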

Put simply, jitter is any deviation in, or displacement of, the signal pulses in a high-frequency digital signal. The deviation can be in the amplitude, the width of the signal pulse, or the phase timing. The major causes of jitter are electromagnetic interference (EMI) and crosstalk between signals. Jitter can cause a display screen to flicker, affect the ability of a processor in a desktop or server to perform as expected, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data between network devices.

Jitter is harmful and often goes hand in hand with network congestion and packet loss.

Jitter

In the above image, it can be noticed that the time it takes for packets to be sent is not the same as the time in which they will arrive at the receiver side. One of the packets faces an unexpected delay on its way and is received after the expected time. This is jitter. 

A jitter buffer can reduce the effects of jitter, either in a network, on a router or switch, or on a computer. The system at the destination receiving the network packets usually receives them from the buffer and not from the source system directly. Each packet is fed out of the buffer at a regular rate. Another approach to diminish jitter in case of multiple paths for traffic is to selectively route traffic along the most stable paths or to always pick the path that can come closest to the targeted packet delivery rate.

Factors Affecting Network Performance

The following factors affect network performance.

Network Infrastructure

Network infrastructure is one of the factors that affect network performance. It consists of routers, switches, and network services such as IP addressing and wireless protocols, and these components directly affect the performance of the network.

Applications Used in the Network

The applications used on the network can also have an impact on its performance. Poorly performing applications can consume large amounts of bandwidth, and more complicated applications also require maintenance; both affect the performance of the network.

Network Issues

Network issues are another factor in network performance: flaws or loopholes in the network can lead to many systemic problems, and hardware faults can also impact the performance of the network.

Network Security

Network security provides privacy, data integrity, and related guarantees, but it consumes network bandwidth and processing capacity for tasks such as scanning devices and encrypting data. These activities can negatively influence network performance.

FAQs

1. How is the network performance measured?

Answer:

Network Performance is measured in two ways: Bandwidth and Latency.

2. What are the parameters to measure network performance?

Answer:

There are five parameters to measure network performance.

  • Bandwidth
  • Throughput
  • Latency
  • Bandwidth-Delay Product
  • Jitter
