Network performance
Network performance refers to measures of service quality of a network as seen by the customer.
There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled and simulated instead of measured; for example, by using state transition diagrams to model queueing performance or by using a Network Simulator.
Performance measures
The following measures are often considered important:

Bandwidth – commonly measured in bits/second, the maximum rate at which information can be transferred
Throughput – the actual rate at which information is transferred
Latency – the delay between the sender transmitting information and the receiver decoding it; this is mainly a function of the signal's travel time and the processing time at any nodes the information traverses
Jitter – variation in packet delay at the receiver of the information
Error rate – the number of corrupted bits, expressed as a percentage or fraction of the total sent

Bandwidth
The available channel bandwidth and achievable signal-to-noise ratio determine the maximum possible throughput. It is not generally possible to send more data than dictated by the Shannon–Hartley theorem.
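For illustration, the Shannon–Hartley theorem bounds capacity at C = B log2(1 + S/N), where B is the channel bandwidth and S/N is the linear signal-to-noise ratio. A minimal Python sketch, using hypothetical figures rather than values from this article:

    import math

    def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
        """Upper bound on throughput (bits/s) from the Shannon-Hartley theorem."""
        snr_linear = 10 ** (snr_db / 10)   # convert the dB figure to a linear ratio
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Hypothetical channel: 1 MHz of bandwidth at a 20 dB signal-to-noise ratio
    print(shannon_capacity(1e6, 20))       # roughly 6.66 million bits/s at most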
Throughput
Throughput is the number of messages successfully delivered per unit time. Throughput is controlled by the available bandwidth, the achievable signal-to-noise ratio, and hardware limitations. For the purposes of this article, throughput is measured from the arrival of the first bit of data at the receiver, to decouple the concept of throughput from the concept of latency. In discussions of this type, the terms 'throughput' and 'bandwidth' are often used interchangeably.
The time window is the period over which the throughput is measured. The choice of an appropriate time window will often dominate calculations of throughput, and whether latency is taken into account or not will determine whether latency affects the throughput.
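As a minimal sketch of the measurement itself (the byte count and window length below are hypothetical, not taken from this article), throughput is the data delivered divided by the chosen time window, with the window starting at the arrival of the first bit so that latency is excluded:

    def throughput_bps(bytes_received: int, window_seconds: float) -> float:
        """Average throughput in bits/s over a measurement window.

        The window is assumed to start when the first bit of data arrives
        at the receiver, so one-way latency does not affect the result.
        """
        return bytes_received * 8 / window_seconds

    # Hypothetical measurement: 10 MB delivered during a 4-second window
    print(throughput_bps(10_000_000, 4.0))  # 20,000,000.0 bits/s, i.e. 20 Mbit/s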
Latency
The speed of light imposes a minimum propagation time on all electromagnetic signals. It is not possible to reduce the latency below t = s / cm, where s is the distance and cm is the speed of light in the medium. This corresponds to roughly an additional millisecond of round-trip delay per 100 km of distance between hosts.
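A minimal sketch of that lower bound, assuming propagation in optical fibre at roughly two-thirds of the vacuum speed of light (an assumption for illustration, not a figure from this article):

    C_VACUUM = 299_792_458        # speed of light in vacuum, m/s
    C_FIBRE = C_VACUUM * 2 / 3    # assumed propagation speed in optical fibre, m/s

    def min_rtt_ms(distance_m: float, medium_speed: float = C_FIBRE) -> float:
        """Lower bound on round-trip delay (ms) imposed by propagation alone."""
        one_way_seconds = distance_m / medium_speed
        return 2 * one_way_seconds * 1000

    # 100 km between hosts adds roughly 1 ms of unavoidable round-trip delay
    print(min_rtt_ms(100_000))    # ~1.0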
Other delays also occur in intermediate nodes. In packet-switched networks, delays can occur due to queueing.
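As one hedged illustration of how queueing delay behaves (using the textbook M/M/1 queue, a model chosen here purely for illustration and not discussed elsewhere in this article), delay grows sharply as the packet arrival rate approaches the rate at which a node can service packets:

    def mm1_mean_delay_s(arrival_rate: float, service_rate: float) -> float:
        """Mean time (s) a packet spends queued plus in service in an M/M/1 queue."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrival rate must stay below service rate")
        return 1 / (service_rate - arrival_rate)

    # Hypothetical node that can forward 1000 packets/s, at increasing load
    for load in (0.5, 0.9, 0.99):
        print(load, mm1_mean_delay_s(load * 1000, 1000))  # 0.002 s, 0.01 s, 0.1 s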
Jitter
Jitter is the undesired deviation from true periodicity of an assumed periodic signal in electronics and telecommunications, often in relation to a reference clock source. Jitter may be observed in characteristics such as the frequency of successive pulses, the signal amplitude, or phase of periodic signals. Jitter is a significant, and usually undesired, factor in the design of almost all communications links. In clock recovery applications it is called timing jitter.
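In the packet-network sense used earlier (variation in packet delay at the receiver), jitter can be estimated from observed delays. A minimal sketch using the mean absolute difference of consecutive one-way delays, one common convention chosen here for illustration; the delay values are hypothetical:

    def packet_jitter_ms(delays_ms):
        """Estimate jitter as the mean absolute difference of consecutive delays."""
        diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
        return sum(diffs) / len(diffs)

    # Hypothetical one-way delays (ms) observed at the receiver
    print(packet_jitter_ms([20.0, 22.5, 19.0, 25.0, 21.0]))  # 4.0 ms of jitter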
Error rate
In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors.
The bit error rate or bit error ratio (BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage.
The bit error probability pe is the expectation value of the BER. The BER can be considered as an approximate estimate of the bit error probability. This estimate is accurate for a long time interval and a high number of bit errors.
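A minimal sketch of the ratio itself, assuming both the transmitted and the received bit sequences are available for comparison (the bit values below are hypothetical):

    def bit_error_ratio(sent_bits, received_bits) -> float:
        """Bit errors divided by total transferred bits over the studied interval."""
        errors = sum(s != r for s, r in zip(sent_bits, received_bits))
        return errors / len(sent_bits)

    # Hypothetical transfer: 2 corrupted bits out of 10 gives a BER of 0.2 (20%)
    sent     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    received = [1, 0, 0, 1, 0, 0, 1, 1, 1, 1]
    print(bit_error_ratio(sent, received))  # 0.2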
Interplay of factors
All of the factors above, coupled with user requirements and user perceptions, play a role in determining the perceived 'fastness' or utility of a network connection. The relationship between throughput, latency, and user experience is most aptly understood in the context of a shared network medium, and as a scheduling problem.
Algorithms and protocols
For some systems, latency and throughput are coupled entities. In TCP/IP, latency can also directly affect throughput. In TCP connections, the large bandwidth-delay product of high-latency connections, combined with relatively small TCP window sizes on many devices, effectively causes the throughput of a high-latency connection to drop sharply with latency. This can be remedied with various techniques, such as increasing the TCP congestion window size, or with more drastic solutions, such as packet coalescing, TCP acceleration, and forward error correction, all of which are commonly used for high-latency satellite links.
TCP acceleration converts the TCP packets into a stream that is similar to UDP. Because of this, the TCP acceleration software must provide its own mechanisms to ensure the reliability of the link, taking the latency and bandwidth of the link into account, and both ends of the high-latency link must support the method used.
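A minimal sketch of why throughput drops with latency, using the standard upper bound of window size divided by round-trip time for a single window-limited TCP connection (the window size and round-trip times below are assumptions for illustration):

    def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
        """Ceiling on TCP throughput when limited only by window size and RTT."""
        return window_bytes * 8 / rtt_seconds

    WINDOW = 64 * 1024  # an assumed 64 KiB receive window
    for rtt_ms in (10, 100, 600):  # LAN-like, intercontinental, satellite-like RTTs
        bps = max_tcp_throughput_bps(WINDOW, rtt_ms / 1000)
        print(rtt_ms, round(bps / 1e6, 2))  # ~52.43, ~5.24, ~0.87 Mbit/s ceilings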
In the Media Access Control layer, performance issues such as throughput and end-to-end delay are also addressed.