The performance of a computer network is generally described in terms of several important performance indicators. In addition to these indicators, some non-performance characteristics also have a significant impact on the performance of a computer network.
Performance indicators measure the performance of a computer network from different aspects. The following summarizes seven commonly used performance indicators.
1. Rate
The signals sent by computers are all in digital form. The bit is the unit of data volume in computers and the unit of information used in information theory. The English word bit comes from binary digit, so a bit is a single 1 or 0 of a binary number. The rate in network technology refers to the rate at which a host connected to a computer network transmits data on a digital channel, also called the data rate or bit rate. The unit of rate is b/s (bits per second), which can also be written as bit/s or bps. When the data rate is higher, kb/s (k = 10^3, kilo), Mb/s (M = 10^6, mega), Gb/s (G = 10^9, giga) or Tb/s (T = 10^12, tera) can be used. Nowadays, a simpler and less strict notation is generally used to describe the speed of a network, such as "100M Ethernet", where b/s is omitted; it means Ethernet with a data rate of 100 Mb/s. The data rate here usually refers to the rated rate.
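As a small illustration of these units, the Python sketch below converts a rated rate written with a decimal prefix into plain bits per second and computes the ideal time to send a file at that rate; the function name and the 10-megabyte file size (using decimal megabytes) are my own illustrative assumptions, not from the text.

```python
# Decimal (SI) multiples used with data rates: k = 10^3, M = 10^6, G = 10^9, T = 10^12
PREFIX = {"k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}

def rate_to_bps(value: float, prefix: str) -> float:
    """Convert a rate such as '100 Mb/s' into plain bits per second."""
    return value * PREFIX[prefix]

rate = rate_to_bps(100, "M")      # "100M Ethernet" -> 1.0e8 b/s (rated rate)
file_bits = 10e6 * 8              # a 10-megabyte file expressed in bits (assumption)
print(f"rated rate: {rate:.0f} b/s")
print(f"ideal time to send the file: {file_bits / rate:.1f} s")   # 0.8 s
```

The result ignores every other source of delay; it reflects only the rated rate itself.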
2. Bandwidth
The term bandwidth has two meanings:
(1) Bandwidth originally refers to the width of a signal's frequency band. The bandwidth of a signal is the frequency range occupied by the various frequency components contained in the signal. For example, the standard bandwidth of a telephone signal transmitted over a traditional communication line is 3.1 kHz (from 300 Hz to 3.4 kHz, the frequency range of the main components of speech). The unit of bandwidth in this sense is the hertz (Hz). In the past, the backbone lines of communication networks transmitted analog signals (that is, continuously varying signals), so the range of signal frequencies that a communication line allowed to pass was called the bandwidth of the line.
(2) In computer networks, bandwidth is used to represent the ability of the network's communication lines to transmit data. Network bandwidth therefore represents the maximum data rate that can pass from one point on the network to another in unit time. The unit of bandwidth in this sense is bits per second, i.e., b/s, and it is often preceded by multiplier prefixes such as kilo (k), mega (M), giga (G), and tera (T).
3. Throughput
Throughput represents the amount of data passing through a certain network (or channel, or interface) in unit time. Throughput is a measurement of a real-world network; it tells how much data is actually able to pass through the network. Obviously, throughput is limited by the bandwidth of the network or by its rated rate. For example, for 100 Mb/s Ethernet the rated rate is 100 Mb/s, and this value is also the absolute upper limit of its throughput. In practice, however, the typical throughput of 100 Mb/s Ethernet may be only 70 Mb/s.
4. Delay
Delay refers to the time required for data (a message or packet) to be transmitted from one end of a network (or link) to the other end. Delay, also called latency, is a very important performance indicator.
The delay in the network consists of the following parts:
(1) Transmission delay Transmission delay (also called sending delay) is the time required for a host or router to send a data frame, that is, the time from sending the first bit of the frame until the last bit of the frame has been sent. Transmission delay = frame length (bit) / sending rate (b/s).
For a given network, the transmission delay is not fixed; it is directly proportional to the length of the frame being sent and inversely proportional to the sending rate (see the numerical sketch after the total-delay formula below).
(2) Propagation delay Propagation delay is the time it takes for electromagnetic waves to propagate a certain distance in the channel.
Propagation delay = channel length (m)/propagation rate of electromagnetic waves on the channel (m/s)
The propagation rate of electromagnetic waves in free space is the speed of light, 3.0 × 10^5 km/s.
The propagation rate of electromagnetic waves in network transmission media is lower than in free space: about 2.3 × 10^5 km/s in copper cable and about 2.0 × 10^5 km/s in optical fiber.
(3) Processing delay When a host or router receives a packet, it takes a certain amount of time to process it: analyzing the packet header, extracting the data portion, performing error checking, finding an appropriate route, and so on. This creates the processing delay.
(4) Queuing delay As packets travel through the network, they pass through many routers. When a packet enters a router, it must first wait in the input queue to be processed; after the router determines the forwarding interface, the packet must then wait in the output queue to be forwarded. This creates the queuing delay, which usually depends on the amount of traffic in the network at that time.
In this way, the total delay of data in the network is
Total delay = sending delay + propagation delay + processing delay + queuing delay
For a high-speed network link, what is increased is only the data sending rate, not the propagation rate of bits on the link. The propagation rate of the electromagnetic waves carrying the information on a communication line has nothing to do with the data sending rate; a higher sending rate only reduces the transmission delay, as the sketch below illustrates.
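The following is a minimal Python sketch of the delay formulas above. The function names and all of the numbers (a 12,000-bit frame, a 1000 km fiber link, and made-up processing and queuing delays) are illustrative assumptions, not values from the text.

```python
def transmission_delay(frame_bits: float, rate_bps: float) -> float:
    """Transmission delay (s) = frame length (bit) / sending rate (b/s)."""
    return frame_bits / rate_bps

def propagation_delay(distance_m: float, speed_mps: float) -> float:
    """Propagation delay (s) = channel length (m) / propagation speed (m/s)."""
    return distance_m / speed_mps

def total_delay(frame_bits, rate_bps, distance_m, speed_mps,
                processing_s=0.0, queuing_s=0.0):
    """Total delay = transmission + propagation + processing + queuing."""
    return (transmission_delay(frame_bits, rate_bps)
            + propagation_delay(distance_m, speed_mps)
            + processing_s + queuing_s)

# A 12,000-bit (1500-byte) frame over 1000 km of fiber (about 2.0e8 m/s),
# with illustrative 0.5 ms processing and 1 ms queuing delays.
slow = total_delay(12_000, 100e6, 1_000e3, 2.0e8, 0.5e-3, 1e-3)
fast = total_delay(12_000, 1e9,   1_000e3, 2.0e8, 0.5e-3, 1e-3)
print(f"at 100 Mb/s: {slow * 1e3:.3f} ms")   # 6.620 ms
print(f"at 1 Gb/s:   {fast * 1e3:.3f} ms")   # 6.512 ms
```

Raising the sending rate tenfold shrinks only the 0.12 ms transmission term to 0.012 ms; the propagation, processing and queuing terms are untouched.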
5. Delay-bandwidth product
Multiplying two of the measures above, propagation delay and bandwidth, gives another metric, the propagation delay-bandwidth product, that is,
Delay-bandwidth product = propagation delay × bandwidth
For example, if the propagation delay is 20 ms and the bandwidth is 10 Mb/s, then the delay-bandwidth product = 20 × 10^-3 s × 10 × 10^6 b/s = 2 × 10^5 bit. This means that if the sender sends continuously, by the time the first bit is about to reach the receiving end, the sender has already sent 200,000 bits, and these 200,000 bits are traveling along the link.
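This worked example can be reproduced directly; the function name below is my own.

```python
def delay_bandwidth_product(propagation_delay_s: float, bandwidth_bps: float) -> float:
    """Delay-bandwidth product (bit) = propagation delay (s) * bandwidth (b/s)."""
    return propagation_delay_s * bandwidth_bps

# Propagation delay 20 ms, bandwidth 10 Mb/s -> 2 x 10^5 bits "in flight" on the link
print(delay_bandwidth_product(20e-3, 10e6))   # 200000.0
```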
6. Round trip time RTT
In computer networks, the round-trip time RTT is also an important performance indicator. It is the total time elapsed from when the sender starts sending data until the sender receives the acknowledgment from the receiver. For the example mentioned above, the round-trip time RTT is 40 ms, and the product of the round-trip time and the bandwidth is 4 × 10^5 bit.
Obviously, the round trip time is related to the length of the packet sent. The round trip time for sending very long data blocks should be longer than the round trip time for sending very short data blocks.
The significance of the round-trip-time-bandwidth product is that when the sender sends data continuously, even if it receives the other party's acknowledgment in time, it has already sent many bits onto the link. For the example above, suppose the receiver detects an error in time and informs the sender, causing the sender to stop sending immediately; even so, 400,000 bits have already been sent.
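Using the same illustrative numbers, this product can be checked with a couple of lines of Python:

```python
rtt_s = 40e-3          # round-trip time RTT from the example above
bandwidth_bps = 10e6   # bandwidth of 10 Mb/s
# Bits sent onto the link during one round-trip time
print(rtt_s * bandwidth_bps)   # 400000.0
```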
7. Utilization rate
Utilization includes channel utilization and network utilization. Channel utilization indicates what percentage of the time a channel is in use. Network utilization is the weighted average of the channel utilizations over the entire network. However, it is not the case that the higher the channel utilization, the better. This is because, according to queuing theory, when the utilization of a channel increases, the delay caused by that channel also increases rapidly.
If D0 represents the delay when the network is idle and D represents the current network delay, the simple formula D = D0/(1 - U) can be used to express the relationship between D, D0 and the utilization U. The value of U is between 0 and 1. When the network utilization approaches its maximum value of 1, the network delay approaches infinity.
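A small sketch of the D = D0/(1 - U) relation shows how quickly the delay grows as the utilization approaches 1; the idle delay D0 = 10 ms is taken purely for illustration.

```python
def network_delay(d0_s: float, utilization: float) -> float:
    """Current delay D = D0 / (1 - U), valid for 0 <= U < 1."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization U must be in [0, 1)")
    return d0_s / (1.0 - utilization)

D0 = 10e-3   # idle-network delay, an illustrative assumption
for u in (0.1, 0.5, 0.9, 0.99):
    print(f"U = {u:4}: D = {network_delay(D0, u) * 1e3:7.1f} ms")
# U = 0.5 already doubles the idle delay; U = 0.99 multiplies it by 100.
```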