Strictly speaking, bandwidth describes the width of a frequency band, but in digital transmission it is also commonly used to measure data-carrying capacity: the amount of data that can be transmitted per unit of time, in other words throughput.
For networks, bandwidth is the maximum amount of data traffic per unit of time, that is, the maximum number of binary bits that can be transmitted per second.
A "1M" connection means 1 Mbps (one megabit per second). For example, an ordinary telephone line theoretically offers 8 Mbps, and the "2M line" or "155M line" you hear about likewise refer to bandwidth; bandwidth in this sense is a property of the physical transmission medium.
Because bandwidth and data rate are hard to keep apart in everyday usage, bandwidth is often used to describe the rate itself: a 1 Mbps connection works out to 125 KB/s in theory (1,000,000 bits ÷ 8), and after protocol overhead it reaches a maximum of roughly 112 KB/s in practice.
For example, when I tell a customer I am providing 1M of bandwidth, I mean the rate is capped at 1 Mbps. Bandwidth utilization, by contrast, is the average share of that bandwidth occupied while transmitting data: with ordinary Internet access, every outbound connection consumes some of it. Moreover, the access services carriers offer today, such as ADSL, generally permit short bursts above the provisioned rate, so the instantaneous speed can exceed the nominal 1 Mbps even though the average stays within it. A quick conversion sketch follows.
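To make the arithmetic concrete, here is a minimal sketch of the conversion from nominal bandwidth to user-visible speed. The 10% overhead factor is an assumption standing in for typical TCP/IP and link-layer framing, not a measured figure.

```python
# Convert a nominal line bandwidth (Mbps) to the download speed a user
# actually sees (KB/s). The ~10% protocol overhead is an assumed,
# illustrative value (TCP/IP plus link-layer framing).

def expected_speed_kb_s(bandwidth_mbps, overhead=0.10):
    bits_per_second = bandwidth_mbps * 1_000_000    # carriers count decimal megabits
    bytes_per_second = bits_per_second / 8          # 8 bits per byte
    return bytes_per_second * (1 - overhead) / 1000 # KB/s after overhead

for mbps in (1, 2, 8):
    print(f"{mbps} Mbps -> ~{expected_speed_kb_s(mbps):.0f} KB/s")
# 1 Mbps -> ~112 KB/s, matching the figure quoted above
```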
The same vocabulary carries over to memory: memory capacity determines the size of the "warehouse", while memory bandwidth determines the width of the "bridge" over which data travels to the CPU.
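Peak memory bandwidth is normally just the effective clock multiplied by the bus width; the sketch below shows the arithmetic, with the DDR400 figures chosen as an assumed example rather than taken from this test.

```python
# Peak memory bandwidth = effective clock (transfers/s) * bus width (bytes).
# The module parameters below are assumed examples for illustration.

def peak_bandwidth_gb_s(effective_mhz, bus_bits=64, channels=1):
    return effective_mhz * 1e6 * (bus_bits / 8) * channels / 1e9

print(peak_bandwidth_gb_s(400))              # DDR400, single channel: 3.2 GB/s
print(peak_bandwidth_gb_s(400, channels=2))  # dual channel: 6.4 GB/s
```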
In addition to memory capacity and memory speed, latency is also critical to performance.
When the CPU needs data from memory, it issues a request that the memory controller executes: the controller forwards the request to the memory, and the data travels back to the CPU. The time this entire round trip takes (from the CPU to the memory controller, on to the memory, and back again) is the latency.
There is no doubt that shortening this round trip is key to improving effective memory speed, much like a police officer stationed on the bridge: how well the traffic is directed is one of the factors that determines how smoothly it flows.
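To see why the round trip matters so much, consider a toy model in which total access time is the latency plus the transfer time. Both numbers below are assumptions for illustration, not measurements from this test.

```python
# Toy model: total access time = round-trip latency + transfer time.
# Both constants are assumed, illustrative values.

LATENCY_NS = 60        # assumed CPU-to-memory round-trip latency
BANDWIDTH_GB_S = 3.2   # assumed peak bandwidth (e.g. DDR400)

def access_time_ns(n_bytes):
    transfer_ns = n_bytes / (BANDWIDTH_GB_S * 1e9) * 1e9
    return LATENCY_NS + transfer_ns

# For a 64-byte cache line, latency dominates the total:
print(access_time_ns(64))   # 60 ns latency + 20 ns transfer = 80 ns
```

For small transfers the latency term dominates, which is exactly why shortening the round trip pays off more than widening the bridge.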
Faster memory technologies contribute significantly to overall performance, but increasing memory bandwidth is only part of the solution. Data typically takes longer to travel between the CPU and memory than the processor takes to operate on it, and for this reason buffers are widely used.
These buffers are in fact the CPU's first- and second-level caches, the "big bridge" between memory and the CPU.
The first- and second-level caches are built from SRAM. The channel between them and the CPU could loosely be read as "memory bandwidth" too, but nowadays the term is more often associated with the front side bus, so we will only touch on it briefly here.
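To illustrate how a cache hides memory latency, here is a toy weighted-average model; the latencies and hit rates are assumed values, not measurements, and the formula is a simplification of the usual average-memory-access-time calculation.

```python
# Toy model of how a cache hides memory latency:
# effective latency = hit_rate * cache_latency + miss_rate * memory_latency.
# All constants are assumed, illustrative values.

L2_LATENCY_NS = 10    # assumed SRAM (L2 cache) access time
MEM_LATENCY_NS = 80   # assumed DRAM round-trip time

def effective_latency_ns(hit_rate):
    return hit_rate * L2_LATENCY_NS + (1 - hit_rate) * MEM_LATENCY_NS

for rate in (0.0, 0.90, 0.99):
    print(f"hit rate {rate:.0%}: {effective_latency_ns(rate):.1f} ns average")
# A 99% hit rate brings the average access time close to SRAM speed.
```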
For the record, the front side bus and memory bandwidth are closely connected, and we will look at that connection more closely later in this test.