The letter "A" denotes an ATI-series card; "N" denotes an NVIDIA-series card.
Graphics chip: for example, GeForce 7300GT.
Memory type: DDRII or DDRIII.
Memory bit width and capacity: for example, 128 bits/128 MB, 256 bits/128 MB, or 256 bits/256 MB.
Bus interface: PCI-E or AGP. PCI-E slots run at 1X to 16X, and a 16X slot tops out at about 8,000 MB/s of combined bandwidth.
These markings are the most intuitive way to read a card's basic specifications.
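Where that 8,000 MB/s figure comes from can be checked with a quick calculation. The following Python sketch assumes PCI Express 1.x signaling (2.5 GT/s per lane with 8b/10b encoding), which matches the era of the cards discussed here; the helper function is illustrative, not from any specification document.

# Rough PCI Express 1.x bandwidth estimate (an illustrative sketch).
# Assumes 2.5 GT/s per lane with 8b/10b encoding (8 data bits per 10 bits on the wire).

def pcie1_bandwidth_mb_s(lanes: int) -> float:
    """Return combined (both directions) PCI-E 1.x bandwidth in MB/s."""
    gt_per_s = 2.5e9              # raw transfers per second per lane
    data_bits = gt_per_s * 8 / 10 # 8b/10b encoding overhead
    per_direction = data_bits / 8 / 1e6  # bits -> bytes -> MB/s
    return per_direction * lanes * 2     # both directions combined

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {pcie1_bandwidth_mb_s(lanes):,.0f} MB/s")
# x16 prints 8,000 MB/s, matching the "top speed of about 8,000 MB/s" above.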
Interface type refers to the connection used between the graphics card and the motherboard. The interface determines the maximum bandwidth available between the graphics card and the rest of the system, that is, the largest amount of data that can be transferred at any instant. It also determines compatibility: a graphics card can only be used if the motherboard provides a matching slot, and different interfaces give the same card different performance ceilings.
Today's 3D games and software place ever-greater demands on graphics cards, and the amount of data that must be exchanged between the motherboard and the graphics card keeps growing. Earlier interfaces can no longer handle such volumes, which is why motherboards provide a dedicated slot for the graphics card. If the transfer speed of the interface cannot keep up with the card, the card's performance is severely limited, and even the best graphics card cannot realize its potential. Over the course of graphics card development, several interfaces have appeared in turn: ISA, PCI, AGP, and PCI Express, each providing more bandwidth than the last. Among them, the PCI Express interface introduced in 2004 has become the mainstream answer to the data-transfer bottleneck between graphics cards and systems, while cards with ISA and PCI interfaces have essentially been phased out.
The maximum resolution of a graphics card is the largest number of pixels it can draw on the display. The picture on the display is made up of pixels, and the data for all of those pixels is supplied by the graphics card; the maximum resolution is simply the largest pixel grid the card can output to the display. The higher the resolution, the more pixels the image contains, the more detail it can show, and the clearer it appears.
The maximum resolution is related to video memory to some extent, because pixel data is stored in video memory before display, so memory capacity can cap the maximum resolution. When early graphics cards carried only 512 KB, 1 MB, or 2 MB of memory, capacity really was the bottleneck for maximum resolution. Today, however, even 64 MB cards have been phased out; mainstream entertainment cards carry 128 MB, 256 MB, or 512 MB, and some professional cards reach 1 GB. At these sizes, memory capacity no longer limits the maximum resolution; such large memories are needed to hold the textures and other 3D data of modern applications, not the frame buffer itself.
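A quick calculation shows why capacity stopped mattering for resolution. The following Python sketch computes the frame-buffer size for several resolutions at 32-bit color; the resolutions come from this article, while the helper function is illustrative.

# Frame-buffer size for a single frame at 32 bits per pixel.

def framebuffer_mb(width: int, height: int, bits_per_pixel: int = 32) -> float:
    """Memory needed for one frame, in MB."""
    return width * height * bits_per_pixel / 8 / (1024 * 1024)

for w, h in [(1024, 768), (1600, 1200), (2048, 1536), (2560, 1600)]:
    print(f"{w}x{h}: {framebuffer_mb(w, h):.1f} MB")
# Even 2560x1600 at 32-bit color needs only about 15.6 MB -- a small fraction
# of a 256 MB card, so the rest of the memory goes to textures and 3D data.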
What actually determines the maximum resolution today is the RAMDAC frequency of the graphics card. The RAMDACs of all current mainstream cards run at 400 MHz, enough for a maximum resolution of at least 2048×1536, while the latest generation of cards reaches as high as 2560×1600.
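To see roughly how RAMDAC frequency maps to resolution, a commonly cited rule of thumb multiplies horizontal pixels, vertical pixels, and refresh rate by a blanking-overhead factor of about 1.344. The factor is an approximation from period literature, not an exact specification, and the sketch below simply applies it.

# Rule-of-thumb RAMDAC requirement (the 1.344 blanking factor is an estimate).

def required_ramdac_mhz(width: int, height: int, refresh_hz: int,
                        blanking_factor: float = 1.344) -> float:
    return width * height * refresh_hz * blanking_factor / 1e6

print(required_ramdac_mhz(2048, 1536, 85))  # ~359 MHz, within a 400 MHz RAMDAC
print(required_ramdac_mhz(2560, 1600, 60))  # ~330 MHz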
Note also that the maximum resolution a graphics card can output does not mean your computer will actually run at it: the monitor must support that resolution as well, so the monitor's maximum resolution needs to match the card's. To run at 2048×1536, for example, both the graphics card and the display must support it. The maximum resolution of a CRT monitor is determined mainly by its bandwidth, while that of an LCD monitor is determined mainly by its panel. Among mainstream displays today, a 17-inch CRT tops out at 1600×1200, and 17-inch and 19-inch LCDs at 1280×1024, so in an ordinary computer system the bottleneck on maximum resolution is the monitor, not the graphics card. Resolutions of 2048×1536 or even 2560×1600 can only be reached with professional large-screen high-end monitors, such as Dell's 30-inch LCD, which supports the ultra-high resolution of 2560×1600.
The display chip is the core chip of the graphics card, and its performance directly determines the performance of the card as a whole. Its main task is to process the video information delivered by the system and to construct and render the image. Different display chips differ greatly in internal architecture, performance, and price. The display chip occupies the same position in a graphics card that the CPU occupies in a computer: it is the core of the whole card. Because display chips are so complex, only a handful of companies, including NVIDIA, ATI, SiS, and 3DLabs, design and manufacture them. All home entertainment graphics cards use a single display chip, while some professional workstation cards use multiple display chips.
The bit width of the display chip is the width of the data bus inside the chip, that is, the number of bits it transfers internally at a time. Mainstream display chips currently use a 256-bit width; a wider bus means more data can be moved in an instant at the same transfer speed, much like valves of different diameters: the larger the diameter, the greater the flow. The chip's bit width sets the bandwidth of its internal bus, and greater bandwidth allows faster computation and higher data throughput, making it one of the key figures for judging a display chip's class. The widest display chip launched so far is 512 bits: the Matrox Parhelia-512, the world's first display chip with a 512-bit width. All mainstream display chips currently on the market, including NVIDIA's GeForce series and ATI's Radeon series, use a 256-bit width; the world's two largest display-chip makers are expected to move to 512 bits within the next few years.
A wider display chip does not automatically mean a stronger chip: display chips are highly integrated, and their design and manufacture demand great technical skill. Emphasizing bit width in isolation means little; the benefit of a wide bus only shows when the other components, the chip design, and the manufacturing process are all well matched to it.
The bit width of the video memory is the number of bits the memory can transfer in one clock cycle; the more bits, the more data can be moved in an instant, making it one of the memory's key parameters. Three memory widths are on the market: 64 bits, 128 bits, and 256 bits. The familiar labels 64-bit, 128-bit, and 256-bit graphics card all refer to this memory bit width. The wider the memory, the higher the performance, and the higher the price; accordingly, high-end cards mostly use 256-bit memory, while mainstream cards are basically 128-bit.
As the formula memory bandwidth = memory frequency × memory bit width ÷ 8 shows, at the same memory frequency the bit width determines the bandwidth. For example, with 128-bit and 256-bit memory both running at 500 MHz, the bandwidths are: 128-bit: 500 MHz × 128 ÷ 8 = 8 GB/s, while 256-bit: 500 MHz × 256 ÷ 8 = 16 GB/s, exactly double.
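The same formula is easy to sanity-check in code; this minimal Python sketch just restates the arithmetic above.

# Memory bandwidth (GB/s) = effective memory frequency (MHz) x bit width / 8.

def memory_bandwidth_gb_s(freq_mhz: float, bus_bits: int) -> float:
    """Bandwidth in GB/s; freq_mhz is the effective (DDR-doubled) frequency."""
    return freq_mhz * bus_bits / 8 / 1000  # MB/s -> GB/s

print(memory_bandwidth_gb_s(500, 128))  # 8.0  GB/s
print(memory_bandwidth_gb_s(500, 256))  # 16.0 GB/s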
The video memory of a graphics card is built from individual memory chips, and the total memory bit width is made up of the widths of those chips: total memory bit width = per-chip bit width × number of chips. Every memory chip carries its manufacturer's part number; you can look the number up online to find the chip's bit width, then multiply by the number of chips on the card to get the card's memory bit width. This is the most accurate method, though the most laborious.
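For instance, applying that method to a hypothetical card (the per-chip width and chip count here are illustrative, not taken from any datasheet):

# Hypothetical chip-counting example: eight chips, each 32 bits wide per its
# looked-up part number, give a 256-bit card.

per_chip_bits = 32
chip_count = 8
total_bit_width = per_chip_bits * chip_count
print(total_bit_width)  # 256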
The memory clock period is the length of one memory clock pulse and is an important measure of memory speed: the faster the memory, the more data is exchanged per unit time, and under otherwise equal conditions the better the card performs. The clock period of video memory is usually given in ns (nanoseconds) and the working frequency in MHz. Period and frequency correspond one to one: working frequency (MHz) = 1 ÷ clock period (ns) × 1000. For example, memory running at 166 MHz has a clock period of 1 ÷ 166 × 1000 ≈ 6 ns.
For DDR SDRAM, DDR2, or DDR3 memory, the working frequency is described by an equivalent output frequency: because data is transferred on both the rising and falling edges of the clock, such memory delivers twice the bandwidth of SDRAM at the same clock frequency and data width. In other words, at the same clock period, the equivalent output frequency of DDR SDRAM is twice that of SDRAM. For example, 5 ns SDRAM works at 200 MHz, while 5 ns DDR SDRAM, DDR2, or DDR3 has an equivalent working frequency of 400 MHz. Common memory clock periods are 5 ns, 4 ns, 3.8 ns, 3.6 ns, 3.3 ns, 2.8 ns, 2.0 ns, 1.6 ns, 1.1 ns, or even lower.
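The period-to-frequency conversion, including the DDR doubling, is a one-liner; this Python sketch reproduces the examples above.

# Working frequency from clock period: MHz = 1 / period(ns) * 1000.
# DDR-style memory transfers on both clock edges, doubling the equivalent frequency.

def working_freq_mhz(period_ns: float, ddr: bool = False) -> float:
    base = 1 / period_ns * 1000
    return base * 2 if ddr else base

print(working_freq_mhz(6.0))            # ~166 MHz SDRAM
print(working_freq_mhz(5.0))            # 200 MHz SDRAM
print(working_freq_mhz(5.0, ddr=True))  # 400 MHz equivalent for DDR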
The core frequency of a graphics card is the working frequency of its display core, and it reflects the core's performance to some extent. A card's performance, however, is determined jointly by core frequency, video memory, pixel pipelines, pixel fill rate, and more, so between different display cores a higher core frequency does not mean a stronger card. For example, the core frequency of the 9600 PRO reaches 400 MHz, higher than the 9800 PRO's 380 MHz, yet the 9800 PRO clearly outperforms the 9600 PRO. Among chips of the same class, though, a higher core frequency does mean better performance, and raising the core frequency is one way to overclock a card. ATI and NVIDIA make the only mainstream display chips, and both supply display cores to third-party manufacturers. Using the same display core, some manufacturers raise the core frequency of their products above the reference clock, running them at a fixed higher frequency to achieve better performance.