The core components of a deep learning server are still the CPU, hard disk, memory, and GPU. Deep learning in particular relies on the GPU's large-scale parallel processing capability, so you should pay close attention to the GPUs' compute power and count; different workloads also place different demands on GPU video memory.
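To get a feel for why video memory requirements differ by workload, a rough back-of-the-envelope estimate helps. The sketch below is a simplified heuristic (my own, not a standard formula): training in FP32 with an Adam-style optimizer keeps roughly four copies of the parameters in memory (weights, gradients, and two optimizer moments), ignoring activations, which can dominate at large batch sizes.

```python
def train_vram_gb(n_params, bytes_per_param=4, copies=4):
    """Approximate VRAM for parameter state during training, in GB.

    copies=4 assumes FP32 weights + gradients + two Adam moments.
    Activations are ignored, so treat this as a lower bound.
    """
    return n_params * bytes_per_param * copies / 1024**3

# A 1-billion-parameter model needs roughly 15 GB for parameter state alone,
# which already crowds a 24GB card once activations are added:
print(round(train_vram_gb(1e9), 1))  # → 14.9
```

This is why a 48GB or 80GB card matters for large models even when its raw compute is no higher than a consumer GPU's.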
Today most people use the RTX3090 for deep learning. The newly released RTX4090 offers roughly twice the single-precision compute of the RTX3090, and both cards have 24GB of video memory. The A100, by contrast, emphasizes double-precision compute and comes in 40GB and 80GB versions, while the A6000 has single-precision performance close to the RTX3090's but with 48GB of video memory, so it is also worth considering.
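The comparison above can be summarized in a small table. The numbers below are commonly cited spec-sheet figures, not measurements of mine; verify them against NVIDIA's official datasheets before making a purchase.

```python
# Commonly cited figures (assumptions; check NVIDIA datasheets):
# FP32 throughput in TFLOPS and VRAM in GB.
gpus = {
    "RTX3090":  {"fp32_tflops": 35.6, "vram_gb": 24},
    "RTX4090":  {"fp32_tflops": 82.6, "vram_gb": 24},
    "A6000":    {"fp32_tflops": 38.7, "vram_gb": 48},
    "A100-80G": {"fp32_tflops": 19.5, "vram_gb": 80},  # FP64 is its strength
}

for name, spec in gpus.items():
    print(f"{name}: {spec['fp32_tflops']} TFLOPS FP32, {spec['vram_gb']} GB")
```

Note how the A100's FP32 number is lower than the consumer cards': its advantage lies in double precision, memory capacity, and datacenter features, not raw single-precision throughput.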
Of course, the deciding factor is usually the budget. The A6000 sells for roughly twice the price of the RTX3090, and the A100 has recently gone for over 100,000; you probably could not buy one anyway, since it is both expensive and out of stock. The RTX3090/4090 are inexpensive and cost-effective, which is why most people choose them for deep learning; that is the market's choice.