NVIDIA's new Ampere architecture: Which GPU to go for?

Dear all,

we are lucky to be acquiring new GPU workstations just in time for the release of NVIDIA's new GPU architecture.
Which card would you go for? Considering rumors that the 3090 will have 24 GB of RAM, why would one go for a professional card?
Has anybody by chance had hands-on experience with the A100 and can give feedback?

Best,
Tarek

Just a few things to consider before you jump in:

  1. As soon as 3090 is released, the prices for 2080Tis will most likely go down.
  2. Although the 3090 has 24GB of VRAM, the power consumption at peak loads is apparently north of 400W.
  3. 3090s also take up 3 slots (vs. 2 slots for 2080Tis), so depending on how many PCIe slots your computer has, you may be limited to only 2 cards (which isn't necessarily a bad thing, though)
  4. 3090s will prefer CUDA 11.0, although they should also work with CUDA 10.x. I don't believe cryosparc has been optimized to work with CUDA 11 yet (@apunjani can give us a better idea of when CUDA 11 support is going to be implemented in cryosparc)

Vamsee

You haven’t been able to buy new 2080Tis all summer, unless from someone with a bunch in a warehouse. Unfortunately the 3070 looks like it could outperform a 2080Ti for $500, but only has 8GB of VRAM. The 3080 has 10GB. Maybe there will eventually be a 3080Ti in the ~16GB range. The 3090 is a monster, it physically won’t even fit in many of the existing server chassis.

How much of an impact would there be processing data on an 8GB or 10GB card these days?

I was hoping there would be more discussion on this topic in the past year, considering the ongoing scarcity of consumer-grade GPUs. The 3080Ti looks great and slightly better than the A5000, with more CUDA cores but less GPU memory, which is only an issue for larger box sizes (I think). I was told the A5000 should have better stability in the long run.
A problem we are currently debating stems from the physical size of these 30xx beasts, which limits the expansion and usability of a workstation. Our potential build can easily fit 4x A-series cards, but only 2x 30xx-series. I want to put 2x old 1080Ti into a system alongside 2x 3080Ti (impossible) or 2x A5000 or 2x A4000 (maybe we can fit 1?) with a Threadripper, and have it be multi-use: 24/7 cryoEM processing on 2 GPUs and moderate use for MD simulations on the 1080Tis. The A-series is so expensive, especially the A5000. I keep wondering if the A4000 is even worth getting, as it does not seem to be a step up from the 2080Ti (but it is compact and we could maybe add 2 more later on). GPU memory is not an issue at our box size for current projects.

Can anyone comment on theoretical cryosparc performance between the A4000, A5000, 2080Ti, and/or 3080Ti? Is the main parameter to consider the number of CUDA cores and the clock speed?
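As a first-order answer to the "CUDA cores and clock speed" question: theoretical peak FP32 throughput is roughly 2 FLOPs per core per cycle × cores × clock. Here is a sketch using spec-sheet numbers quoted from memory (so double-check them); note that real cryosparc throughput also depends heavily on memory bandwidth and I/O, not just compute.

```python
# Theoretical peak FP32 TFLOPS = 2 FLOPs/core/cycle * cores * clock.
# Core counts and boost clocks below are quoted from memory -- verify
# against NVIDIA's spec sheets before relying on them. Real-world
# cryosparc speed also depends on memory bandwidth and file/PCIe I/O.

cards = {
    # name: (CUDA cores, boost clock in GHz)
    "2080 Ti": (4352, 1.545),
    "3080 Ti": (10240, 1.665),
    "A4000":   (6144, 1.560),
    "A5000":   (8192, 1.695),
}

def peak_fp32_tflops(cores: int, clock_ghz: float) -> float:
    """Paper peak: 2 FLOPs (FMA) per CUDA core per cycle."""
    return 2 * cores * clock_ghz / 1000.0

for name, (cores, clock) in cards.items():
    print(f"{name}: ~{peak_fp32_tflops(cores, clock):.1f} TFLOPS")
```

On paper this puts the 3080 Ti ahead of the A5000, consistent with the "more CUDA cores" observation above, but the paper ratio overstates real gains: Ampere's doubled FP32 pipelines don't translate one-to-one into application speedup.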

We have two new systems installed this year, configured with either 2x A100 or 4x A40.
The A100 is definitely a performance boost over the 2080Ti; I would say 50-100% faster depending on the job and memory usage. So far I have never been limited by the 11 GB of RAM on the 2080Ti, so the new cards are overkill.
What one definitely needs to consider is that the A cards are designed for servers and come without fans. Our A40s are rated at 250W and we have problems cooling them, so the system is often throttling down to minimal clocks.

If availability were better, I think the consumer cards would do well.
The larger memory of the A cards could become a game changer if one makes use of Multi-Instance GPU (MIG) to split one card into up to 7 virtual cards.
In most cases, at least for cryosparc, you get done faster by running more jobs in parallel rather than by having a faster card. There's still quite some overhead (file transfer time, etc.) where the top cards have to wait anyway…
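For reference, the MIG splitting mentioned above looks roughly like this on an A100. This is only a sketch: the profile IDs vary by card and driver, the commands need root, and as far as I know MIG is limited to A100-class data-center cards (the A40/A5000 discussed here don't support it), so check `nvidia-smi mig --help` and the output on your own system.

```shell
# Enable MIG mode on GPU 0 (requires root; may need a GPU reset/reboot).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports (names like 1g.5gb).
sudo nvidia-smi mig -lgip

# Example: carve GPU 0 into seven 1g.5gb instances and create the matching
# compute instances in one go. "19" is the 1g.5gb profile ID on an A100
# 40GB -- confirm against the -lgip output on your own card.
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify: each MIG instance now shows up as its own device.
nvidia-smi -L
```

Each instance appears as a separate device, so in principle separate cryosparc worker lanes could each be pointed at one slice for the "more jobs in parallel" approach.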