Nvidia A16 64GB GPU

Hi

We are setting up a new GPU workstation for data processing with CryoSPARC, and our vendor suggested 4x NVIDIA A16 64GB GPUs (A16 GPU : Take Remote Work to the Next Level | NVIDIA).
However, before asking for a quote I would like to make sure this GPU can be used with CryoSPARC. My knowledge about this is practically non-existent, so if anyone can confirm whether these GPUs are worth going for, it would be very much appreciated.

Thanks in advance.
Quinten

From the datasheet, the A16 is a quad-GPU design. So it’s not a single 64GB GPU, but 4x 16GB GPUs on a single board. With four boards, that’s 16 GPUs in total…!

It is allegedly Ampere-based, which does work with CryoSPARC, but that’s going off the datasheet. Comparing datasheets, it looks significantly weaker than even the A4000. If it is Ampere and those specs are accurate, I’d personally avoid it unless it is extremely cheap relative to other options.
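As a sanity check on the enumeration point above, once a box like this is built you can simply ask the driver how many devices it sees. This is only a minimal sketch (assuming nvidia-smi is installed); a machine with 4x A16 boards should list 16 separate 16GB devices, which is also how CUDA applications such as CryoSPARC will see them:

```python
# List every GPU the NVIDIA driver exposes, with its VRAM.
# A 4x A16 system should show 16 separate 16 GB devices, not 4x 64 GB.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

lines = out.strip().splitlines()
print(f"{len(lines)} GPUs visible to the driver")
for line in lines:
    idx, name, mem = [field.strip() for field in line.split(",")]
    print(f"  [{idx}] {name}: {mem}")
```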

Thank you for the reply.

They also suggested the following as an alternative: 4x NVIDIA L4 24GB GPUs (NVIDIA L4 Tensor Core GPU).

Do you think this one is a better alternative compared to A16 or A4000?

Thanks in advance!
Quinten

Those are quite poor GPUs - 72 W board power each. While diminishing returns are a factor, the top consumer card this generation (the 4090) consumes 450 W.

Most people are using the following consumer/gaming cards (ordered oldest/weakest to newest/best):

RTX 2080/2080 Ti (marginal)
RTX 3080/3080 Ti (marginal)
RTX 3090/3090 Ti
RTX 4080 (marginal: expensive, but not much memory)
RTX 4090

For enterprise/professional cards, most people are using:

A4000 (marginal)
A4500 (marginal)
A5000
A40
A100

Ignoring money, I’d avoid anything less than a 4090 or A40 in terms of performance, personally. You’ll have to figure out what fits in your budget, though, and how much performance matters to you. Nvidia has gone out of their way to obfuscate any sort of performance estimate by tier or naming, but they can’t completely obfuscate wattage, so that’s generally a decent guide.


@QuintenG

No, I wouldn’t go with that suggestion either.

It will depend on your budget, but their suggestions are far from ideal for cryo-EM image processing, where the answer to just about every hardware question is: “more”. :rofl:

I’d suggest either a system with 4-8x A5000s (which can handle everything except the biggest boxes and some highly specialised samples) or one with 6-10x A4000s. Those sorts of options perform well and are quite flexible without costing $if-you-have-to-ask-you-can't-afford-it.

We run a mix of hardware ranging wildly in age (I still have a much-loved dual GTX 1080 Ti box working on some smaller projects, and a quad-2080 box which doesn’t get much use now because 8GB of VRAM just isn’t enough any more), but our recent systems run either A4000s (16GB, 6144CCs), A5000s (24GB, 8192CCs) or A6000s (48GB, 10752CCs) (CC = CUDA cores).
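To put those three side by side (same Ampere generation, so the CUDA-core counts are at least roughly comparable), here is a quick sketch using only the figures quoted above:

```python
# Cards in our recent systems, per the paragraph above (VRAM in GB, CUDA cores).
# Figures are the nominal ones quoted there, not a full spec comparison.
cards = {
    "A4000": {"vram_gb": 16, "cuda_cores": 6144},
    "A5000": {"vram_gb": 24, "cuda_cores": 8192},
    "A6000": {"vram_gb": 48, "cuda_cores": 10752},
}

baseline = cards["A4000"]["cuda_cores"]
for name, spec in cards.items():
    rel = spec["cuda_cores"] / baseline
    print(f"{name}: {spec['vram_gb']} GB, {spec['cuda_cores']} CCs "
          f"(~{rel:.2f}x the A4000's core count)")
```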

Of course, if you can afford it, get a box stuffed full of A6000s/A40s and be done with it. The specifications of the A6000 (Ampere) and the A40 (Ampere) are, as far as I can see, the same; one is actively cooled and the other passively, which will impact chassis options…

@ccgauvin94

The power consumption of the 4090 is not a good thing. :wink: It’s entirely possible to cut power draw of the 4090 by nearly a third while only losing around 10% of the performance. Then the crazy coolers wouldn’t have been necessary…
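For anyone who wants to try that, the driver lets you cap board power at runtime. This is a sketch only: it needs root, the allowed range depends on the card and vBIOS, and the 300 W value is just the “roughly a third below 450 W” example from above, not a tested recommendation:

```python
# Cap GPU 0's board power limit to 300 W (roughly a third below a 450 W default).
# Needs root; the allowed range depends on the card and driver.
# Illustrative values only; check `nvidia-smi -q -d POWER` for your card's
# supported range first.
import subprocess

subprocess.run(["nvidia-smi", "-i", "0", "-pl", "300"], check=True)
```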

I dislike suggesting modern consumer cards myself simply because the coolers are so ridiculous - triple- or quad-slot coolers are not compatible with a multi-GPU setup. For a “simple” dual-GPU workstation it’s not outside the realm of possibility, but it’s still not ideal.

I usually check VRAM and CC, but that can get difficult when comparing different generations.

You may additionally want to consider VRAM bandwidth (guide), even though that property may somewhat correlate with the VRAM and CC mentioned earlier.
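If you do want to compare bandwidth across cards yourself, it follows directly from bus width and memory data rate. A small sketch of the formula; the example numbers are the A4000’s publicly listed ones (an assumption on my part, so check the datasheet of whatever you’re actually comparing):

```python
# Peak VRAM bandwidth = (bus width in bits / 8 bytes) * data rate in Gbps per pin.
# Example numbers assume the A4000's publicly listed specs (256-bit bus, 14 Gbps GDDR6).
def vram_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(vram_bandwidth_gbs(256, 14))  # ~448 GB/s for a 256-bit, 14 Gbps card
```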

I know what to do :wink:
Thank you all for the remarks and tips!

Hey Quinten,
would you mind sharing the configuration you decided to go for in the end? I’d be curious!
Ulrich

We built a 16-GPU system with A4000s (Ampere) and are very happy with it. The single-slot design and low power usage meant we could fit 16 into a single chassis.
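For context on why that works, a back-of-the-envelope power budget, assuming the A4000’s nominal 140 W board power (my assumption from the datasheet) and the 450 W 4090 figure quoted earlier in the thread for contrast:

```python
# Rough GPU power budget for a 16-GPU chassis.
# 140 W is the A4000's nominal board power (assumption from the datasheet);
# 450 W is the 4090 figure quoted earlier in the thread, for contrast.
n_gpus = 16
for card, watts in {"A4000": 140, "RTX 4090": 450}.items():
    print(f"16x {card}: ~{n_gpus * watts} W of GPU power")
# 16x A4000: ~2240 W, versus ~7200 W for the same count of 4090s
```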


Hi Ulrich

We decided to go for the A16 GPU for a few reasons: first of all the price (the other, more powerful GPU available was 3x the price), and secondly the configuration and support. In theory we could have gone for the A4000 or an RTX card, but if we had, our vendor would not have taken responsibility for any issues that arose, as they do not support consumer cards in their workstations. Hence we chose the A16.


The A4000 is a professional (formerly Quadro-line) card, not a consumer card, but I’ll be interested in your experiences with those A16 cards. :slight_smile: