No, I wouldn’t go with that suggestion either.
It will depend on your budget, but their suggestions are far from ideal for cryo-EM image processing, where the answer to just about every hardware question is: “more”.
I’d suggest either a system with 4-8x A5000s (can handle everything except the biggest boxes and some highly specialised samples) or one with 6-10x A4000s. Those sorts of options perform well and are quite flexible without costing $if-you-have-to-ask-you-can't-afford-it.
We run a mix of hardware ranging wildly in age (I still have a much-loved dual-GTX 1080 Ti box working on some smaller projects and a quad-2080 box which doesn’t get much use now because 8GB VRAM just isn’t enough any more), but our recent systems run either A4000s (16GB, 6144 CCs), A5000s (24GB, 8192 CCs) or A6000s (48GB, 10752 CCs) (CC = CUDA cores).
Of course, if you can afford it, get a box stuffed full of A6000s/A40s and have done with it. The specifications of the A6000 (Ampere) and A40 (Ampere) are, from what I can see, the same; just one is actively cooled and one passively cooled, which will impact chassis options…
The power consumption of the 4090 is not a good thing. It’s entirely possible to cut the 4090’s power draw by nearly a third while only losing around 10% of the performance. Then the crazy coolers wouldn’t have been necessary…
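For what it’s worth, that kind of power cap is just an nvidia-smi call on Linux. A minimal sketch is below; the 300 W target and GPU index are illustrative (assuming a stock ~450 W 4090), not a tuned recommendation:

```python
#!/usr/bin/env python3
"""Minimal sketch: cap a GPU's power limit via nvidia-smi (needs root).

Values are illustrative: a stock RTX 4090 runs around 450 W, so ~300 W is
roughly the "cut by a third" mentioned above. Adjust to taste.
"""
import subprocess

GPU_INDEX = 0        # which GPU to cap
POWER_LIMIT_W = 300  # illustrative target, not a recommendation

# Enable persistence mode so the limit sticks until reboot/driver reload
subprocess.run(["nvidia-smi", "-i", str(GPU_INDEX), "-pm", "1"], check=True)

# Apply the power cap (nvidia-smi takes the limit in watts)
subprocess.run(["nvidia-smi", "-i", str(GPU_INDEX),
                "-pl", str(POWER_LIMIT_W)], check=True)

# Show the resulting power settings for confirmation
subprocess.run(["nvidia-smi", "-i", str(GPU_INDEX), "-q", "-d", "POWER"],
               check=True)
```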
I dislike suggesting modern consumer cards myself simply because the coolers are so ridiculous - triple- or quad-slot coolers are not compatible with a multi-GPU setup. For a “simple” dual-GPU workstation it’s not outside the realm of possibility, but still not ideal.
I usually check VRAM and CC, but that can get difficult when comparing different generations, since per-core throughput isn’t directly comparable between architectures.
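If it helps, here’s a rough sketch of how I’d pull those two numbers off a box programmatically. It assumes PyTorch with CUDA is installed, and the cores-per-SM lookup only covers the generations mentioned above; treat the resulting CC figure as the usual marketing count (SMs × cores per SM), not a cross-generation performance metric:

```python
#!/usr/bin/env python3
"""Minimal sketch: report VRAM and (approximate) CUDA core count per GPU.

Assumes PyTorch with CUDA. The cores-per-SM lookup only covers the
generations mentioned above (Pascal, Turing, Ampere, Ada); per-core
throughput differs between generations, so treat this as a rough guide.
"""
import torch

# (compute capability major, minor) -> FP32 "CUDA cores" per SM
CORES_PER_SM = {
    (6, 1): 128,  # Pascal (e.g. GTX 1080 Ti)
    (7, 5): 64,   # Turing (e.g. RTX 2080)
    (8, 6): 128,  # Ampere (A4000/A5000/A6000/A40)
    (8, 9): 128,  # Ada (e.g. RTX 4090)
}

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    cc = (props.major, props.minor)
    per_sm = CORES_PER_SM.get(cc)
    cores = props.multi_processor_count * per_sm if per_sm else None
    vram_gb = props.total_memory / 1024**3
    print(f"GPU {i}: {props.name}, {vram_gb:.0f} GB VRAM, "
          f"{props.multi_processor_count} SMs, "
          f"{cores if cores else 'unknown'} CUDA cores (cc {cc[0]}.{cc[1]})")
```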