I have recently installed RTX 5080 GPUs on my server running CryoSPARC v4.6.2, with the latest NVIDIA display driver 570.124.04 and CUDA 12.8 on AlmaLinux 9.5 with all current updates. The RTX 4070 GPUs on the same machine work fine with this configuration, but when I run a CryoSPARC GPU job (here, helical refinement) on the 5080, it ends with this error (last lines of the traceback):
File "/home/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 2888, in __init__
    cc_enum = getattr(binding.CUjit_target,
File "/home/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/enum.py", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: CU_TARGET_COMPUTE_120. Did you mean: 'CU_TARGET_COMPUTE_20'?
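For reference, here is a minimal diagnostic sketch (not a fix) of what the bundled Numba reports for the card; it assumes you run it with the worker's own Python, e.g. via cryosparcw call python:

    # Diagnostic sketch only: ask the bundled Numba what it sees for the new GPU.
    # On an RTX 5080 the compute capability is (12, 0); depending on where the
    # unsupported capability is first mapped, this prints it or raises the same
    # AttributeError as in the traceback above.
    from numba import cuda

    dev = cuda.get_current_device()
    print("Compute capability:", dev.compute_capability)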
Seems like the bundled Numba/CUDA stack needs updating. Is there a workaround?
Thanks and best regards, M
AFAIK CryoSPARC still supports only CUDA 11.x, and 4.6.2 comes with 11.8 bundled (so it doesn't matter what you have installed on your system; it uses its own CUDA). The RTX 5000 series is not supported by CUDA 11, so no new GPUs for now.
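A quick way to confirm this from the worker environment (a sketch, again run with the worker's own Python):

    # Confirm which CUDA toolkit the worker actually uses, regardless of the
    # system-wide CUDA 12.8 installation.
    from numba import cuda

    print("Bundled CUDA runtime:", cuda.runtime.get_version())  # expected (11, 8) on 4.6.2
    cuda.detect()  # lists detected GPUs and whether this Numba build considers them supported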
Hi,
The error says that CU_TARGET_COMPUTE_120 does not exist and asks whether you meant CU_TARGET_COMPUTE_20. In other words, the CUDA stack in use does not recognize compute capability 12.0; the only names it knows correspond to older compute capabilities, such as CU_TARGET_COMPUTE_20.
This points to a mismatch between the CUDA version in use and the GPU's compute capability: the card requires CUDA 12, whereas CryoSPARC 4.6.2 uses CUDA 11.
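To make the failure concrete, here is an illustrative sketch (not actual CryoSPARC or Numba code): Numba builds the enum member name from the device's compute capability, and a pre-Blackwell CUDA binding simply has no member for 12.0.

    from enum import IntEnum

    class CUjit_target(IntEnum):          # stand-in for the real CUDA bindings enum
        CU_TARGET_COMPUTE_20 = 20
        CU_TARGET_COMPUTE_89 = 89
        CU_TARGET_COMPUTE_90 = 90         # a CUDA-11-era build stops around here

    major, minor = 12, 0                  # what an RTX 5080 reports
    name = f"CU_TARGET_COMPUTE_{major}{minor}"
    target = getattr(CUjit_target, name)  # AttributeError: CU_TARGET_COMPUTE_120

The "Did you mean: CU_TARGET_COMPUTE_20?" part is just Python's closest-name suggestion for the missing attribute, not a hint that the older target would work.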
Since the 5000 series cards are built for the newer versions of CUDA, you may try (with caution) removing CUDA 12 from your system, installing instead the CUDA 11.x that comes with CryoSPARC 4.6.2, and seeing if that works. GPUs usually have some backward compatibility, but you'll only know after trying.
All the best.
I'm wondering if this is caused by NVIDIA dropping 32-bit CUDA/OpenCL support, which appears to have caused a lot of grief for plenty of other things as well (Stable Diffusion, PyTorch, games using PhysX…).
If so, I suspect only a major upgrade to all of the CUDA-related bits of CryoSPARC will solve it.
Blackwell has turned into a right mess of a release. Lots of problems, some at the hardware level, which is absolutely ridiculous.
CryoSPARC currently bundles CUDA 11.8 and therefore does not yet support Blackwell devices. However, support for a newer CUDA version and these devices is on the roadmap, though we do not yet have a timeline to share.
Is the RTX 5090 supported with the new 4.7 release? If not, can we still replace the bundled CUDA with a newer version manually? Would that work? We have 5090s now and can't wait to try CryoSPARC on them.
At the moment, we don't support the 5090 cards or the prerequisite CUDA version because major Python packages (e.g., PyTorch) don't yet have official support. Upgrading CUDA once these key dependencies support it is on our roadmap.
If you were to manually upgrade CUDA within the CryoSPARC environment, we are not sure what the effects would be, but some jobs would certainly fail to work (e.g., those that depend on PyTorch).
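For anyone experimenting anyway, here is a quick check (a sketch) of whether a given PyTorch build even targets these devices; sm_120 is the architecture string for compute capability 12.0 Blackwell cards.

    import torch

    print(torch.__version__, torch.version.cuda)
    print(torch.cuda.get_arch_list())  # a Blackwell-capable build includes 'sm_120'
    print(torch.cuda.is_available())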
I tried to build it with the nightly PyTorch 2.8, but Numba is not able to recognize compute capability 12.0. This is where I'm stuck. There might be other dependency problems.
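A quick check of whether the installed Numba stack knows compute capability 12.0 (a sketch, assuming the cuda-python bindings are present in the same environment):

    import numba
    from cuda import cuda as binding  # the cuda-python bindings Numba can use

    print("numba:", numba.__version__)
    print("knows CC 12.0:", hasattr(binding.CUjit_target, "CU_TARGET_COMPUTE_120"))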
Is anyone successfully using CryoSPARC with a 50-series GPU?
I didn't think to check whether it was supported before building my workstation. I have a 5090, and I just installed the nightly PyTorch version in a virtual environment, but I still get the same error as the original poster.
This is because other dependencies don't know sm_120. You need to manually patch the scripts to fall back on an older architecture like sm_90 (see the sketch below). But even then it's not that easy, because nothing is optimized for sm_120 and you cannot flag or patch everything. I spent some hours trying this.
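Purely to illustrate the kind of fallback meant above, a hypothetical helper (not part of Numba or CryoSPARC) that clamps an unknown compute capability to the newest one an older stack knows; code built this way targets sm_90 and may be slower or may not run at all on Blackwell.

    # Hypothetical sketch of the fallback idea; not a recommended or supported fix.
    def clamp_compute_capability(cc, known=((2, 0), (8, 9), (9, 0))):
        candidates = [k for k in known if k <= cc]
        return max(candidates) if candidates else min(known)

    print(clamp_compute_capability((12, 0)))  # -> (9, 0), i.e. fall back to sm_90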
I hope development is fast, because we just bought a workstation with 2x 5090 and 2x 6000 Pro, and since I got rid of my personal older GPUs I'm now sitting on two unusable Blackwell GPUs.