NVIDIA RTX 2080: nvcc fatal : Value 'sm_75' is not defined for option 'gpu-architecture'

A user reported the following error:

Traceback (most recent call last):
  File "cryosparc2_worker/cryosparc2_compute/run.py", line 78, in cryosparc2_compute.run.main
  File "cryosparc2_worker/cryosparc2_compute/jobs/template_picker_gpu/run.py", line 112, in cryosparc2_compute.jobs.template_picker_gpu.run.run
  File "cryosparc2_worker/cryosparc2_compute/jobs/template_picker_gpu/run.py", line 163, in cryosparc2_compute.jobs.template_picker_gpu.run.run
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 549, in fill
    func = elementwise.get_fill_kernel(self.dtype)
  File "<decorator-gen-124>", line 2, in get_fill_kernel
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/tools.py", line 430, in context_dependent_memoize
    result = func(*args)
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 496, in get_fill_kernel
    "fill")
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 161, in get_elwise_kernel
    arguments, operation, name, keep, options, **kwargs)
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 147, in get_elwise_kernel_and_types
    keep, options, **kwargs)
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 75, in get_elwise_module
    options=options, keep=keep)
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 291, in __init__
    arch, code, cache_dir, include_dirs)
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 254, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 78, in compile_plain
    checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
  File "/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 55, in preprocess_source
    cmdline, stderr=stderr)
CompileError: nvcc preprocessing of /tmp/tmpbheC0K.cu failed
[command: nvcc --preprocess -arch sm_75 -I/data/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/cuda /tmp/tmpbheC0K.cu --compiler-options -P]
[stderr:
nvcc fatal   : Value 'sm_75' is not defined for option 'gpu-architecture'
]

CUDA version:

/usr/local/cuda/bin/nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61

This is on a new system with four GeForce RTX 2080 Tis.

I suspect this is a problem with CUDA 8 and the new cards. In the past we’ve had good luck with 1080 Tis, but those are no longer available.

Here’s the cryoSPARC version:

CryoSPARC System master node installed at
/data/cryosparc/cryosparc2_master
Current cryoSPARC version: v2.5.0

What is your CUDA version? We recently found that only CUDA 10 worked for the 2080 Ti (sm_75).
With CUDA 9.2 or 8.0 we got the same error when testing GPU computation in cryoSPARC2.

Edit: just saw that you already said it’s CUDA 8. Yep, that’s the problem.
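
A quick way to confirm which architectures a given nvcc accepts (my own sketch, not from the cryoSPARC docs) is to compile a trivial kernel for sm_75; CUDA 8’s nvcc rejects the flag outright, exactly as in the traceback above:

# Does this nvcc know about sm_75 (Turing)? CUDA 8.0 tops out at
# Pascal (sm_6x), so on CUDA 8 this fails with the same
# "Value 'sm_75' is not defined for option 'gpu-architecture'" error.
echo '__global__ void k() {}' > /tmp/t.cu
/usr/local/cuda/bin/nvcc -arch=sm_75 -c /tmp/t.cu -o /tmp/t.o && echo sm_75 OK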

It’s CUDA 8, as per the requirements at https://cryosparc.com/docs/reference/install/ :

  • must have installed CUDA 8.0. Please note that CUDA versions greater than 8.0 are not yet supported by cryoSPARC.

v2.5.0 now supports CUDA 10 - I think I read it in the update log… Anyway, CUDA 10 worked for the 2080 Ti/CS2 installation that I helped set up in December.

On the other hand, I am not sure how the other programs would react to CUDA 10. I think RELION is OK, but Gctf does not seem to have a CUDA 10 build yet (and I’m not sure whether any of its other builds run on CUDA 10).

It seems to be OK to install both CUDA 8.0 and CUDA 10.0 on the same machine. You can try leaving /usr/local/cuda pointed at /usr/local/cuda-8.0 while also having a /usr/local/cuda-10.0.
Keep LD_LIBRARY_PATH pointed at the symlinked directory:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
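
For reference, the side-by-side layout would look something like this (directory names as above; adjust for your distro):

# Both toolkits live in versioned directories; /usr/local/cuda is
# just a symlink that selects the system-wide default.
ls -l /usr/local/
# cuda -> /usr/local/cuda-8.0
# cuda-8.0/
# cuda-10.0/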

Then during cryoSPARC2 installation (or re-configuration) you can try telling cryoSPARC2 that CUDA is at /usr/local/cuda-10.0. If this doesn’t work, you probably need to point /usr/local/cuda at the 10.0 folder. If anything refuses to run with CUDA 10, it may be simpler to just wait for the developers to update. I think in our case I just left CUDA 10 as the system CUDA in the end.
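
If memory serves, the worker records its CUDA location in cryosparc2_worker/config.sh as CRYOSPARC_CUDA_PATH; that variable name is from my notes rather than the docs, so verify it on your own install before editing. Re-pointing would then be just:

# In /data/cryosparc/cryosparc2_worker/config.sh, point the worker
# at the 10.0 toolkit explicitly instead of at the symlink
# (CRYOSPARC_CUDA_PATH is my recollection of the variable name):
export CRYOSPARC_CUDA_PATH="/usr/local/cuda-10.0"

followed by a cryosparcm restart.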

I do not own the 2080 Ti machine, so these notes are just from the one-day test we did on it.

Hi @mick-t,

@ZhijieLi is correct. We’ve just updated our site! Sorry for any confusion.

Thanks for that. I just noticed it in the update section on my dashboard:

v2.4.6
Nov. 30, 2018

    Update: cryoSPARC now supports CUDA 9 and 10.
    Note: Updating to this version will trigger a reinstallation of cryoSPARC dependencies. These are included in the installation bundle and should not cause any issues but updating could take longer than normal.
    Fixed: Bug in GCTF wrapper that caused problems when run in per-particle local CTF refinement mode.
    Fixed: Bug in exposure curation when some micrographs contained no particles.

I’ll install CUDA 10, repoint the link, and update the thread.

Hi @stephan,

The docs now say:

  • must have installed CUDA

That seems a little vague! :)

Easy reinstall:

cryosparcm update --override

Will run a few jobs and report back. Thanks for the help.

Yes, that did the trick.

First, stop cryoSPARC.

Then as root:

# install the CUDA 10 toolkit and repoint the default symlink
yum install cuda-10-0
cd /usr/local/; rm cuda; ln -s cuda-10.0 cuda

After you’ve installed CUDA 10, force an update as the cryoSPARC user:

cryosparcm update --override
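
To sanity-check the switch before queuing real jobs, something like this (the same trivial-kernel test from earlier in the thread) should now succeed:

# The symlink should now resolve to the 10.0 toolkit...
/usr/local/cuda/bin/nvcc --version    # should report release 10.0
# ...and sm_75 should be accepted:
echo '__global__ void k() {}' > /tmp/t.cu
/usr/local/cuda/bin/nvcc -arch=sm_75 -c /tmp/t.cu -o /tmp/t.o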

Do you know a way around this, as I run other programmes with CUDA 8.0 dependencies?

I have a similar setup: cryoSPARC version 2.14.2 with CUDA 8.0 and 3x 2080 Ti GPUs.

Or otherwise is there a simple way to have 2 functional versions of CUDA on one computer?

Yes, you can install multiple versions of CUDA; I do it all the time. On most Linux systems (I use CentOS at work, which is a Red Hat derivative), /usr/local/cuda is a symbolic link to the last version of CUDA you installed, and whichever version /usr/local/cuda points to is the default for the system. I don’t remember whether you can specify the path cryoSPARC loads CUDA from, and I don’t use cryoSPARC anymore so I can’t test.

To summarize: install multiple versions of CUDA, and point the symbolic link /usr/local/cuda at whichever one you want to be the system default (see the sketch at the end of this post).

If you can’t get that to work, I would suggest creating a new thread in the forum.
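
Concretely, programmes that need CUDA 8.0 can ignore the system default entirely by selecting a toolkit per shell; a minimal sketch, assuming the versioned /usr/local/cuda-* directories described above:

# System default stays CUDA 10 via /usr/local/cuda, but in a shell
# where a program needs CUDA 8, prepend the 8.0 toolkit:
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
nvcc --version    # reports release 8.0 in this shell only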

Perfect, thank you! I’ll give it a go and keep the symbolic link for version 8.0.