3D Variability Analysis Errors - v2.9.0

I tried to run the new 3D variability analysis in v2.9 but I keep getting this error after the 2nd round of initial reconstruction begins. I have ~300K particles in the dataset (box size 320), the resolution is limited to 5-6 Å, and everything else uses the default settings. Any idea what the problem may be?

Initial reconstruction 2 of 2

batch 1 of 297

Traceback (most recent call last):
  File "cryosparc2_worker/cryosparc2_compute/run.py", line 78, in cryosparc2_compute.run.main (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/run.c:3954)
  File "cryosparc2_worker/cryosparc2_compute/jobs/var3D/run.py", line 164, in cryosparc2_compute.jobs.var3D.run.run (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/jobs/var3D/run.c:13113)
  File "cryosparc2_worker/cryosparc2_compute/engine/newengine.py", line 641, in cryosparc2_compute.engine.newengine.EngineThread.project_model (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/engine/newengine.c:15619)
  File "cryosparc2_worker/cryosparc2_compute/engine/newcuda_kernels.py", line 2700, in cryosparc2_compute.engine.newcuda_kernels.slice_volumes (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/engine/newcuda_kernels.c:4607)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 362, in cryosparc2_compute.engine.cuda_core.context_dependent_memoize.wrapper (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/engine/cuda_core.c:11672)
  File "cryosparc2_worker/cryosparc2_compute/engine/newcuda_kernels.py", line 2675, in cryosparc2_compute.engine.newcuda_kernels.get_slice_volumes_kernel (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/engine/newcuda_kernels.c:3810)
  File "/prog/cryosparc2/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 291, in __init__
    arch, code, cache_dir, include_dirs)
  File "/prog/cryosparc2/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 254, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/prog/cryosparc2/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 137, in compile_plain
    stderr=stderr.decode("utf-8", "replace"))
CompileError: nvcc compilation of /tmp/tmpcmoIOp/kernel.cu failed
[command: nvcc --cubin -arch sm_61 -I/prog/cryosparc2/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/cuda kernel.cu]
kernel.cu(130): error: identifier "__shfl_down_sync" is undefined

kernel.cu(254): error: identifier "__shfl_down_sync" is undefined

kernel.cu(315): error: identifier "__shfl_down_sync" is undefined

kernel.cu(378): error: identifier "__shfl_down_sync" is undefined

4 errors detected in the compilation of "/tmp/tmpxft_00006040_00000000-7_kernel.cpp1.ii".

Hi @Itai,

What version of CUDA are you using? This job requires CUDA 9+.

To recompile cryoSPARC with a different CUDA toolkit, use the command:
cryosparc2_worker/bin/cryosparcw newcuda <path_to_cuda>

If you installed cryoSPARC using the “standalone” method (cryosparc2_master and cryosparc2_worker on a single machine), or if your worker nodes all share the same filesystem, you only need to run this command once on one worker node. Otherwise, run it on each worker node connected to your instance.

e.g.: cryosparcw newcuda /usr/local/cuda-10.1
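Putting the steps above together, the full sequence on a standalone workstation looks roughly like this (paths are taken from the traceback above; adjust them to your installation, and note that stopping/starting with cryosparcm applies only where master and worker share a machine):

```shell
# Stop the instance before recompiling (run on the master;
# only relevant if master and worker are on the same machine)
# cryosparcm stop

# Recompile pycuda against the newer toolkit
cd /prog/cryosparc2/cryosparc2_worker
bin/cryosparcw newcuda /usr/local/cuda-10.1

# Restart the instance
# cryosparcm start
```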

Hi @stephan - I just tried this, and after running the command to recompile with CUDA 10.1 (and restarting cryoSPARC) I get a different error, complaining about CUBLAS - thoughts?


Hey @olibclarke,

What GPU did this job acquire (model)? Also, on this workstation, are you able to run nvidia-smi? If you aren’t, you might need to reinstall the CUDA driver (not the toolkit) on that machine.
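A quick way to check both at once, assuming a standard driver install:

```shell
# Print GPU model and driver version; a "driver/library mismatch"
# error here points at a stale kernel module (fix the driver or
# reboot, not the toolkit)
nvidia-smi --query-gpu=name,driver_version --format=csv
```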

Ah you’re right - nvidia-smi was working, but after upgrading to CUDA 10.1 it now gives a driver/library mismatch error… hopefully it will be solved by a reboot…


Upgrading to CUDA 10.1 solved the problem for me, too. Everything seems to be working fine now.
Thanks for the help

Or maybe I celebrated too early. I get the following error whenever I try to downsample the results using the 3D Variability Display job; running it without downsampling works fine.

Traceback (most recent call last):
  File "cryosparc2_worker/cryosparc2_compute/run.py", line 78, in cryosparc2_compute.run.main (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/run.c:3954)
  File "cryosparc2_worker/cryosparc2_compute/jobs/var3D/run_disp.py", line 132, in cryosparc2_compute.jobs.var3D.run_disp.run (/home/installtest/deps_manage/cryosparc2_package/deploy/stage/cryosparc2_worker/cryosparc2_compute/jobs/var3D/run_disp.c:5146)
  File "cryosparc2_compute/newfourier.py", line 215, in ZT
    assert M % 2 == 0

Hi @Itai,

Are you using an even number for the “downsample” parameter?

So this field asks for the total box size in pixels? I thought it meant the actual pixel size (Å). If that’s the case then I understand what went wrong. Thanks!
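For anyone hitting the same assertion: the downsample parameter is a target box size in pixels, and the Fourier cropping it performs requires that size to be even (the `assert M % 2 == 0` in the traceback above). A minimal sketch of that check, with a hypothetical helper name rather than cryoSPARC's actual code:

```python
def validate_downsample_box(M: int) -> int:
    """Validate a target box size (in pixels) for Fourier-space
    downsampling: it must be a positive even integer, mirroring
    the `assert M % 2 == 0` seen in the traceback above."""
    if M <= 0:
        raise ValueError(f"box size must be positive, got {M}")
    if M % 2 != 0:
        raise ValueError(f"box size must be even, got {M}")
    return M

validate_downsample_box(128)   # an even box size passes
# validate_downsample_box(127) # an odd box size raises ValueError
```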

Hello all,

I’m having a problem running 3D variability analysis. I looked through the thread above but can’t see what my problem is.

  1. CUDA and NVIDIA drivers seem OK.

  2. 3D variability analysis started OK (initial reconstruction 1 of 2), but failed during the next reconstruction (2 of 2).

Any suggestions as to what is wrong?

Thank you,

Hi @smv,

What are the contents of your cryosparc2_worker/config.sh file?

Hi @stephan,
Here it is:
export CRYOSPARC_LICENSE_ID="6ab8a7c0-a256-11e8-86b3-9beae664ff8c"
export CRYOSPARC_CUDA_PATH="/usr/local/cuda-8.0"


It looks like your cryoSPARC instance is still compiled against CUDA 8.0. Please follow my instructions in the post above to recompile cryoSPARC (pycuda) using CUDA 10.1.
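Before recompiling, it can be worth confirming which toolkit version the configured path actually contains, e.g.:

```shell
# Report the toolkit version at the path set in config.sh
"$CRYOSPARC_CUDA_PATH/bin/nvcc" --version
# or, using the literal path from the config above:
# /usr/local/cuda-8.0/bin/nvcc --version
```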

Hi @stephan,

Thank you, recompiling with CUDA 10.1 did the trick; it works now.

Thank you for the help,