Patch CTF failing with many errors, including CUDA version mismatch

Hi,
I am new to cryo-EM and trying to process the EMPIAR-10291 tutorial dataset.
I am running CryoSPARC v4.6.2,
installed as a standalone instance in WSL2 on a Windows 11 host.
GPU: NVIDIA RTX 3060 Ti
CUDA: v12.8
NVIDIA driver: v572.16

The Patch CTF step is failing with the following error message in the log for each micrograph.

[CPU: 159.9 MB Avail: 36.94 GB]

Calling CUDA init from 2739
[CPU: 302.9 MB Avail: 36.82 GB]

-- 0.0: processing 0 of 300: J9/imported/005478107464413941270_INX6Ndelnano001.mrc
loading /mnt/e/cryosparc_tutorial/CS-tutorial-jan29-2025/J9/imported/005478107464413941270_INX6Ndelnano001.mrc
Loading raw mic data from J9/imported/005478107464413941270_INX6Ndelnano001.mrc …
Done in 0.04s
Processing …
[CPU: 340.9 MB Avail: 36.74 GB]

-- 0.0: processing 1 of 300: J9/imported/017639521123493342501_INX6Ndelnano002.mrc
loading /mnt/e/cryosparc_tutorial/CS-tutorial-jan29-2025/J9/imported/017639521123493342501_INX6Ndelnano002.mrc
Loading raw mic data from J9/imported/017639521123493342501_INX6Ndelnano002.mrc …
Done in 0.03s
Processing …
[CPU: 250.4 MB Avail: 36.74 GB]

Error occurred while processing J9/imported/005478107464413941270_INX6Ndelnano001.mrc
Traceback (most recent call last):
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 59, in exec
return self.process(item)
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 108, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 114, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 115, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 66, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 122, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/cryosparc_compute/gpu/gpuarray.py", line 216, in fill
from .elementwise.fill import fill
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/cryosparc_compute/gpu/elementwise/fill.py", line 24, in <module>
def fill(arr, x, out):
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/np/ufunc/decorators.py", line 203, in wrap
guvec.add(fty)
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/np/ufunc/deviceufunc.py", line 475, in add
kernel = self._compile_kernel(fnobj, sig=tuple(outertys))
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/vectorizers.py", line 241, in _compile_kernel
return cuda.jit(sig)(fnobj)
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/decorators.py", line 133, in _jit
disp.compile(argtypes)
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/dispatcher.py", line 932, in compile
kernel = _Kernel(self.py_func, argtypes, **self.targetoptions)
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
return func(*args, **kwargs)
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/dispatcher.py", line 103, in __init__
self.cooperative = 'cudaCGGetIntrinsicHandle' in lib.get_asm_str()
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/codegen.py", line 119, in get_asm_str
return self._join_ptxes(self._get_ptxes(cc=cc))
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/codegen.py", line 131, in _get_ptxes
arch = nvvm.get_arch_option(*cc)
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/nvvm.py", line 414, in get_arch_option
arch = find_closest_arch((major, minor))
File "/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/nvvm.py", line 387, in find_closest_arch
raise NvvmSupportError(msg)
numba.cuda.cudadrv.error.NvvmSupportError: No supported GPU compute capabilities found. Please check your cudatoolkit version matches your CUDA version.
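
In case it helps with debugging: the NvvmSupportError above comes from numba failing to find a compute capability it can target. A minimal way to check what the worker environment actually sees (my suggestion, assuming the worker env exposes numba as the traceback indicates):

# List the GPUs the worker is configured to use:
./bin/cryosparcw gpulist
# Ask numba which devices it detects and whether they are supported:
./bin/cryosparcw call python -c "from numba import cuda; cuda.detect()"

cuda.detect() prints each device with a [SUPPORTED] flag, which should show whether the toolkit in the worker env recognizes the RTX 3060 Ti (compute capability 8.6).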

The CUDA toolkit and driver were freshly installed on Windows, followed by the CUDA toolkit installation in WSL2.
nvcc and nvidia-smi show the same CUDA version, 12.8, but the run is still failing.
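
For reference, a quick way to compare what each layer reports side by side (a minimal sketch; the compute_cap query field assumes a reasonably recent driver):

# What the driver stack reports (driver version and GPU compute capability):
nvidia-smi --query-gpu=driver_version,compute_cap --format=csv,noheader
# What the toolkit on the PATH reports:
nvcc --version | grep release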

Please help.
Much appreciated.

Update: I uninstalled v4.6.2 and installed v4.3.1. That is running fine now!
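
One possible explanation (my assumption, not confirmed): v4.4+ ships its own bundled CUDA toolkit, while v4.3.x compiles against the external toolkit chosen at install time, which on those older versions can be re-pointed with:

# Point a v4.3.x worker at the CUDA 11.8 toolkit actually installed in WSL2:
./bin/cryosparcw newcuda /usr/local/cuda-11.8

So on v4.3.1 the worker builds against the CUDA 11.8 install that matches its pycuda/numba stack, which may be why the downgrade works here.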

Very interesting. Could you please post the outputs of these commands on the CryoSPARC worker:

uname -a
nvidia-smi
cd /path/to/cryosparc_worker/
pwd
./bin/cryosparcw call which nvcc
./bin/cryosparcw call nvcc --version
./bin/cryosparcw call python -c "import pycuda.driver; print(pycuda.driver.get_version())"

Here are the outputs of those commands:

Linux DESKTOP-VA3O0NG 5.15.167.4-microsoft-standard-WSL2 #1 SMP Tue Nov 5 00:21:55 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 572.16         CUDA Version: 12.8   |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 Ti     On  | 00000000:04:00.0 Off |                  N/A |
| 30%   28C    P0             32W / 200W  |      0MiB / 8192MiB  |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|=======================================================================================|
|  No running processes found                                                            |
+---------------------------------------------------------------------------------------+

/mnt/c/Linux_folder/cryosparc/cryosparc_3.2/cryosparc_worker
(Note: the directory is named cryosparc_3.2 because it was created when I first installed CryoSPARC 3.2, but the instance is now v4.3.1.)

/usr/local/cuda-11.8/bin/nvcc

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

(11, 8, 0)
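
For completeness, the same environment can also report the GPU's compute capability through pycuda, in the same idiom as the commands above (a hedged extra check; an RTX 3060 Ti should print (8, 6)):

# Ask pycuda which compute capability the device reports:
./bin/cryosparcw call python -c "import pycuda.driver as drv; drv.init(); print(drv.Device(0).compute_capability())"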

I hope this helps with configuring future versions for users like me!

Best