Template picker error

[CPU: 303.7 MB] Traceback (most recent call last):
  File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/template_picker_gpu/run.py", line 59, in cryosparc_compute.jobs.template_picker_gpu.run.run
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 29, in cryosparc_compute.engine.cuda_core.initialize
pycuda._driver.LogicError: cuInit failed: forward compatibility was attempted on non supported HW

Welcome to the forum @yang1.

Please can you post the outputs of the following commands

  1. On the CryoSPARC master
    cryosparcm cli "get_job('P99', 'J199', 'job_type', 'version', 'instance_information', 'status', 'killed_at', 'started_at', 'params_spec')"
    
    where you replace P99 and J199 with the failed job’s actual project and job IDs, respectively
  2. On the CryoSPARC worker host on which the job failed
    hostname
    uptime
    nvidia-smi
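
For background: the cuInit failure above is raised by PyCUDA before any CryoSPARC job code runs, and typically indicates that the NVIDIA kernel driver and the user-space CUDA libraries no longer match (for example, after a driver update without a reboot). As a minimal sketch, assuming pycuda is importable after activating the worker environment with eval $(/path/to/cryosparc_worker/bin/cryosparcw env), the same check can be reproduced with:

    # pycuda.driver.init() wraps cuInit() and raises pycuda._driver.LogicError
    # when the driver/library combination is unusable
    python -c "import pycuda.driver as cuda; cuda.init(); print(cuda.Device.count(), 'CUDA device(s) visible')"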
    

20:26:31 up 36 min, 1 user, load average: 1.42, 2.55, 1.66

cryosparcm cli "get_job('P99', 'J199', 'job_type', 'version', 'instance_information', 'status', 'killed_at', 'started_at', 'params_spec')"
Traceback (most recent call last):
  File "/home/meilab/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/meilab/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/meilab/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/client.py", line 84, in <module>
    print(eval("cli."+command))
  File "<string>", line 1, in <module>
  File "/home/meilab/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/client.py", line 57, in func
    assert False, res['error']
AssertionError: {'code': 500, 'data': None, 'message': "OtherError: argument of type 'NoneType' is not iterable", 'name': 'OtherError'}

@yang1 The get_job() command may have failed because it did not contain the correct project and job IDs. Please can you re-run (on the CryoSPARC master) the command

cryosparcm cli "get_job('P99', 'J199', 'job_type', 'version', 'instance_information', 'status', 'killed_at', 'started_at', 'params_spec')"

after replacing P99 and J199 with the actual project and job IDs of the job where you observed the error, and post the command’s output?
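
For example, if the failed job were J15 in project P2 (hypothetical IDs shown only for illustration; use your own), the substituted command would read:

cryosparcm cli "get_job('P2', 'J15', 'job_type', 'version', 'instance_information', 'status', 'killed_at', 'started_at', 'params_spec')"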

Please also post the outputs for these commands

  1. on the CryoSPARC master:
    cryosparcm cli "get_scheduler_targets()"
    
  2. on the CryoSPARC worker (which may or may not be the same machine as your CryoSPARC master), in a fresh shell. For the first command, please substitute the actual absolute path to the cryosparcw command on your CryoSPARC worker:
    eval $(/path/to/cryosparc_worker/bin/cryosparcw env) # replace /path/to with the actual path
    env | grep PATH
    /sbin/ldconfig -p | grep -i cuda
    uname -a
    nvidia-smi
    which nvcc
    nvcc --version
    python -c "import pycuda.driver; print(pycuda.driver.get_version())"
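
These outputs should show whether the NVIDIA driver, the CUDA toolkit, and PyCUDA agree on versions. As one additional sketch (assuming the cryosparcw environment from step 2 is still active): pycuda.driver.get_driver_version() queries the installed driver without calling cuInit, so it usually still works even when cuInit fails:

    # get_version(): CUDA version PyCUDA was compiled against, e.g. (11, 2, 0)
    # get_driver_version(): CUDA version the driver supports, as an integer, e.g. 11040 = 11.4
    python -c "import pycuda.driver as drv; print(drv.get_version(), drv.get_driver_version())"

A driver version lower than the version PyCUDA was compiled against would be consistent with the "forward compatibility was attempted on non supported HW" message.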