"IndexError: list index out of range" When queueing a job

Just installed cryoSPARC as a standalone installation on an Exxact workstation running CentOS 7. The installation seemed to go smoothly.

Launched the program from the cryosparc_user account and opened the web interface.
I can create projects and workspaces without issue, but when trying to import movies/files from either the 20S test data or my own data following the tutorial, I get the error “list index out of range” when the queue button is hit.

Any pointers would be appreciated.

Thanks. Neal.

Results from running cryosparcm log command_core

Jobs Queued: [(u'P1', u'J5'), (u'P2', u'J1'), (u'P3', u'J1')]
Licenses currently active : 0
Now trying to schedule J5
Need slots : {}
Need fixed : {}
Master direct : True
Scheduling directly onto master node c105732
Not a commercial instance - heartbeat set to 12 hours.
Traceback (most recent call last):
File "cryosparc2_command/command_core/__init__.py", line 187, in background_worker
scheduler_run_core() # sets last run time
File "cryosparc2_command/command_core/__init__.py", line 1636, in scheduler_run_core
'lane' : lanes[0]['name'],
IndexError: list index out of range
****** Scheduler Failed ****

Hey @mwaxham,

Thanks for providing us with your logs. Is it possible that the installation didn’t complete successfully? One thing you can do to check is run the worker node connection function, which will throw errors if the installation failed. To do this:

  1. Change directory to the worker directory:
    cd cryosparc2_worker

  2. Execute the worker connection command:

    ./bin/cryosparcw connect --master <master_hostname> --worker <worker_hostname> --port <port_num> --ssdpath <ssd_path>
    

    In your case, the <master_hostname> and <worker_hostname> values will be the same, since you installed cryoSPARC using the standalone installation script.

  3. If the command executes without any tracebacks or errors, try running the job again, as it should work. If it does return an error, paste it here.
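For a standalone installation, the invocation above collapses to something like the following sketch. The port and cache path are assumptions for illustration (39000 is cryoSPARC's default base port); substitute your own values:

```shell
# Hypothetical standalone reconnect: master and worker are the same host.
MASTER=$(hostname)             # standalone: master and worker coincide
WORKER=$MASTER
PORT=39000                     # cryoSPARC's default base port (assumed)
SSD=/scratch/cryosparc_cache   # assumed SSD cache path

# Only attempt the reconnect if the worker directory is actually present.
if [ -d cryosparc2_worker ]; then
    cd cryosparc2_worker
    ./bin/cryosparcw connect --master "$MASTER" --worker "$WORKER" \
        --port "$PORT" --ssdpath "$SSD"
fi
```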

Running this command from inside the cryosparc2_worker directory did produce an error: AssertionError: Path /scratch/cryosparc_cache does not exist.

I just reran the command with the --nossd flag and it executed without error. As soon as this ran successfully, jobs that were hung completed without additional prompting on my part, and the movies appeared in the import job window(s).

So, all good at this point, hopefully the rest of the steps will also work seamlessly.

I am not sure why the --ssdpath /scratch/cryosparc_cache option was a problem, as I had it in the original install. If this will create issues in other processing steps, I would like to know how I can fix it.
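The AssertionError suggests the connect command simply checks that the cache path exists before registering it. If SSD caching is wanted later, one plausible fix is to create the directory and re-register the worker. This is a sketch under assumptions: /scratch is taken to be the intended SSD mount, and the --update flag is assumed to modify the existing worker entry rather than add a new one.

```shell
CACHE=/scratch/cryosparc_cache   # path from the AssertionError above

# Create the cache directory (may need privileges if /scratch is root-owned),
# then make it writable by the cryoSPARC user.
mkdir -p "$CACHE" 2>/dev/null || sudo mkdir -p "$CACHE"

# Re-register the worker with caching enabled (guarded so it only runs
# from the install directory).
if [ -d cryosparc2_worker ]; then
    cd cryosparc2_worker
    ./bin/cryosparcw connect --master "$(hostname)" --worker "$(hostname)" \
        --port 39000 --ssdpath "$CACHE" --update
fi
```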

Thanks for the help.

Spoke too soon. The next step is drift correction and while the interface seemed to work as described in the tutorial, the following error messages appeared:

License is valid.

Launching job on lane default target c105732 ...

Running job on master node hostname c105732

[CPU: 70.9 MB]   Project P1 Job J7 Started

[CPU: 70.9 MB]   Master running v2.14.2, worker running v2.14.2

[CPU: 71.0 MB]   Running on lane default

[CPU: 71.0 MB]   Resources allocated: 

[CPU: 71.0 MB]     Worker:  c105732

[CPU: 71.0 MB]     CPU   :  [0, 1, 2, 3, 4, 5]

[CPU: 71.0 MB]     GPU   :  [0]

[CPU: 71.0 MB]     RAM   :  [0, 1]

[CPU: 71.0 MB]     SSD   :  False

[CPU: 71.0 MB]   --------------------------------------------------------------

[CPU: 71.0 MB]   Importing job module for job type patch_motion_correction_multi...

[CPU: 163.8 MB]  Job ready to run

[CPU: 163.8 MB]  ***************************************************************

[CPU: 164.1 MB]  Job will process this many movies:  20

[CPU: 164.1 MB]  parent process is 113296

[CPU: 132.0 MB]  Calling CUDA init from 113436

[CPU: 280.5 MB]  -- 0.0: processing 0 of 20: J6/imported/14sep05c_00024sq_00003hl_00002es.frames.tif
        loading /home/cryosparc_user/empiar_10025_subset/P1/J6/imported/14sep05c_00024sq_00003hl_00002es.frames.tif
        Loading raw movie data from J6/imported/14sep05c_00024sq_00003hl_00002es.frames.tif ...
        Done in 3.27s
        Loading gain data from J6/imported/norm-amibox05-0.mrc ...
        Done in 0.09s
        Processing ...

[CPU: 504.2 MB]  Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1685, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 153, in thread_work
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 157, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 160, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 161, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 103, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 549, in fill
    func = elementwise.get_fill_kernel(self.dtype)
  File "<decorator-gen-124>", line 2, in get_fill_kernel
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/tools.py", line 432, in context_dependent_memoize
    result = func(*args)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 496, in get_fill_kernel
    "fill")
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 161, in get_elwise_kernel
    arguments, operation, name, keep, options, **kwargs)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 147, in get_elwise_kernel_and_types
    keep, options, **kwargs)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/elementwise.py", line 75, in get_elwise_module
    options=options, keep=keep)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 291, in __init__
    arch, code, cache_dir, include_dirs)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 254, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 78, in compile_plain
    checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/compiler.py", line 55, in preprocess_source
    cmdline, stderr=stderr)
CompileError: nvcc preprocessing of /tmp/tmp4iS0Mx.cu failed
[command: nvcc --preprocess -arch sm_75 -I/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/cuda /tmp/tmp4iS0Mx.cu --compiler-options -P]
[stderr:
nvcc fatal   : Value 'sm_75' is not defined for option 'gpu-architecture'
]

[CPU: 164.3 MB]  Outputting partial results now...

[CPU: 164.3 MB]  Traceback (most recent call last):
  File "cryosparc2_worker/cryosparc2_compute/run.py", line 82, in cryosparc2_compute.run.main
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 349, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi
AssertionError: Child process with PID 113436 has terminated unexpectedly!

Hi @mwaxham,

Please take a look at this post:

Thanks, I had found these links searching through the forum. On my workstation, there is presently a soft link pointing cuda to v9.2. CUDA 10 is already installed on this workstation. I’m hesitant to change the pointer to CUDA 10.0, which may impact the half-dozen other programs I’ve installed that access the GPU.

A specific question: is there a way to configure cryoSPARC with a pointer directly to the CUDA version it needs to run (10, apparently, although this is not noted anywhere in the installation), leaving the present soft link to 9.2 intact? If so, what exactly is the set of commands that need to be run for cryoSPARC to establish the proper CUDA call?

Thanks again.

Hi @mwaxham,

When you install cryoSPARC using the standalone installation method, you can specify the path to the CUDA toolkit after --cudapath. Since you’ve already installed the cryoSPARC worker, you can recompile your worker.

To recompile cryoSPARC with a different CUDA toolkit, use the command:
cryosparc2_worker/bin/cryosparcw newcuda <path_to_cuda>
e.g. ./bin/cryosparcw newcuda /usr/local/cuda-10
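Before recompiling, it can be worth confirming which toolkits are installed and that the chosen nvcc actually knows the sm_75 (Turing) architecture from the earlier error. A rough sketch, with the CUDA 10.0 path an assumption to adjust for your system; the existing cuda-9.2 symlink is never touched:

```shell
CUDA10=/usr/local/cuda-10.0   # assumed toolkit location; adjust as needed

# List the toolkits installed side by side.
ls -d /usr/local/cuda* 2>/dev/null

# nvcc from CUDA >= 10.0 lists sm_75 among the allowed --gpu-architecture
# values in its help text; CUDA 9.2's nvcc does not.
"$CUDA10/bin/nvcc" --help 2>/dev/null | grep -c 'sm_75'

# Recompile the worker against that toolkit (run from the install directory).
[ -x cryosparc2_worker/bin/cryosparcw ] && \
    cryosparc2_worker/bin/cryosparcw newcuda "$CUDA10"
true
```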