Running T20s Benchmark Workflow on a SLURM Cluster


I have been testing a CryoSPARC installation on a SLURM cluster by running the T20s Benchmark Workflow job, to confirm that CryoSPARC executes correctly before the cluster is opened up to end users.

On a Standalone installation (on different hardware) the workflow runs fine.

I have found that the whole process fails after the ‘Blob Picker’ step because the ‘Inspect Picks’ step does not respond within the required 120 s.

To work around this, I modified the file “cryosparc_master/cryosparc_compute/jobs/workflows/”, raising the timeout in the assert statements from 120 s to 360 s inside the functions get_curated_blob_picks() and run_rigid_local_motion_bench().
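For illustration only, the kind of change I made looks like the sketch below; the function and variable names here are hypothetical stand-ins, not CryoSPARC's actual code:

```python
import time

TIMEOUT_S = 360  # raised from the original 120 s limit

def wait_for_job_complete(get_status, timeout_s=TIMEOUT_S, poll_s=5):
    """Poll a job's status until it reports 'completed' or the timeout elapses.

    `get_status` is a hypothetical callable standing in for however the
    workflow queries job state; only the timeout logic is the point here.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == 'completed':
            return
        time.sleep(poll_s)
    # The benchmark originally asserts after 120 s; the edit raises this.
    assert False, f"job did not complete within {timeout_s} s"
```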

I then restarted CryoSPARC and started the job again.

That change worked, but the process then failed again after the ‘Refinement New’ step. Both ‘3D Class’ steps failed with the same error:

[CPU: 916.2 MB]  Traceback (most recent call last):
  File "cryosparc_worker/cryosparc_compute/", line 85, in
  File "cryosparc_worker/cryosparc_compute/jobs/class3D/", line 635, in
  File "/mnt/userdata/jvanschy/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/", line 411, in check_output
  File "/mnt/userdata/jvanschy/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/", line 488, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/mnt/userdata/jvanschy/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/", line 800, in __init__
    restore_signals, start_new_session)
  File "/mnt/userdata/jvanschy/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'zip': 'zip'
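For context, this FileNotFoundError is raised by Python's subprocess module itself when the program it is asked to launch cannot be resolved on PATH, which suggests a missing external tool rather than a CryoSPARC bug. A minimal reproduction of the same failure mode:

```python
import subprocess

# subprocess.Popen raises FileNotFoundError (Errno 2) when the
# executable name cannot be found on PATH -- the same error as above.
try:
    subprocess.run(['zip', '--version'], check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print('zip is available')
except FileNotFoundError:
    print('zip is missing from PATH')
```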

Any help would be greatly appreciated.
I am running CryoSPARC 3.3.1.


@ozej8y The traceback suggests that the zip package has not been installed on the relevant computer. Does installation of zip fix the problem?
Inspect particle picks runs on your master node. That node may be less powerful and/or may be under heavier compute load than your cluster’s other nodes, causing the job to take longer than expected. Timing information for some steps of the Inspect particle picks job is available under the job’s Overview tab; you may have to push Show from top to see all the information. That information may help identify a performance bottleneck that you may be able to address.
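A quick way to check for this class of problem on each node is a PATH pre-flight check; zip is the tool implicated by your traceback, and any other external tools your jobs shell out to could be added to the list:

```python
import shutil

def missing_tools(tools):
    """Return the subset of `tools` that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# 'zip' is the executable named in the traceback; extend as needed.
for tool in missing_tools(['zip']):
    print(f'install {tool} on this node (e.g. apt install {tool} on Ubuntu)')
```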

Thanks again. Yes, zip was not installed on the Ubuntu system.

I’ll have a closer look at identifying the performance bottleneck.
Your support is much appreciated.