I recently updated the CryoSPARC installation on our university cluster to 5.0.1 and tried submitting jobs through the Slurm scripts I had been using for the same purpose on CS 4.7. However, none of the jobs start after launching.
The following seems to occur every time, for all jobs that run on a worker node:
The job launches, and the squeue command shows status as R
This immediately turns to CG, and then the job disappears from the list of jobs being run.
If there is an issue with how a job is set up, earlier versions used to show the job as failed; the updated version, however, still shows the job status as launched, and the event log throws the following error:
I have not changed or updated the config.sh files in either the cryosparc_master or cryosparc_worker directory. I also checked whether the cluster_info.json file that cryosparc cluster example slurm generates in the updated version has any extra command that calls $HOSTNAME, but so far I have not found any. Any thoughts?
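For anyone comparing their own setup, a minimal cluster_info.json for a Slurm lane follows the schema below; all values here are generic placeholders, not the original poster's configuration:

```json
{
    "name": "slurmcluster",
    "worker_bin_path": "/opt/cryosparc/cryosparc_worker/bin/cryosparcw",
    "cache_path": "/scratch/cryosparc_cache",
    "send_cmd_tpl": "{{ command }}",
    "qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
    "qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
    "qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
    "qinfo_cmd_tpl": "sinfo"
}
```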
Update: I reverted back to v4.7.1, and the jobs (that can work on v4.7.1) seem to be running fine now, which makes me wonder whether something has changed in cryosparcm cluster connect, since it does not seem to be a broken-installation issue?
@Arunabh thank you for your post. The error "Missing required argument master_hostname" indicates that the cryosparc_worker bundle was not successfully updated. When you next update, could you verify the contents of the cryosparc_worker/version file and confirm that it contains 5.0.1?
Note that for clusters, you have to update the worker manually: Copy the downloaded cryosparc_master/cryosparc_worker.tar.gz installation bundle into cryosparc_worker, navigate to the cryosparc_worker folder and run bin/cryosparcw update
This is a brand new cluster, and the contractor installed v4.7.x.
I updated today, and I can confirm that my workers are also updated:
[cryosparc@gpubig01 cryosparc_worker]$ cat version
v5.0.1
Checking the logs, the workers were searching for the config.sh from the master installation.
In the job*slurm.stderr there were these lines:
[2026-02-02T15:23:49.693] error: couldn't chdir to '/opt/cryosparc/cryosparc_master': No such file or directory: going to /tmp instead
So on the master node, the master installation lives in /opt/cryosparc/cryosparc_master/bin/cryosparcm
and on the workers, the worker installation is here /opt/cryosparc/cryosparc_worker
So on each worker, we generated this:
[cryosparc@gpubig01 cryosparc]$ ls /opt/cryosparc/cryosparc_master/
config.sh
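A sketch of that workaround, assuming the layout described above (master installed only on the master node, workers under /opt/cryosparc/cryosparc_worker). MASTER_DIR would be /opt/cryosparc/cryosparc_master on a real worker; the sketch defaults to a temp directory and a placeholder config.sh so it is safe to run anywhere:

```shell
# Give each worker a stub master directory containing a copy of config.sh.
# On an actual worker, set MASTER_DIR=/opt/cryosparc/cryosparc_master and
# copy the real config.sh from the master node instead of the placeholder.
MASTER_DIR="${MASTER_DIR:-$(mktemp -d)/cryosparc_master}"
mkdir -p "$MASTER_DIR"
printf 'export CRYOSPARC_LICENSE_ID="<license-id>"\n' > "$MASTER_DIR/config.sh"
ls "$MASTER_DIR"
```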
And now, queuing works as intended.
I guess the smoking gun is that, somehow, the workers request the config.sh from the master path.
Edit:
Obviously, the workers want even more from the master folder:
cat job.log
ERROR: ld.so: object '/opt/cryosparc/cryosparc_master/.pixi/envs/master/lib/libpython3.12.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
================= CRYOSPARC =================
Project P2 Job J8
Master login Port 61000
===========================================================================
MAIN PROCESS PID 47361
2026-02-02 16:26:13,564 core monitor INFO | MONITOR PROCESS PID 47373
The workaround from Version 5 breaks Slurm scripts - #2 by wtempel seems to be working ok, but it's still unexpected that CS-5.x doesn't seem to have been tested on SLURM properly before the release.
You don't have to have a matching /opt/cryosparc/cryosparc_master/ on the worker nodes.
CryoSPARC v5.0.2 has been released and fixes an issue where, in some environments, cluster job submission scripts would fail with ERROR: ld.so: object 'xxx/lib/libpython3.12.so' from LD_PRELOAD cannot be preloaded due to environment variables that were not correctly cleared.
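The ld.so message itself is generic glibc loader behavior, not something CryoSPARC-specific: whenever LD_PRELOAD names an object the dynamic loader cannot open, it prints this warning on stderr and runs the command anyway. It can be reproduced with any dynamically linked command (the path below is deliberately nonexistent):

```shell
# glibc's loader warns about an unopenable LD_PRELOAD object on stderr,
# then executes the command normally (exit status is unaffected).
LD_PRELOAD=/nonexistent/libdemo.so true 2>&1
```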
The #SBATCH --export=NONE slurm option we proposed earlier is no longer necessary.
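For context, that option goes in the #SBATCH header of the cluster submission script template; a minimal sketch (job name and body lines are placeholders, using CryoSPARC's usual template variables):

```shell
#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --export=NONE    # do not propagate the submitting shell's environment
{{ run_cmd }}
```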
Hi @wtempel, worker jobs fail with "Missing option '--license'." after upgrading to 5.0.2.
CRYOSPARC_LICENSE_ID is set in config file.
EDIT: fixed it by adding CRYOSPARC_LICENSE_ID to cryosparc_worker's config.sh. Previously it was set only in master's config and was working fine. Is this the new expected behavior?
CRYOSPARC_LICENSE_ID is always supposed to exist in the worker's config.sh file. We're unsure why it was missing in this case, but now that it's been added back, CryoSPARC should function normally.
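For anyone hitting the same error, a minimal cryosparc_worker/config.sh with the variable set might look like this (the value is a placeholder for your own license ID):

```shell
# cryosparc_worker/config.sh
export CRYOSPARC_LICENSE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```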