How to assign default lane to cluster

In one cryoSPARC installation we have two worker nodes that were connected to the master node and are now in the “default” lane. Later I created a cluster so I could use Slurm for resource allocation and job scheduling. We would like the cluster lane to be the default instead of the original default lane consisting of our two worker nodes. Is there a way to do that?

Thanks!

Steve

Hi @stevew,

I’d say you can delete the current default lane using the command:
cryosparcm cli "remove_scheduler_lane('<name_of_lane>')" # make sure you include the double quotes around the call
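
For example, assuming the existing lane is literally named "default" (check the actual lane name first; the cli call takes the lane name as a quoted Python string):

cryosparcm cli "get_scheduler_targets()" # list the registered targets and the lanes they belong to
cryosparcm cli "remove_scheduler_lane('default')" # remove the lane named "default"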

Then connect the cluster to cryoSPARC as a new lane (and call it default, or whatever you’d like): https://cryosparc.com/docs/reference/install/#installation-cluster
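
Roughly, the lane name comes from the "name" field of cluster_info.json, so something like the following should recreate a lane called "default" (the path is a placeholder; start from the cluster_info.json and cluster_script.sh templates in the install guide, adapted to your Slurm setup):

cd /path/to/your/cluster_config # directory containing cluster_info.json and cluster_script.sh
# set "name": "default" in cluster_info.json, then register it as a scheduler lane:
cryosparcm cluster connect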

Thanks for the suggestion! I stumbled across the same solution earlier and it worked.

Regards,
Steve

Sorry to unearth an old discussion, but I wanted to report one catch with this solution. The solution itself works well: all jobs are now submitted to SLURM.

But somehow cryosparcm now treats this as a cluster installation, even though everything was done on a standalone installation on a workstation (where the master and worker are both installed, with no worker node other than the workstation itself). This changes nothing for users, but it complicates administration a little because cryosparcm patch operations now must follow the procedure for cluster installations, i.e. they no longer patch the worker automatically (despite it being on the same computer as the master). This is not difficult once one knows what to do and why, but I was puzzled at first when this command would always leave the worker unpatched. I don’t know whether this also affects cryosparcm update.
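
For anyone who runs into the same thing, the cluster-style patch procedure looks roughly like the sketch below. The flags and file handling here are from memory and may differ between versions, so treat this as an outline and follow the current patch documentation for the exact steps:

cryosparcm patch # on the master; in a cluster-type installation this no longer patches the worker
cryosparcm patch --download # download the worker patch package to the current directory (flag assumed from the guide)
# copy the downloaded patch file into the cryosparc_worker directory, then apply it there:
cd /path/to/cryosparc_worker
./bin/cryosparcw patch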

One might consider SLURM on a “workstation” cryoSPARC installation an “edge” case, well-justified under certain circumstances. I am curious whether you have tried the “Single Workstation” patch procedure and found it not to work in your single workstation SLURM setup?

Yes, this is exactly what I tried: cryosparcm patch on the master, which is also where the worker is since it’s a single workstation. Only the master got patched. I noticed because the next job I tried to run failed with a master/worker version mismatch error. So I read the docs again and followed the manual procedure to patch the worker, as for a cluster installation. This has happened with the last two patches (May and August 2022).

I agree SLURM on a single workstation is a corner case. I set it up so that all programs submit to SLURM (cryoSPARC, RELION, etc.); this way I don’t have to check whether the workstation is idle before running something, regardless of which program. It is especially helpful when several users share the same workstation.

Thank you @Guillaume for this info.
I have checked with our team and can now also confirm that in an instance where workers are assigned to jobs exclusively via a cluster scheduler, the worker needs to be patched using the “cluster” procedure even if that “cluster” is a single master/worker computer.
