Reserving nodes for CS Live on a cluster?

I am curious about the recommended setup for CS2 / CS2 Live. Specifically, would I need to have 2 separate installations, one that can only see nodes for the full processing pipeline and another that only sees separate nodes reserved for on-the-fly processing? Or can I get away with a single installation and somehow mark nodes as reserved exclusively for CS Live to keep them from being queued in SLURM? We have a very small cluster, and keeping the two nodes (one per microscope) in the general queue would mean that they would never be available on demand for Live processing.
Thank you!

Hello @andreym,

If you have an install running on a SLURM cluster, you could create lanes based on SLURM partitions, named something like “p.cryosparc1” and “p.cryosparc2”.
That will not prevent people from using those partitions, but it will make obvious what they are dedicated to. In short, yes, we do have dedicated partitions (and queues).
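As a sketch of that idea (all node and partition names below are placeholders, not taken from this thread), the dedicated partitions might be declared in `slurm.conf` alongside the general one:

```shell
# /etc/slurm/slurm.conf (fragment) -- hypothetical node/partition names
# Two live nodes (one per microscope), kept out of the general queue:
NodeName=live01 CPUs=32 RealMemory=192000 Gres=gpu:4
NodeName=live02 CPUs=32 RealMemory=192000 Gres=gpu:4

# General-purpose partition for the full processing pipeline:
PartitionName=p.cryosparc1 Nodes=node[01-04] Default=YES State=UP
# Partition reserved for on-the-fly (Live) processing:
PartitionName=p.cryosparc2 Nodes=live[01-02] Default=NO State=UP
```

Nothing here hard-blocks other users from `p.cryosparc2`; the naming just signals its purpose, as noted above.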


Hi @andreym,

For your situation, this is not necessary. CryoSPARC allows you to have multiple “lanes” to which you can assign different worker nodes, cluster partitions, or cluster queues, and these lanes are available from both cryoSPARC Live and the main cryoSPARC interface.
You can follow @jucastil’s advice for configuring SLURM.
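As a sketch of how one lane is registered (the paths, partition name, and resource values are placeholders, not your actual configuration): each lane is defined by a `cluster_info.json` plus a `cluster_script.sh` submission template, then connected with `cryosparcm`:

```shell
# cluster_info.json -- one lane; all paths and names are placeholders
cat > cluster_info.json <<'EOF'
{
  "name": "p.cryosparc1",
  "title": "Main processing partition",
  "worker_bin_path": "/opt/cryosparc/cryosparc_worker/bin/cryosparcw",
  "cache_path": "/scratch/cryosparc_cache",
  "send_cmd_tpl": "{{ command }}",
  "qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
  "qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
  "qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
  "qinfo_cmd_tpl": "sinfo"
}
EOF

# cluster_script.sh pins jobs from this lane to one SLURM partition
cat > cluster_script.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --partition=p.cryosparc1
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --mem={{ (ram_gb*1000)|int }}M
#SBATCH --output={{ job_log_path_abs }}
{{ run_cmd }}
EOF

# Register the lane (run from the directory holding both files)
cryosparcm cluster connect
```

The `--partition` line in the script template is what ties the lane to a partition; the `"name"` field is how the lane appears in the queueing dialog.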

See “Connect a Cluster”:

Thank you, I will go ahead and look through the provided resources and suggestions. I appreciate your help!

Hi @stephan,
Do you know if I can upgrade my current cryoSPARC installation to Live, or would I need to uninstall it completely and reinstall using the beta key I have for the Live version?

Thank you

Hi @andreym,

You don’t need to uninstall cryoSPARC. You can update your existing installation to the “live” version of cryoSPARC:
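As a rough sketch of the in-place route (the exact steps for a beta/Live build may differ, so treat this as an outline rather than the official procedure):

```shell
# Hedged sketch: update an existing cryoSPARC install in place.
# If a new (beta) license key is required, update CRYOSPARC_LICENSE_ID
# in cryosparc_master/config.sh before running the update.
cryosparcm update    # downloads and applies the release for your license
cryosparcm restart   # restart master services after the update
```

The database, projects, and user accounts are preserved across an in-place update, so no reinstall is needed.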

For all the documentation related to cryoSPARC Live, please see:

Hi @stephan,

Could you please help with the configuration of the JSON file for multiple lanes? Per our earlier discussion, we have normalized the 4 nodes on the computing cluster and the 2 standalone nodes for on-the-fly processing, and updated to SLURM version 20.02.5. Currently, there are 3 partitions. cryoSPARC Live sees the 3 lanes; however, it can only launch jobs on the main cluster. Submitting to either of the on-the-fly nodes results in jobs starting but producing no output (and no errors either). Perhaps it is something to do with the configuration of our cluster_info.json file? Could you please provide an example of the format for more than one partition?

Thank you
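For reference, a single `cluster_info.json` does not list several partitions; each lane is its own `cluster_info.json`/`cluster_script.sh` pair, connected once per partition. A sketch with placeholder names and paths (not the poster's actual values):

```shell
# Hedged sketch: one lane per partition. Each directory holds its own
# cluster_info.json (with a unique "name") and a cluster_script.sh whose
# #SBATCH --partition line matches that lane's partition.
for lane in p.cryosparc1 p.cryosparc2 p.cryosparc3; do
  cd /opt/cryosparc/lanes/"$lane" || exit 1
  cryosparcm cluster connect   # registers (or updates) this lane
done
```

If the on-the-fly lanes start jobs that produce no output, a common culprit is the `--output` path or `worker_bin_path` in those lanes' files pointing at locations the standalone nodes cannot reach, so checking each lane's script template against its nodes' mounts is a reasonable first step.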