Job submission queue

Hi,

cryoSPARC has a nice job submission queue, which seems to work fine. The only problem with it is that it is unaware of other processes running on the host.

For us with limited amounts of resources, it is hard to dedicate a GPU server to only cryoSPARC jobs.

Will cryoSPARC support queuing systems like SLURM in the future?

Or will there be a way to print the command for each run instead of submitting it, so that it can afterwards be submitted with e.g. SLURM?
I know this of course requires you to expand the multi-user support significantly, but it would be nice to have this feature in the future.

Cheers,
Jesper


This is something we hope to have in the future but as you said it’s not too easy for us to implement at the moment. This is definitely recorded in our internal issue/feature tracker though.

Ali

Sounds great.
I will look forward to this.

Cheers,
Jesper

In the meantime I made a simple hack that sort of works.

The attached bash script puts a reservation in the SLURM queue.
When the resources on the cryoSPARC node are allocated, the node is put on hold for a given time, waiting for a cryoSPARC job to start. When the node is allocated, the user is also notified by email that the node is on hold and ready for cryoSPARC jobs to be started.

If a cryoSPARC job (or multiple jobs) is started within the on-hold period, the script keeps the SLURM allocation until the last cryoSPARC job is done.
If no cryoSPARC job is started within the on-hold period, the allocation is released thereafter.

The script needs to be customized to your particular environment, but that should not be too hard.
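For readers without access to the linked file, a minimal sketch of the approach described above might look like the following. This is not the actual cryoSPARC_SLURM script; the hold/poll durations, the `pgrep` pattern used to detect cryoSPARC worker processes, and the `mail` notification are all assumptions that would need adjusting to your site.

```shell
#!/bin/bash
# Hypothetical sketch: hold a SLURM allocation for interactively-started
# cryoSPARC jobs, releasing it once the last job finishes.

HOLD_SECONDS=600   # how long to wait for a cryoSPARC job to appear (assumed)
POLL_SECONDS=30    # how often to re-check for running jobs (assumed)

# Returns 0 if any cryoSPARC worker process is running on this node.
# The process-name pattern is an assumption; adjust to your installation.
cryosparc_running() {
    pgrep -f "cryosparc" > /dev/null
}

# Notify the user that the node is allocated and on hold.
notify_user() {
    echo "Node $(hostname) allocated; cryoSPARC may be started now." \
        | mail -s "cryoSPARC node ready" "$USER"
}

# Main hold loop: wait up to HOLD_SECONDS for a cryoSPARC job to start,
# then keep the allocation alive until the last cryoSPARC job is done.
# Returns 1 if no job appeared within the on-hold period.
hold_for_cryosparc() {
    notify_user
    local waited=0
    while ! cryosparc_running; do
        sleep "$POLL_SECONDS"
        waited=$((waited + POLL_SECONDS))
        if [ "$waited" -ge "$HOLD_SECONDS" ]; then
            echo "No cryoSPARC job started; releasing allocation."
            return 1
        fi
    done
    while cryosparc_running; do
        sleep "$POLL_SECONDS"
    done
    echo "Last cryoSPARC job finished; releasing allocation."
}

# Entry point (commented out so the functions can be sourced or tested):
# hold_for_cryosparc
```

A script like this would itself be submitted as the SLURM job, e.g. `sbatch --gres=gpu:4 hold_for_cryosparc.sh` (exact options depend on your cluster), so that SLURM sees the node as busy while cryoSPARC runs outside its control.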

This hack is of course not a perfect solution, as it does not start cryoSPARC jobs directly. Still, it lets cryoSPARC work in a SLURM queuing environment where the GPU nodes are also used for many other jobs.
I hope somebody can use this while waiting for SLURM submission to be integrated into cryoSPARC.

Cheers,
Jesper

P.S. just found out I can only attach pictures to this site, so here’s a link to the file instead: http://www.bioxray.au.dk/~jelka/Scripts/cryoSPARC_SLURM


Just wanted to say we are in a similar situation: we are happy with cryoSPARC so far, but we find it difficult to integrate it into our existing pipeline and to use the great resources we already have available (namely, powerful GPU nodes on our university-hosted cluster).
It would be very useful for many people to have cryoSPARC "experiments" available as command-line tools that we could e.g. submit to the cluster queue or integrate into our on-the-fly processing pipeline managed with FOCUS.

@jelka thanks for this script, our cluster is currently migrating from SGE to SLURM and this may come in handy!