Hi, is there a way to queue jobs on specific GPUs without having them all run at the same time?
On our workstation we have different users, for both RELION and cryoSPARC, and depending on the workload we book the use of specific GPUs.
Unfortunately the “Queue directly to GPU” option overrides the scheduler, so it is not possible to queue multiple successive jobs (for example overnight).
Is there a way to specify the GPU to use and also schedule the jobs according to the resources available?
As a workaround (albeit an inelegant one), have you considered setting up separate worker lanes for each GPU?
You can get around the issue of hostname duplication with ssh_config host aliases. There are oddities in how cache-locks are (not) honoured under such conditions, e.g. when jobs sent to separate lanes happen to cache the same input data to ssdpath. However, if this is a common occurrence in your workflow, you can specify a unique ssdpath for each worker lane as well.
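For illustration, the ssh_config alias trick might look something like this (hostnames are entirely illustrative, adjust to your setup); each alias then registers as a distinct worker hostname even though both resolve to the same machine:

```
# ~/.ssh/config on the master node (illustrative names)
Host gpu0-worker
    HostName workstation.example.edu

Host gpu1-worker
    HostName workstation.example.edu
```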
Alternatively, if you only ever have one cryoSPARC user sharing the workstation with RELION jobs, you can have a single worker lane for which the gpu configuration can be updated when required.
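As a sketch of the per-GPU lane setup (all paths, hostnames, and lane names below are illustrative; please check `cryosparcw connect --help` on your version for the exact options), the worker could be connected once per ssh alias, and later reconfigured with `--update`:

```shell
# Run on the worker, once per GPU alias (names/paths illustrative)
/path/to/cryosparc_worker/bin/cryosparcw connect \
    --worker gpu0-worker --master master-host --port 39000 \
    --gpus 0 --ssdpath /scratch/cryosparc_gpu0 \
    --newlane --lane gpu0

# For the single-lane alternative: reassign GPUs when needed
/path/to/cryosparc_worker/bin/cryosparcw connect \
    --worker workstation --master master-host --port 39000 \
    --update --gpus 1
```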
More than a year has passed, and I would like to ask whether there is now any easy solution for standalone workstations as discussed above… I understand that lane configuration works well for clusters.
I do not think so. The challenging part would be to configure the cluster manager and job template(s) to fit your needs. You could then use a command like `cryosparcm cluster connect` on your existing CryoSPARC installation to update the lane and target configurations stored in the CryoSPARC database.
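If it helps, `cryosparcm cluster connect` reads a `cluster_info.json` (together with a `cluster_script.sh` job template) from the current directory. A minimal sketch for a Slurm-managed workstation might look like the following; all values are illustrative and would need to match your scheduler and install paths:

```
{
    "name": "gpu0-lane",
    "worker_bin_path": "/path/to/cryosparc_worker/bin/cryosparcw",
    "cache_path": "/scratch/cryosparc_cache",
    "send_cmd_tpl": "{{ command }}",
    "qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
    "qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
    "qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
    "qinfo_cmd_tpl": "sinfo"
}
```

The scheduler (Slurm in this sketch) then handles the queueing and GPU booking that the built-in "Queue directly to GPU" option bypasses.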