Hi,
I am running CryoSPARC 4.5.3 (we would need to update it) on a cluster using the SLURM job scheduler. Submitting CryoSPARC jobs through the “Queue to Lane” option works well. Besides submitting jobs to the queue, we now also allow users to log in to a compute node directly, start a CryoSPARC session there, and utilise the GPUs available on that node. With the “Queue to Lane” option, however, they would not use the resources of that node; the job would instead be queued to another compute node.
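For context, the SLURM lane is registered with the usual cluster_info.json / cluster_script.sh pair via “cryosparcm cluster connect”. A minimal sketch of what our cluster_info.json looks like (lane name, paths and command templates below are placeholders, not our exact values):

    {
        "name": "slurm-lane",
        "worker_bin_path": "/path/to/cryosparc_worker/bin/cryosparcw",
        "cache_path": "/scratch/cryosparc_cache",
        "send_cmd_tpl": "{{ command }}",
        "qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
        "qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
        "qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
        "qinfo_cmd_tpl": "sinfo"
    }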
I read that one could use the “Run on specific GPU” option, but in that case the “Queue” button is not active. I think CryoSPARC is aware of the GPU cards on that node, because if I execute “cryosparcw gpulist”, I see the correct output for the H100 cards on that compute node.
Something that has worked so far is to run the worker executable manually in a terminal on the compute node, passing the project and job IDs (here for the Patch Motion Correction step of https://guide.cryosparc.com/processing-data/get-started-with-cryosparc-introductory-tutorial#step-3-download-the-tutorial-dataset):
$CRYOSPARC_WORKER_BIN_PATH run --project P3 --job J3 --master_hostname compute.node --master_command_core_port 38000
The job is then picked up by the CryoSPARC session in the web browser on the compute node. Although this works, I wanted to know if you have a better and cleaner solution, perhaps by tweaking some variables so that the “Run on specific GPU” option becomes usable in the GUI. Or is there another way to work interactively with the resources (CPU/GPU) of the compute nodes?
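In case it helps to pinpoint what is missing: my (untested) guess is that “Run on specific GPU” only becomes available for nodes registered as connected workers, rather than nodes reached through a cluster lane, i.e. something along these lines run on the compute node (hostnames, base port, SSD path and lane name are placeholders):

    $CRYOSPARC_WORKER_BIN_PATH connect --worker compute.node --master master.node --port <base_port> --ssdpath /scratch/cryosparc_cache --lane on-node-gpus --newlane

Would that be the recommended approach here, or would it interfere with the existing SLURM lane?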