I wanted to run a local resolution estimation job on a specific GPU because it has more VRAM than the others, but that GPU was already busy with another job. The box size is quite large and I was worried that Fourier padding might exceed the VRAM of the other GPUs. Instead of waiting for the running job to complete, as I expected, the local resolution job immediately started running alongside it!
Funnily, this hasn’t caused either job to crash (although both are running more slowly than anticipated)… perhaps because it’s a 48 GB GPU and neither job uses enough VRAM to leave the other with too little?
Is this behaviour expected (immediate start on manual assignment)?
We took note of your concern. Unfortunately, it is not currently possible to block Run on Specific GPU on instances whose GPU scheduler lanes are node-type (as opposed to cluster-type). You may want to ask users of your CryoSPARC instance not to use Run on Specific GPU unless overriding the scheduler is intentional and appropriate.
Thanks, I guess I’ll have to change the queues, then, to offer an alternative way of selecting a specific compute node.
Out of curiosity, why was GPU selection implemented so that it skips the resource availability check? I can see a use for both features, but I wouldn’t expect them to be combined in a single option.
… can be implemented by creating scheduler lanes that each contain a single, specific target node, for example along the lines of the sketch below. Selecting a specific node is not the intended use case of the Run on Specific GPU option.
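A rough sketch of what that per-node lane setup could look like, assuming the standard `cryosparcw connect` flags; hostnames, port, and cache path below are placeholders, so please check the worker installation documentation for your version before running anything:

```
# Hypothetical example: register each worker into its own single-node lane,
# so users can queue a job to a specific machine via the lane selector.
# Hostnames, port, and SSD cache path are placeholders.

# On worker node gpu01:
/opt/cryosparc/cryosparc_worker/bin/cryosparcw connect \
    --worker gpu01.example.org \
    --master cryosparc-master.example.org \
    --port 39000 \
    --ssdpath /scratch/cryosparc_cache \
    --lane gpu01 --newlane

# Repeat on each additional worker node (e.g. gpu02) with its own
# --worker hostname and a matching --lane name.
```

With lanes set up this way, users pick the node by queuing to its lane, and the scheduler still performs its normal resource availability checks.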
Still, I think an option to skip the queue should be reserved for admin users or users who are also allowed to set job priorities. Also, since it is not really intuitive that selecting a specific GPU skips the resource checks, it is easy to misuse this unintentionally.
That’s why I am asking for the reason behind this implementation.