Manually assigning a GPU causes immediate job start even when that GPU is already in use by CryoSPARC

Hi CryoSPARC team,

I wanted to run a local resolution estimation job on a specific GPU because it has more VRAM than the others, but that GPU was already busy with another job. The box size is quite large, and I was worried that Fourier padding might push the memory footprint beyond the VRAM of the other GPUs. Instead of waiting for the running job to complete, as I expected, the local resolution job immediately started running alongside it!
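For a sense of scale, here is the back-of-envelope arithmetic I did before picking the card. It's only a rough sketch: the padding factor and buffer count are my own guesses for illustration, not CryoSPARC's actual allocation strategy.

```python
# Rough VRAM estimate for FFTs of a Fourier-padded cubic volume.
# The padding factor and number of simultaneous buffers are assumptions
# for illustration, not what CryoSPARC actually allocates.

def padded_fft_vram_gb(box_size, pad_factor=2.0, n_buffers=4,
                       bytes_per_voxel=8):  # complex64 voxels
    """Upper-bound estimate of VRAM (GiB) for padded-volume FFTs."""
    n_pad = int(box_size * pad_factor)
    return n_buffers * n_pad ** 3 * bytes_per_voxel / 1024 ** 3

for box in (256, 400, 512):
    print(f"box {box}: ~{padded_fft_vram_gb(box):.1f} GiB")
```

With these assumptions, a 512-voxel box padded to 1024 already needs on the order of 32 GiB, which is why I wanted the large card.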

Funnily enough, neither job has crashed (although both are running more slowly than expected)… perhaps because it's a 48 GB GPU and the two jobs together aren't using enough VRAM to starve one another?
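For what it's worth, I've started doing a quick free-memory check before manually assigning anything. This is just a small sketch using the nvidia-ml-py (pynvml) bindings; it reports the same information nvidia-smi shows:

```python
# Print per-GPU free/total memory so you can see whether a device has
# headroom before manually assigning a job to it.
# Requires the nvidia-ml-py package (imported as pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {mem.free / 1024**3:.1f} GiB free "
              f"of {mem.total / 1024**3:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```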

Is this behaviour expected (immediate start on manual assignment)?

Thanks,
R

Yes, this is expected.

I see, thanks.

I just ran into the same thing on our instance. Unfortunately, we had jobs crash because they ran out of GPU memory.

Is there a way to disable this behavior?

To me it seems counterintuitive that selecting a specific GPU skips the queue.

We have taken note of your concern. Unfortunately, it is not currently possible to block Run on Specific GPU on instances whose GPU scheduler lanes are node-type rather than cluster-type. You may want to ask the users of your CryoSPARC instance not to use Run on Specific GPU unless overriding the scheduler is deliberate and appropriate.

Thanks, I guess I'll have to change the queues then, to offer an alternative way of selecting a specific compute node.
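In case it's useful to anyone else, here's the kind of setup I have in mind: one cluster-type lane per GPU node, so users can target a node while still going through the scheduler. This is only a sketch assuming a SLURM cluster; the node names and paths are placeholders, and the cluster_info fields are modeled on the SLURM example in the CryoSPARC docs, so adapt everything before running `cryosparcm cluster connect`.

```python
# Generate one CryoSPARC cluster lane per GPU node (SLURM assumed).
# Node names, paths, and resource directives below are placeholders.
import json
from pathlib import Path

NODES = ["gpu-node-01", "gpu-node-02"]  # placeholder node names
WORKER = "/path/to/cryosparc_worker/bin/cryosparcw"  # placeholder path

# cluster_script.sh template: {{ ... }} variables are filled in by
# CryoSPARC at submission time; {node} is filled in by this script.
SCRIPT_TMPL = """#!/bin/bash
#SBATCH --job-name=cryosparc_{{{{ job_uid }}}}
#SBATCH --nodelist={node}
#SBATCH --gres=gpu:{{{{ num_gpu }}}}
{{{{ run_cmd }}}}
"""

for node in NODES:
    lane = Path(f"lane_{node}")
    lane.mkdir(exist_ok=True)
    info = {
        "name": f"slurm-{node}",
        "worker_bin_path": WORKER,
        "cache_path": "/scratch/cryosparc_cache",
        "send_cmd_tmpl": "{{ command }}",
        "qsub_cmd_tmpl": "sbatch {{ script_path_abs }}",
        "qstat_cmd_tmpl": "squeue -j {{ cluster_job_id }}",
        "qdel_cmd_tmpl": "scancel {{ cluster_job_id }}",
        "qinfo_cmd_tmpl": "sinfo",
    }
    (lane / "cluster_info.json").write_text(json.dumps(info, indent=4))
    (lane / "cluster_script.sh").write_text(SCRIPT_TMPL.format(node=node))
    print(f"wrote lane templates in {lane}/")
```

With one lane per node, picking the "right" GPU node becomes an ordinary queue choice, and SLURM still enforces resource availability.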

Out of curiosity, why was GPU selection implemented to skip the resource availability check? I can see a use for both features, but I wouldn't expect them to be combined in a single option.