Estimated GPU memory requirements?


Would it be possible to show the estimated peak GPU memory requirements for a job, given the job parameters, and perhaps display a warning flag if this exceeds the total memory on the GPU?

I’m trying to run a large multireference refinement, and it keeps failing with a CUDA memAlloc error. I am progressively reducing the number of classes, but it would be nice to have an estimate prior to job submission if that is possible. It would also help when submitting a job to a GPU that already has a RELION job running on it, for example.


Of course Ali may correct me, but I assume it’s the same as the normal requirement times the number of classes. That is, box size cubed, times 20 bytes, times the number of classes.

That means a 768 px box will just fit on a 12 GB card for normal refine. It suggests that if the box size is 384, then 8 classes should work (on a 12 GB card). Maybe the scaling is a little worse in multirefine.
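The back-of-the-envelope rule above can be sketched in a few lines of Python. Note this is only a heuristic from this thread, not a documented spec: the factor of 20 bytes per voxel and the linear scaling with class count are assumptions, and the function names are made up for illustration.

```python
def estimate_peak_gpu_bytes(box_px: int, n_classes: int = 1,
                            bytes_per_voxel: int = 20) -> int:
    """Rough peak GPU memory estimate: box^3 * 20 bytes * number of classes.

    The 20 bytes/voxel factor is an assumption from this discussion,
    not a documented constant, and real usage may scale worse in
    multireference refinement.
    """
    return box_px ** 3 * bytes_per_voxel * n_classes


def fits_on_card(box_px: int, n_classes: int, card_gb: float = 12) -> bool:
    # Treats the card size as decimal gigabytes with no headroom; the
    # driver and other jobs reserve memory, so this is optimistic.
    return estimate_peak_gpu_bytes(box_px, n_classes) <= card_gb * 1e9
```

By this estimate, a 768 px box with one class and a 384 px box with eight classes both come out to the same ~9.1 GB, matching the examples above.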

That’s the thing: that’s what I’d expect too, but the default box size for multirefine is 128 px, and I still can’t run it with more than 8 classes without running out of GPU memory - that’s why I was asking.