Out of memory error in homogeneous refinement

Dear community,

I am processing a particle set with a box size of 1000 px. When running homogeneous refinement with optimize per-particle defocus and optimize per-group CTF params turned on, I keep getting out-of-memory errors after a few iterations. I am already using the maximum resources my facility will allocate (8 GPUs and 376 GB of RAM). Is there a way to reduce the memory usage?

Thanks a lot in advance.

Best wishes,
Haoming

Error:
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory

Just to double-check: are you limited by the pixel size? Is the refinement's reported resolution hitting the Nyquist limit?
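As a quick back-of-the-envelope check (the pixel size and target resolution below are hypothetical placeholders, not values from your dataset):

```python
import math

# The best resolution a refinement can reach is twice the pixel size
# (the Nyquist limit). All numeric values here are illustrative.
pixel_size_A = 0.85          # hypothetical pixel size in Angstroms
box_size_px = 1000           # box size from the original post

nyquist_A = 2 * pixel_size_A
print(f"Nyquist limit: {nyquist_A:.2f} A")   # -> 1.70 A

# If the refinement converges well above Nyquist (say 3 A), Fourier
# cropping to a smaller box loses no usable signal. The smallest box
# whose Nyquist still covers the target resolution (rounded up to even):
target_res_A = 3.0           # hypothetical converged resolution
new_box_px = math.ceil(box_size_px * nyquist_A / target_res_A / 2) * 2
new_pixel_A = pixel_size_A * box_size_px / new_box_px
print(f"Downsample to {new_box_px} px -> pixel size {new_pixel_A:.2f} A, "
      f"Nyquist {2 * new_pixel_A:.2f} A")    # -> 568 px, Nyquist ~2.99 A
```

In practice you would leave some margin above this minimum so the refinement is not capped exactly at its current resolution.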

Refinement jobs in cryoSPARC are not parallelized across multiple GPUs, so you are limited by the memory of the single card running the job. For example, our RTX 3090 cards (24 GB) can process 600 px particles, but our RTX 2080 Ti cards (11 GB) cannot.
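To see why the box size dominates, here is a rough scaling sketch. The number of volumes and the dtype are assumptions for illustration; cryoSPARC's actual memory layout differs, but the cubic scaling is the point:

```python
# Refinement holds several 3D arrays of size box^3 on the GPU
# (half-maps, masks, padded/CTF-related volumes, ...).
# n_volumes and bytes_per_voxel below are assumed, not measured.
def volume_gb(box, n_volumes=6, bytes_per_voxel=8):  # complex64 = 8 bytes
    return n_volumes * box**3 * bytes_per_voxel / 1e9

for box in (600, 1000):
    print(f"box {box}: ~{volume_gb(box):.1f} GB for the volumes alone")
# box  600: ~10.4 GB  -> plausible on a 24 GB card, not on 11 GB
# box 1000: ~48.0 GB  -> exceeds any single consumer GPU
```

Because memory grows with the cube of the box size, going from 600 px to 1000 px multiplies the footprint by roughly 4.6x, which is consistent with a 1000 px box failing even on large cards.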

Unless you truly need a box size of 1000, I would downsample the particles to a size that lets the job run without limiting the achievable resolution; a conceptual sketch of the downsampling follows below.
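cryoSPARC has a Downsample Particles job for this, which works by Fourier cropping. For intuition, here is a minimal numpy sketch of the idea; it is an illustration, not cryoSPARC's actual implementation:

```python
import numpy as np

def fourier_crop(img: np.ndarray, new_box: int) -> np.ndarray:
    """Downsample a square image by cropping its Fourier transform.

    Keeps all signal up to the new (coarser) Nyquist frequency.
    A sketch of the principle only, not cryoSPARC's code.
    """
    old_box = img.shape[0]
    f = np.fft.fftshift(np.fft.fft2(img))          # center the DC term
    lo = (old_box - new_box) // 2
    f_cropped = f[lo:lo + new_box, lo:lo + new_box]  # keep low frequencies
    out = np.fft.ifft2(np.fft.ifftshift(f_cropped)).real
    return out * (new_box / old_box) ** 2          # preserve mean intensity

particle = np.random.rand(1000, 1000).astype(np.float32)  # dummy 1000 px image
small = fourier_crop(particle, 568)
print(small.shape)   # (568, 568)
```

The cropped particles carry the same information up to the new Nyquist limit, with memory use shrinking by the square of the box ratio per image (and the cube for the reconstruction volumes).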

Best,
Kookjoo
