Global CTF runs out of memory

Dear Cryosparc community,

I keep getting out-of-memory errors during Global CTF refinement. Please see the error below:

I am using a workstation with 4 x NVIDIA RTX 2080 cards, each with 8 GB of memory. I am only using 84,000 particles and a box size of 480, and the machine has around 384 GB of system RAM.

Best,
Omid

Error:
------------------- Global CTF refinement iteration 1 of 1

Starting particle processing…

Processing particles from group 1 (1 of 1)

batch 1 of 85

Traceback (most recent call last):
File "cryosparc2_worker/cryosparc2_compute/run.py", line 78, in cryosparc2_compute.run.main
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_refinement/run.py", line 143, in cryosparc2_compute.jobs.ctf_refinement.run.run
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_refinement/run.py", line 163, in cryosparc2_compute.jobs.ctf_refinement.run.full_ctf_refine
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_refinement/run.py", line 411, in cryosparc2_compute.jobs.ctf_refinement.run.compute_phase_errors
File "cryosparc2_worker/cryosparc2_compute/engine/newengine.py", line 802, in cryosparc2_compute.engine.newengine.EngineThread.accumulate_anisomag
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 312, in cryosparc2_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File "/data/CRYOSPARC/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory

Hi Omid, yep, it needs a lot of GPU memory - I have a 2080 Ti with 11 GB, and it runs out of GPU memory at box sizes >= 512 px. It would be great to at least have the option of performing this on the CPU for particles with large box sizes.
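For a rough sense of the scaling (a back-of-the-envelope sketch, not cryoSPARC's actual allocation scheme - the padding factor, number of resident volumes, and batch size below are assumptions), the 3D Fourier-space arrays grow with the cube of the padded box size:

```python
# Back-of-the-envelope GPU memory estimate for a refinement-style job.
# All parameters are illustrative assumptions, not cryoSPARC internals:
# single-precision complex voxels (8 bytes), a zero-padding factor on the
# 3D volumes, a few volumes resident at once, and a batch of 2D particle
# transforms held on the GPU at the same time.
def estimate_gib(box, pad=1.5, n_volumes=2, batch=500):
    vol_bytes = (pad * box) ** 3 * 8   # one padded 3D Fourier volume
    img_bytes = box ** 2 * 8           # one particle's 2D transform
    total = n_volumes * vol_bytes + batch * img_bytes
    return total / 1024 ** 3

for box in (384, 480, 512, 600):
    print(f"box {box}: ~{estimate_gib(box):.1f} GiB + CUDA context/overhead")
```

Whatever the real constants are, the cubic term dominates: a 480 box needs roughly twice the volume memory of a 384 box, and a 600 box nearly four times, so an 8 GB card hits cuMemAlloc failures long before system RAM becomes a factor.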

You can also perform similar refinements in RELION 3.1 on the CPU and then import the results back into cryoSPARC, now that cryoSPARC can read beam tilt etc.
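A minimal sketch of that roundtrip, assuming pyem's csparc2star.py for the .cs-to-.star conversion and the starfile Python package to verify that the refined beam-tilt values actually landed in the output STAR file (file paths, job numbers, and the column list are illustrative placeholders; exact conversion arguments depend on your pyem and RELION versions):

```python
# Sanity-check a RELION 3.1 CTF-refined STAR file before importing it back
# into cryoSPARC. Assumed workflow (paths are placeholders):
#   1. export particles from cryoSPARC and convert the .cs outputs to .star
#      (e.g. with pyem's csparc2star.py; arguments vary between pyem versions)
#   2. run CTF refinement / beam-tilt estimation in RELION 3.1 on the CPU
#   3. confirm the refined optics values are present before re-importing
import starfile  # pip install starfile

blocks = starfile.read("CtfRefine/job010/particles_ctf_refine.star",
                       always_dict=True)
optics = blocks["optics"]  # RELION 3.1 stores beam tilt per optics group

for col in ("rlnBeamTiltX", "rlnBeamTiltY"):
    if col in optics.columns:
        print(col, optics[col].to_list())
    else:
        print(f"WARNING: {col} missing - beam tilt was not written")
```

If those columns are present, the .star file can go back in through cryoSPARC's Import Particle Stack job, which, as noted above, now picks up the beam-tilt values.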

Cheers
Oli

Hi Oliver,

Good to know… I’ll give that a go then. I am sure it will be optimized in future releases.

Best,
Omid

@Omid @olibclarke, thanks for reporting - we are working on optimizing the memory requirements for this job type!

Any updates on this? Or any workarounds? I'm experimenting with a box size of 600 and would like to avoid using the on-the-fly option.