Patch CTF Estimation: cuMemAlloc out of memory

Hello,
During CTF fitting I got an "out of memory" error. How can I fix this? The whole tutorial worked well with the T20S data, but for my own project I get this error.

[CPU: 515.7 MB] Traceback (most recent call last):
File “cryosparc2_compute/jobs/runcommon.py”, line 1685, in run_with_except_hook
run_old(*args, **kw)
File “/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py”, line 754, in run
self.__target(*self.__args, **self.__kwargs)
File “cryosparc2_compute/jobs/pipeline.py”, line 165, in thread_work
work = processor.process(item)
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 115, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/patchctf.py”, line 66, in cryosparc2_compute.jobs.ctf_estimation.patchctf.patchctf
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/patchctf.py”, line 116, in cryosparc2_compute.jobs.ctf_estimation.patchctf.patchctf
File “cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py”, line 312, in cryosparc2_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File “/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py”, line 210, in __init__
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory

[CPU: 167.8 MB] Outputting partial results now…

[CPU: 165.9 MB] Traceback (most recent call last):
File “cryosparc2_worker/cryosparc2_compute/run.py”, line 85, in cryosparc2_compute.run.main
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 258, in cryosparc2_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 89438 has terminated unexpectedly!

Hi @GYADAV,

I can help you sort this out. First, a few questions:

  • What model of GPU are you using?
  • Did you use cryoSPARC for motion correcting the movies?

Regards,
Harris from the cryoSPARC team

Thank you Harris,

  1. We are using RTX 2080 Ti.
  2. No, I am using micrographs corrected with MotionCor2. Should I correct them in cryoSPARC instead?

Thanks
Gaya

Hi Gaya,

Thanks. The 2080 Tis should have enough memory for this purpose. What’s the resolution of the imported micrographs? Also, could you double-check the pixel size, accelerating voltage, and exposure dose parameters from the import micrographs job that feeds into patch CTF? If those are wrong, patch CTF might use drastically more memory than it otherwise should.

It’s not necessary to use motion correction from within cryoSPARC, I was just asking to help me figure out where the issue might be.

Harris

Hello Harris,
The parameters are: pixel size 1.11 Å, accelerating voltage 300 kV, Cs 2.7 mm, exposure dose 40.
The number of CPUs is set to 4. Should I increase the CPUs? When I decreased the number of micrographs from 2000 to 300, the job ran without any error.

Thanks
Gaya

Hello Harris,
Another issue, about the multi-reference experiment: I cannot see it in the list of modules available in the Job Builder. I also changed the config file (export CRYOSPARC_EXPERIMENTAL=true) and restarted with cryosparcm, but it still does not show up in the list.
What should I do to make it available in the list?
Thanks
Gaya

Hi @GYADAV,
Multi-reference refinement was available in cryoSPARC v0; in v2 the relevant job type is Heterogeneous Refinement. Can you please confirm that’s the one you’re looking for?
It should be available in the Job Builder by default. Thanks!

Hi Gaya,

That’s strange - the number of micrographs shouldn’t affect the GPU memory usage. Could you repeat your test and make sure no other processes are running on the computer that might be eating up some GPU memory?
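To see whether anything else is holding GPU memory, you can query the NVIDIA driver utility. Here is a small sketch (it assumes `nvidia-smi` is on the worker's PATH, and returns None when it isn't):

```python
import shutil
import subprocess

def gpu_memory_report():
    """List per-GPU memory usage via nvidia-smi, or return None if the
    utility is not available on this machine."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()  # one line per GPU
```

Running `nvidia-smi` on its own also lists the PIDs of the processes currently holding GPU memory, which makes it easy to spot a competing job.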

Harris

Hello Punjani,
Yes, that is available, but it needs two or more reference volumes. I want to use one reference volume and classify the particles into multiple classes (similar to RELION and cisTEM) to sort out the best particles belonging to each class. Which job type should I use for this?

Thanks
Gaya

@GYADAV in cryoSPARC, heterogeneous refinement does exactly this - just connect the same starting volume multiple times to the heterogeneous refinement job inputs. The job will perform classification around this single reference and resolve multiple classes.
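For intuition, here is a toy 1-D analogue (a sketch only, not cryoSPARC's actual algorithm) of what happens when every class is seeded from the same reference: a tiny random jitter breaks the symmetry between the duplicated references, and alternating assign/average steps then pull the classes apart:

```python
import random

def toy_het_refine(points, n_classes=2, n_iter=5, reference=5.0, seed=0):
    """Toy 1-D analogue of heterogeneous refinement seeded with a single
    reference: every class starts from the SAME value, a small random jitter
    breaks the symmetry, and repeated assign/average steps separate the
    classes. This is an illustration, not cryoSPARC's implementation."""
    rng = random.Random(seed)
    # duplicate the single reference once per class, with a tiny jitter
    means = [reference + rng.uniform(-0.5, 0.5) for _ in range(n_classes)]
    for _ in range(n_iter):
        buckets = [[] for _ in range(n_classes)]
        for p in points:
            # assign each "particle" to its nearest class
            best = min(range(n_classes), key=lambda i: abs(p - means[i]))
            buckets[best].append(p)
        # update each class by averaging its members ("reconstruction")
        means = [sum(b) / len(b) if b else means[i]
                 for i, b in enumerate(buckets)]
    return means
```

With particle values clustered near 0 and near 10, the two classes converge to roughly 0 and 10 even though both start from the same reference at 5.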

Hello Punjani,
What is this error? It came up after 100 iterations of heterogeneous refinement:

[CPU: 3.07 GB] Done in 4.698s.

[CPU: 3.07 GB] Outputting files…

[CPU: 3.07 GB] Done in 0.842s.

[CPU: 3.07 GB] Done iteration 100 in 49.880s. Total time so far 1810.575s

[CPU: 3.07 GB] – Iteration 101

[CPU: 3.07 GB] Using Full Dataset (split 11606 in A, 11606 in B)

[CPU: 3.07 GB] Using Alignment Radius 55.983 (5.076A)

[CPU: 3.07 GB] Using Reconstruction Radius 89.000 (3.193A)

[CPU: 3.07 GB] Number of BnB iterations 4

[CPU: 3.07 GB] DEV 0 THR 1 NUM 500 TOTAL 4.0929095 ELAPSED 5.3657159 –

[CPU: 4.45 GB] Traceback (most recent call last):
File “cryosparc2_compute/jobs/runcommon.py”, line 1685, in run_with_except_hook
run_old(*args, **kw)
File “cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py”, line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File “cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py”, line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File “cryosparc2_worker/cryosparc2_compute/engine/engine.py”, line 991, in cryosparc2_compute.engine.engine.process.work
File “cryosparc2_worker/cryosparc2_compute/engine/engine.py”, line 109, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
File “cryosparc2_worker/cryosparc2_compute/engine/gfourier.py”, line 33, in cryosparc2_compute.engine.gfourier.fft2_on_gpu_inplace
File “/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py”, line 127, in __init__
onembed, ostride, odist, self.fft_type, self.batch)
File “/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py”, line 742, in cufftMakePlanMany
cufftCheckStatus(status)
File “/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py”, line 117, in cufftCheckStatus
raise e
cufftAllocFailed

Thanks
Gaya

Is it because some of the GPUs were being used by another program? Though I assigned cryoSPARC to use a specific GPU that was not used by any other program.

Thanks
Gaya

Hello All,
How can I save results in Manually Curate Exposures? I select the fit-resolution limits and save (top right corner), but when I hit Done it selects all the exposures.

Thanks
Gaya

Hi Gaya,

Is it because some of the GPUs were being used by another program?

That’s the most likely explanation, yes. Try running again while nothing else is running.

How can I save results in Manually Curate Exposures? I select the fit-resolution limits and save (top right corner), but when I hit Done it selects all the exposures.

After you’ve picked your fit res limits, click the checkmark next to “Selection” at the top above the plot. You should see the indices of the accepted micrographs turn green. From there, you can press “done” to complete the job.

– Harris