cufftAllocFailed during 2D classification

Hi,

After extracting particles in RELION and importing the particle stack into cryoSPARC, I tried 2D classification, but I get the following error message:

[CPU: 11.02 GB] Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1685, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 991, in cryosparc2_compute.engine.engine.process.work
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 109, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
  File "cryosparc2_worker/cryosparc2_compute/engine/gfourier.py", line 33, in cryosparc2_compute.engine.gfourier.fft2_on_gpu_inplace
  File "/srv/home/cryosparc/app/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 127, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/srv/home/cryosparc/app/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 742, in cufftMakePlanMany
    cufftCheckStatus(status)
  File "/srv/home/cryosparc/app/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 117, in cufftCheckStatus
    raise e
cufftAllocFailed
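
The failing call is the batched cuFFT plan allocation inside scikit-cuda, so a similar allocation can be tried by hand to check whether the GPU simply runs out of memory. This is only a rough sketch: it assumes pycuda and scikit-cuda can be imported from the worker's Python environment, and the batch size of 500 is a made-up placeholder, not the value cryoSPARC actually uses.

```python
# Hypothetical standalone check: allocate a batched 2D C2C FFT plan of the
# same image size as the job and see whether the plan creation itself fails.
# Assumptions (not from the logs): pycuda + scikit-cuda available, batch=500.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the default GPU
import pycuda.driver as cuda
from skcuda import fft as cu_fft

free, total = cuda.mem_get_info()
print("Free GPU memory before plan: %.2f GB of %.2f GB" % (free / 1e9, total / 1e9))

try:
    # 240 x 240 matches the reported data size; batch=500 is a guess
    plan = cu_fft.Plan((240, 240), np.complex64, np.complex64, batch=500)
    print("Plan allocated OK")
except Exception as e:
    print("Plan allocation failed: %s" % e)
```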

What can I do to avoid that problem?

Some of the parameters are listed below:
Number of classes: 100
Pixel size: 0.93A
Number of online-EM iterations: 40
Number of final full iterations: 2
Number of iterations to anneal sigma: 35
I also tried running the job on two GPUs in parallel (normally one is enough), but it still failed.

Computing 2D class averages:

[CPU: 5.30 GB] Volume Size: 128 (voxel size 1.74A)

[CPU: 5.30 GB] Zeropadded Volume Size: 256

[CPU: 5.30 GB] Data Size: 240 (pixel size 0.93A)

In any case, I previously ran a 2D classification with the same parameters and a similarly sized dataset; the only difference was the pixel size, which was 1.13A that time.
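
To put rough numbers on that difference, here is a back-of-envelope sketch; it assumes the physical box size was comparable between the two runs and that the images are held as complex64 on the GPU, neither of which is confirmed by the logs.

```python
# Back-of-envelope: how the smaller pixel size inflates the per-image FFT buffer.
# Assumptions (not from the logs): comparable physical box size in both runs,
# complex64 (8 bytes per element) storage on the GPU.
box_now = 240                                    # reported data size at 0.93 A/px
box_before = int(round(box_now * 0.93 / 1.13))   # ~198 px at 1.13 A/px

bytes_now = box_now ** 2 * 8
bytes_before = box_before ** 2 * 8

print("Per-image FFT buffer now:    %.0f KB" % (bytes_now / 1e3))
print("Per-image FFT buffer before: %.0f KB" % (bytes_before / 1e3))
print("Growth factor: %.2fx" % (float(bytes_now) / bytes_before))
```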

Thanks :slight_smile:

Hi @Ulvetanna, this indicates that you are running out of GPU memory during 2D classification. Please see the thread below regarding box size and memory requirements for 2D classification: