cufftAllocFailed in 2D classification

Hi all,

After particle extraction in cryoSPARC, I ran 2D classification, and it failed with the following error:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 991, in cryosparc2_compute.engine.engine.process.work
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 109, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
  File "cryosparc2_worker/cryosparc2_compute/engine/gfourier.py", line 33, in cryosparc2_compute.engine.gfourier.fft2_on_gpu_inplace
  File "/data/home/cryosparc_user/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 126, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/data/home/cryosparc_user/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 741, in cufftMakePlanMany
    cufftCheckStatus(status)
  File "/data/home/cryosparc_user/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 116, in cufftCheckStatus
    raise e
cufftAllocFailed

What can I do to fix this?

Thanks
Shasha

Hi @Shasha,

What was the box size of the particles, and the number of classes?

The error means that the GPU is running out of memory during 2D classification. This can happen if the box size is very large (unlikely) or if another process is using the GPU at the same time (e.g. RELION).

You can check for other processes using the command-line tool:

nvidia-smi
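
If it helps to check programmatically, here is a minimal sketch using pycuda (which is already present in the cryosparc2_worker environment) that prints free and total memory for each GPU. cufftAllocFailed generally means that the free amount at the moment the FFT plan is created is smaller than what the plan needs.

# Minimal sketch: report free/total memory per GPU with pycuda.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    ctx = dev.make_context()           # memory info is per-context
    free, total = cuda.mem_get_info()  # values in bytes
    ctx.pop()
    print("GPU %d (%s): %.1f / %.1f GB free" % (i, dev.name(), free / 1e9, total / 1e9))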

Hi @apunjani,

We encountered this error with a box size of 972 and 50 classes. How can we estimate the memory needed for 2D classification, so we can make more informed decisions about whether a given box size and number of classes will fit in GPU memory?

Hi @clil16,

In your case, the particles had a box size of 972? What was the pixel size?
2D classification actually downsamples the particles on the fly and runs the classification at a smaller box size. You can see the box size that was used in the streamlog:

[CPU: 979.9 MB]  Using 50 classes.
[CPU: 990.4 MB]  Computing 2D class averages: 
[CPU: 990.4 MB]    Volume Size: 256 (voxel size 2.47A)
[CPU: 990.4 MB]    Zeropadded Volume Size: 512
[CPU: 990.4 MB]    Data Size: 450 (pixel size 1.41A)
[CPU: 990.4 MB]    Using Reconstruction Resolution: 6.00A (105.0 radius)

The "Volume Size" line indicates this. You can set the "maximum resolution" parameter to change this box size. To make the processing even more memory efficient, you can run the Downsample Particles job first, since it's unlikely you need full-resolution images with a 972-pixel box at the 2D classification stage.
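
To make that concrete, here is a rough sketch of the relationship (my own approximation, not the exact internal logic): the smallest box whose Nyquist limit reaches the requested maximum resolution is computed from the original box and pixel size, then rounded up to an FFT-friendly size (a power of two in this sketch).

import math

def approx_internal_box(box, pixel_size_A, max_res_A):
    # Smallest box whose Nyquist limit (2x the new pixel size) reaches
    # max_res_A while keeping the same field of view (box * pixel_size_A).
    min_box = box * pixel_size_A / (max_res_A / 2.0)
    # Round up to an FFT-friendly size; next power of two is an assumption.
    return 2 ** int(math.ceil(math.log(min_box, 2)))

# Streamlog above: data size 450 at 1.41A, 6.00A reconstruction resolution
# -> minimum box ~212, rounded up to 256 (voxel size 450*1.41/256 ~ 2.47A).
print(approx_internal_box(450, 1.41, 6.0))  # 256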

Unfortunately, there isn't currently an easy way to determine the memory requirement for 2D classification ahead of time; we are working on optimizations in this regard.
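
That said, a very rough back-of-envelope estimate is possible, keeping in mind that the numbers below are guesses: the batch size is internal to the engine and the cuFFT workspace overhead varies by plan. The batched 2D FFT in the traceback works on zero-padded, complex single-precision images, so per-image memory grows with the square of the padded box size.

def approx_fft_gb(box, batch=100, pad_factor=2, bytes_per_value=8, workspace_factor=2.0):
    # batch and workspace_factor are guesses; complex64 = 8 bytes per value.
    padded = box * pad_factor                      # zero-padded box size
    per_image = padded * padded * bytes_per_value  # one complex64 image
    return batch * per_image * workspace_factor / 1e9

print("box 972: ~%.1f GB" % approx_fft_gb(972))  # ~6.0 GB with these guesses
print("box 450: ~%.1f GB" % approx_fft_gb(450))  # ~1.3 GB with these guesses

Even as an order-of-magnitude estimate, this shows why a 972-pixel box can exhaust a typical GPU once the class averages and other buffers are also resident.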

The pixel size was 0.98 Å. The user had exported the particle stacks from cisTEM and was able to continue processing in cryoSPARC once they downsampled the particles to a box size of 640.
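
For reference, assuming a straightforward Fourier crop, going from a 972 box at 0.98 Å/pixel to a 640 box scales the pixel size by the same ratio:

# Quick check of the pixel size and Nyquist limit after Fourier cropping.
old_box, new_box, old_pix = 972, 640, 0.98
new_pix = old_pix * old_box / new_box  # ~1.49 A/pixel
nyquist = 2 * new_pix                  # ~2.98 A
print("new pixel size: %.2f A, Nyquist: %.2f A" % (new_pix, nyquist))

so the downsampled stack still has a Nyquist limit well beyond the 6 Å reconstruction resolution shown in the streamlog above.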