cufftAllocFailed raised during local motion correction

Hi,

We have many cases where we run into memory issues. This one, for example, happens during Local Motion Correction:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 747, in run_with_except_hook
    run_old(*args, **kw)
  File "/gpfs/data/bhabhaekiertlabs/local_software/CryoSparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_local.py", line 186, in cryosparc2_compute.jobs.motioncorrection.run_local.run_local_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_local.py", line 198, in cryosparc2_compute.jobs.motioncorrection.run_local.run_local_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_local.py", line 199, in cryosparc2_compute.jobs.motioncorrection.run_local.run_local_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/motioncorrection.py", line 460, in cryosparc2_compute.jobs.motioncorrection.motioncorrection.motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/motioncorrection.py", line 475, in cryosparc2_compute.jobs.motioncorrection.motioncorrection.motion_correction
  File "/gpfs/data/bhabhaekiertlabs/local_software/CryoSparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 126, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/gpfs/data/bhabhaekiertlabs/local_software/CryoSparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 741, in cufftMakePlanMany
    cufftCheckStatus(status)
  File "/gpfs/data/bhabhaekiertlabs/local_software/CryoSparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 116, in cufftCheckStatus
    raise e
cufftAllocFailed
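
For reference, the exception appears to come from skcuda's wrapper around cufftMakePlanMany: when the workspace for a batched plan cannot be allocated on the GPU, the CUFFT_ALLOC_FAILED status is re-raised as the cufftAllocFailed seen above. A minimal standalone sketch of that failure path (not CryoSPARC code; the 500-pixel box and 800-particle batch are just illustrative values):

import numpy as np
import pycuda.autoinit  # initializes a CUDA context
from skcuda import fft as cu_fft
from skcuda import cufft

box = 500          # per-particle box size in pixels (illustrative)
n_particles = 800  # particles in one dense micrograph (illustrative)

try:
    # One 2D real-to-complex FFT per particle, batched into a single plan
    plan = cu_fft.Plan((box, box), np.float32, np.complex64, batch=n_particles)
except cufft.cufftAllocFailed:
    # The plan's workspace did not fit in GPU memory -- the same failure
    # path the traceback above goes through.
    print("cuFFT could not allocate workspace for %d FFTs of %d^2 pixels"
          % (n_particles, box))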

We used a 500-pixel box size, which should work (in the past, we processed a dataset with a 600-pixel box size). The only difference is that the images are denser (>800 particles per image in some cases).

We tried reducing the box size to 480 pixels and that worked, but decreasing the box size until it fits does not feel optimal. Is there a better solution than shrinking the box size?
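
As a very rough back-of-the-envelope check (the overhead factor below is purely a guess on my part, not a cuFFT guarantee), the workspace a batched 2D plan needs scales with box² times the number of particles, so a 500-pixel box over >800 particles can easily end up being a larger allocation than a 600-pixel box over a sparser image:

import pycuda.autoinit
import pycuda.driver as cuda

def fft_plan_estimate_bytes(box, n_particles, overhead_factor=3.0):
    # complex64 output = 8 bytes per element; overhead_factor is an assumption
    per_particle = box * box * 8
    return int(per_particle * n_particles * overhead_factor)

free_bytes, total_bytes = cuda.mem_get_info()
for box in (600, 500, 480):
    need = fft_plan_estimate_bytes(box, 800)
    print("box %d: ~%.1f GB estimated vs %.1f GB free on this GPU"
          % (box, need / 1e9, free_bytes / 1e9))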

Thanks,
Best,
Nicolas

Hi @ncoudray, thanks for pointing this out!
We think we’ve isolated the cause: it is indeed related to the number of particles per movie. We will be creating a fix for this.
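
In the meantime, the general idea behind such a fix is to bound how much memory any single cuFFT plan needs, for example by transforming the particles in smaller chunks rather than in one batch covering the whole micrograph. The sketch below only illustrates that idea (it is not the actual CryoSPARC change; the chunk size of 128 is arbitrary):

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from skcuda import fft as cu_fft

def fft_particles_in_chunks(stack, chunk=128):
    """FFT a (n_particles, box, box) float32 stack chunk by chunk."""
    n, box, _ = stack.shape
    out = np.empty((n, box, box // 2 + 1), dtype=np.complex64)
    for start in range(0, n, chunk):
        batch = stack[start:start + chunk]
        # The plan's batch dimension never exceeds `chunk`, so its
        # workspace stays bounded regardless of particles per movie.
        plan = cu_fft.Plan((box, box), np.float32, np.complex64,
                           batch=len(batch))
        d_in = gpuarray.to_gpu(np.ascontiguousarray(batch))
        d_out = gpuarray.empty((len(batch), box, box // 2 + 1), np.complex64)
        cu_fft.fft(d_in, d_out, plan)
        out[start:start + chunk] = d_out.get()
    return out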


Hi,

I think I am having the same issue with my data; I also have dense images with >800 particles per image. Lowering the box size helps, but I was wondering whether you have already figured this out.

Thanks,
Lior