How can I bin movies during motion correction?

My K3 compressed movies are in TIFF format (~500 MB each) with a pixel size of 0.56 Å.
The 3D result was not as good as I expected when I did motion correction separately outside of cryoSPARC.
So I am trying to bin the movies by a factor of two during motion correction inside cryoSPARC.
I couldn't find this option in 'Full-frame motion correction (multi)'. I also tried 'Patch motion correction (multi)', but it didn't work.
We have two NVIDIA GeForce RTX 2070 cards (8 GB of GPU memory each). 'nvidia-smi -l 1' shows that ~7.5 GB of memory is in use on each GPU.
My understanding is that full-frame motion correction is somewhat outdated and has memory issues with K3 movies.
Is there a way to bin these movies (0.56 Å pixel size, TIFF format) during motion correction inside cryoSPARC in my case?

Hi @Yifan,

Please try running Patch Motion Correction while using the Output F-crop factor parameter to bin your movies.
Also, we've had trouble with K3 movies on GPUs with 8 GB of memory; we've seen GPU memory usage go up to 9-10 GB.
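
For a rough sense of scale (back-of-envelope arithmetic only, not a description of how cryoSPARC actually allocates memory, and assuming the standard K3 super-resolution frame size of 11520 x 8184 pixels): even a handful of full-resolution float32 working buffers, before counting FFT plans and intermediate arrays, takes up a large fraction of an 8 GB card.

# Back-of-envelope only; not cryoSPARC's actual allocation scheme.
frame_px = 11520 * 8184                     # K3 super-resolution frame, in pixels (assumed)
bytes_per_px = 4                            # float32
frame_gb = frame_px * bytes_per_px / 1024**3
print(f"one float32 super-res frame: {frame_gb:.2f} GB")       # ~0.35 GB
print(f"ten such buffers:            {10 * frame_gb:.2f} GB")  # ~3.5 GB, before FFT plans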

Hi Sarulthasan,

Thanks. I tried your suggestions and it still didn't work. It gave this error:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 393, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/newgfourier.py", line 22, in cryosparc2_compute.engine.newgfourier.get_plan_R2C_2D
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 126, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 741, in cufftMakePlanMany
    cufftCheckStatus(status)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 116, in cufftCheckStatus
    raise e
cufftAllocFailed

I am waiting for our new workstation with four GPUs (11 GB each) and will try again.

Hello,
I have a similar problem. I have 2080 Ti cards with as much memory as I should need, but apparently it is not enough.
Same super-res movies, with F-crop set to 1/2:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/share/apps/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 101, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 312, in cryosparc2_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
  File "/share/apps/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory

Hi @stephan,

We tried Patch Motion Correction with the F-crop parameter set to 1/2 on a "CryoSPARC" workstation from Exxact, equipped with 4 x NVIDIA RTX 2080 Ti GPUs (and 192 GB of RAM), and we are still getting the same error:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 396, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 312, in cryosparc2_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 446, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 197, in cryosparc2_compute.engine.cuda_core.transfer_ndarray_to_cudaarray
MemoryError: cuArrayCreate failed: out of memory

Also, we tried the same job with the F-crop parameter set to 1/4. It gave the error below, then proceeded to motion-correct 4 out of the 5 movies, but never ran to completion and remained stuck in some kind of limbo:
Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 446, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 197, in cryosparc2_compute.engine.cuda_core.transfer_ndarray_to_cudaarray
MemoryError: cuArrayCreate failed: out of memory

Is there a workaround for this?

Regards,
Kavita

Hi @adesgeorges, @Kavita,

Can you report the number of frames in your movies, and the total exposure in electrons/Å²?

Can you try the following:

  1. Create a patch motion job with the K3 super-res movies as input.
  2. Set the F-crop factor to 1/2.
  3. Toggle the advanced parameters switch (top right of the job builder).
  4. Set the Max alignment resolution to 8 Å or 10 Å.
  5. Set Override knots X to 5, Override knots Y to 3, and Override knots Z to 3.

Please let us know if that works. The motion trajectory estimates will be worse with these settings, but the job will hopefully run. We are working on optimizing the memory usage of patch motion for K3 super-res data so that the standard parameters can be used with 11 GB of GPU RAM.
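
For reference, in recent CryoSPARC versions (v4+) the same job can also be set up from a script with the cryosparc-tools package. The sketch below is an assumption-heavy illustration: the project/workspace/job UIDs, lane name, and especially the parameter keys are guesses at the builder fields listed above, so confirm the real keys in your own job builder before relying on it.

from cryosparc.tools import CryoSPARC

# Connection details are placeholders; use your own instance's values.
cs = CryoSPARC(license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
               host="localhost", base_port=39000,
               email="user@example.com", password="password")

project = cs.find_project("P1")              # hypothetical project UID
job = project.create_job(
    "W1",                                    # hypothetical workspace UID
    "patch_motion_correction_multi",
    connections={"movies": ("J1", "imported_movies")},  # hypothetical import job
    params={
        # Assumed parameter keys -- verify against the job builder:
        "output_fcrop_factor": 0.5,          # Output F-crop factor = 1/2
        "res_max_align": 10.0,               # Max alignment resolution (Å)
        "override_K_X": 5,                   # Override knots X
        "override_K_Y": 3,                   # Override knots Y
        "override_K_Z": 3,                   # Override knots Z
    },
)
job.queue("default")                         # hypothetical lane name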

Is there any advantage to Fourier-cropping super-res data by 1/2?

Does the cropping happen before or after alignment? I imagine that if it happens before alignment, it might help motion correction due to the increased S/N.

I ask because I just re-aligned everything using patch motion and only now realized I could have cropped by 1/2. I normally DO crop by 1/2 when doing MotionCor2 alignment.

thanks!

IMO, there is a huge advantage to downsampling super-res data by 1/2: the images are one quarter the size, so every step of initial processing is roughly 4x faster. Typically a super-res pixel size is ~0.5 Å, so the binned pixel size of 1 Å gives a Nyquist limit of 2 Å. You can always re-correct if your resolution gets limited at 2 Å…

In MotionCor2 and RELION, the cropping happens after alignment; the idea is that alignment precision is best at super-resolution.

In patch motion (and all other motion correction), Fourier cropping is always applied after alignment. The full-resolution frames/patches are used for alignment, the full corrected image (after motion interpolation) is generated at the raw pixel size in memory, and then it is Fourier-cropped by the desired factor.
Definitely agree with @DanielAsarnow: unless you're aiming for very high resolution beyond the physical Nyquist limit, always F-crop super-res data by 1/2.
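
To make the order of operations concrete, here is a minimal numpy sketch of what Fourier cropping by a factor of 2 does to the already motion-corrected sum. It is an illustration of the idea, not cryoSPARC's implementation:

import numpy as np

def fourier_crop_by_2(img):
    """Downsample a 2D image by 2x by keeping only the central half of its
    Fourier spectrum (what an F-crop factor of 1/2 does to the corrected sum).
    Illustration only; in cryoSPARC this happens after motion correction."""
    ny, nx = img.shape                       # assume even dimensions for simplicity
    F = np.fft.fftshift(np.fft.fft2(img))    # centered spectrum
    cy, cx = ny // 2, nx // 2
    # keep the central ny/2 x nx/2 block of Fourier coefficients
    Fc = F[cy - ny // 4:cy + ny // 4, cx - nx // 4:cx + nx // 4]
    out = np.fft.ifft2(np.fft.ifftshift(Fc)).real
    return out / 4.0                         # renormalize so the mean matches the original

# With the original poster's 0.56 Å super-res pixel, an F-crop factor of 1/2
# gives a 1.12 Å pixel and a post-crop Nyquist limit of 2 * 1.12 = 2.24 Å.
img = np.random.rand(512, 512).astype(np.float32)
small = fourier_crop_by_2(img)
print(img.shape, "->", small.shape)          # (512, 512) -> (256, 256)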

For everyone experiencing these memory allocation failures, a patch is now available that addresses this problem. After you have updated to CryoSPARC v2.15, please see the following section of the new CryoSPARC guide for how to apply the patch (the steps differ depending on whether your setup is a single node or multiple nodes):
https://guide.cryosparc.com/setup-configuration-and-management/software-updates#apply-patches

After the patch is applied, these jobs should complete successfully. You'll notice a new switch in the job builder for patch motion correction, called "Reduce GPU memory usage". It is on by default, but we consider it a beta feature, so if you have any trouble with it you can turn it off to revert to the previous behaviour.

My patch motion correction job for 4300 K3 movies at 0.5 Å, run with Output F-crop factor 1/2, ran without problems. But it seems as if the new 1 Å pixel size is not passed on to Patch CTF and subsequent jobs. Has anyone experienced this? How does one check that the new pixel size is 1 Å instead of 0.5 Å?

Thanks

Hi @Ricky,

Unfortunately there's no easy way to check this, but the pixel size that's passed on from patch motion correction jobs does get adjusted internally to account for F-crop. Could you describe what you're experiencing? It may be that something else is going on that makes it appear the pixel size is being passed along incorrectly. Alternatively, you could try rebuilding both the motion correction and CTF jobs to see if the problem persists.
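
If you want to poke at the metadata directly, one option is the sketch below. It assumes the exposures .cs file written by the motion correction job loads as a numpy structured array (the .cs files I have looked at are in numpy's .npy format) and that the pixel-size field is named "micrograph_blob/psize_A"; the path and field name are assumptions, so check the field names printed for your own file.

import numpy as np

# Hypothetical path -- substitute the actual exposures .cs file written by
# your Patch Motion Correction job inside the project directory.
cs_path = "P1/J2/J2_motioncorrected_exposures.cs"

data = np.load(cs_path)          # assumed: .cs files load as numpy structured arrays
print(data.dtype.names)          # list every field present in the dataset
# Assumed field name for the corrected micrograph pixel size:
field = "micrograph_blob/psize_A"
if field in data.dtype.names:
    # After F-crop 1/2 this should read ~1.0 Å rather than the raw 0.5 Å.
    print("pixel size (Å):", data[field][:5])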

Hi,
I found this old conversation and it's really helpful for me.
I used F-crop 1/2, an alignment resolution of 3 Å, and the knot overrides set to 5, 3, 3. However, the 2D classes look misaligned (apparently; I'm not sure why). I tried to redo it without the overrides (X, Y, Z left blank), but patch motion correction did not work.
I was just wondering: do the override values have to be 5, 3, 3, or is it possible to fiddle with these values?
Thanks
Saif