How can I bin movies during motion correction?

#1

My K3 compressed movies are in TIFF format with a pixel size of 0.56 Å (each ~500 MB).
The 3D result was not as good as I expected when I did the motion correction separately, outside of cryoSPARC.
So I am trying to bin the movies by a factor of 2 while they are being motion-corrected inside cryoSPARC.
I couldn't find this option in 'Full-frame motion correction (multi)'. I also tried 'Patch motion correction (multi)', but it didn't work.
We have two NVIDIA GeForce RTX 2070 cards (8 GB of GPU memory each). 'nvidia-smi -l 1' shows that ~7.5 GB of memory is in use on each GPU.
My understanding is that full-frame motion correction is somewhat outdated and has memory issues with K3 movies.
Is there a way to bin these movies (0.56 Å, TIFF format) during motion correction inside cryoSPARC in my case?


#2

Hi @Yifan,

Please try running Patch Motion Correction while using the Output F-crop factor parameter to bin your movies.
Also, we've had trouble with K3 movies on GPUs with 8 GB of memory; we've seen GPU memory usage go up to 9-10 GB.
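
For intuition, F-crop is binning done in Fourier space: the transform of each frame is cropped to its central (low-frequency) block, which halves the box size and doubles the pixel size (0.56 Å becomes 1.12 Å with an F-crop factor of 1/2). Below is a minimal NumPy sketch of the idea, not cryoSPARC's actual implementation:

```python
import numpy as np

def fourier_crop(img, factor=2):
    """Downsample a 2D image by Fourier cropping: keep only the central
    low-frequency block of the FFT, then transform back. Conceptually,
    this is what an 'Output F-crop factor' of 1/factor does."""
    ny, nx = img.shape
    cy, cx = ny // factor, nx // factor
    F = np.fft.fftshift(np.fft.fft2(img))
    y0, x0 = (ny - cy) // 2, (nx - cx) // 2
    F_cropped = F[y0:y0 + cy, x0:x0 + cx]
    # Rescale so the mean intensity of the output matches the input
    return np.fft.ifft2(np.fft.ifftshift(F_cropped)).real / factor**2
```

Unlike real-space 2x2 binning, Fourier cropping acts as a sharp low-pass filter, so the downsampled micrograph is free of aliasing, which is why it is generally preferred for downsampling super-resolution data.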


#3

Hi @sarulthasan,

Thanks. I tested your suggestion, but it still didn't work. It gave this error:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 393, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/newgfourier.py", line 22, in cryosparc2_compute.engine.newgfourier.get_plan_R2C_2D
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 126, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 741, in cufftMakePlanMany
    cufftCheckStatus(status)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 116, in cufftCheckStatus
    raise e
cufftAllocFailed

I am waiting for our new workstation with four GPUs (11 GB each) and will try again.


#4

Hello,
I have a similar problem. I have RTX 2080 Ti cards, which should have as much memory as I need, but apparently it isn't enough.
Same super-resolution movies, with F-crop set to 1/2:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/share/apps/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 101, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 312, in cryosparc2_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
  File "/share/apps/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory


#5

Hi @sarulthasan,

We tried Patch Motion Correction with the F-crop parameter set to 1/2 on a "Cryosparc" workstation from Exxact equipped with 4 x NVIDIA RTX 2080 Ti GPUs (and 192 GB of system RAM), and we are still getting the same error:

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 396, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 312, in cryosparc2_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 446, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 197, in cryosparc2_compute.engine.cuda_core.transfer_ndarray_to_cudaarray
MemoryError: cuArrayCreate failed: out of memory

(The identical cuArrayCreate traceback was printed two more times.)

Also, we tried the same job with the F-crop parameter set to 1/4. It gave the error below, then proceeded to motion-correct 4 of the 5 movies, but never ran to completion and remained stuck in some kind of limbo:
Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1481, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "cryosparc2_compute/jobs/pipeline.py", line 53, in stage_target
    work = processor.process(item)
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 146, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 149, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 150, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 77, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/jobs/motioncorrection/patchmotion.py", line 446, in cryosparc2_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 197, in cryosparc2_compute.engine.cuda_core.transfer_ndarray_to_cudaarray
MemoryError: cuArrayCreate failed: out of memory

Is there a workaround for this?

Regards,
Kavita


#6

Hi @adesgeorges, @Kavita,

Can you report the number of frames in your movies, and the total exposure in electrons/Å²?
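
If it helps, the frame count can be read directly from the TIFF headers; here is a quick Python sketch, assuming the tifffile package is installed, with a hypothetical path and per-frame dose:

```python
import tifffile

movie = "path/to/your_movie.tif"   # hypothetical example path
dose_per_frame = 1.0               # e-/A^2 per frame, from your collection settings

# Each TIFF page corresponds to one movie frame
with tifffile.TiffFile(movie) as tf:
    n_frames = len(tf.pages)

print(f"{n_frames} frames, ~{n_frames * dose_per_frame:.1f} e-/A^2 total exposure")
```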

Can you try the following:

  1. Create a Patch Motion Correction job with the K3 super-resolution movies as input
  2. Set the F-crop factor to 1/2
  3. Toggle the advanced parameters switch (top right of the job builder)
  4. Set the Max alignment resolution to 8 Å or 10 Å
  5. Set Override knots X to 5, Override knots Y to 3, and Override knots Z to 3

Please let us know if that works. The motion trajectory estimates will be somewhat worse with these settings, but the job will hopefully run. We are working on optimizing the memory usage of patch motion for K3 super-resolution data so that the standard parameters can be used with 11 GB of GPU RAM.
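
For a rough sense of why memory runs out even on 11 GB cards: just holding the raw super-resolution frame stack in float32 is already large, before FFT plans and intermediate buffers are counted. A back-of-envelope sketch (the 11520 x 8184 super-resolution frame size and 40-frame count are assumptions; use the values from your own movie headers):

```python
# Rough GPU memory estimate for a K3 super-resolution movie in float32.
nx, ny = 11520, 8184      # assumed super-res frame size; check your headers
n_frames = 40             # example frame count; use your own
bytes_per_px = 4          # float32

frame_gb = nx * ny * bytes_per_px / 1024**3
print(f"one frame : {frame_gb:.2f} GB")              # ~0.35 GB
print(f"full stack: {frame_gb * n_frames:.1f} GB")   # ~14 GB for 40 frames
```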