cuMemAlloc failed: out of memory

Hello,

We are using CUDA 11.3 on CentOS 7.9 with an NVIDIA Quadro P1000 and cryoSPARC v3.2.0. When we tried to follow the introductory tutorial [https://guide.cryosparc.com/processing-data/cryo-em-data-processing-in-cryosparc-introductory-tutorial], we got the following error.

[CPU: 206.8 MB]  Error occurred while processing J2/imported/14sep05c_00024sq_00003hl_00002es.frames.tif
Traceback (most recent call last):
  File "/software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 59, in exec
    return self.process(item)
  File "cryosparc_worker/cryosparc_compute/jobs/motioncorrection/run_patch.py", line 190, in cryosparc_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/motioncorrection/run_patch.py", line 193, in cryosparc_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/motioncorrection/run_patch.py", line 195, in cryosparc_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.motionworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/motioncorrection/patchmotion.py", line 255, in cryosparc_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc_worker/cryosparc_compute/jobs/motioncorrection/patchmotion.py", line 598, in cryosparc_compute.jobs.motioncorrection.patchmotion.unbend_motion_correction
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 353, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
  File "/software/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory

Marking J2/imported/14sep05c_00024sq_00003hl_00002es.frames.tif as incomplete and continuing...

Is the problem insufficient memory on our GPU (4 GB)? See the nvidia-smi output below.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA Quadro P...  On   | 00000000:07:00.0 Off |                  N/A |
| 34%   30C    P8    N/A /  N/A |    268MiB /  4037MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3446      G   /usr/bin/X                        208MiB |
|    0   N/A  N/A      9470      G   /usr/bin/gnome-shell               35MiB |
|    0   N/A  N/A     10183      G   ...AAAAAAAAA= --shared-files       19MiB |
+-----------------------------------------------------------------------------+

The computer has 32 GB of RAM and 12 vCPUs.

Can you help us, please?

Best,
Michal

Hi @debuggercz,

Unfortunately, Quadro P1000 cards do not have enough memory to do much work in cryoSPARC. The minimum VRAM required to process typical datasets is 11 GB, whereas the P1000 has only 4 GB.
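For a rough sense of scale, here is a back-of-envelope sketch of why patch motion correction blows past 4 GB. The frame dimensions and frame count below are assumptions (K2 super-resolution movies like those in the tutorial dataset); cryoSPARC's actual buffer sizes will differ, and FFT workspace adds more on top.

```python
# Rough estimate of GPU memory needed just to hold one raw movie in
# float32 during motion correction. Dimensions are ASSUMED values for
# a K2 super-resolution movie (not taken from cryoSPARC internals).
width, height = 7676, 7420   # pixels per frame (assumed)
n_frames = 38                # frames per movie (assumed)
bytes_per_pixel = 4          # float32

movie_bytes = width * height * n_frames * bytes_per_pixel
movie_gib = movie_bytes / 2**30
print(f"Raw movie alone: {movie_gib:.1f} GiB")  # prints "Raw movie alone: 8.1 GiB"
```

Even before any FFT or alignment buffers are allocated, the raw frames alone exceed the P1000's 4 GB, which is consistent with the cuMemAlloc failure you saw.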

Harris

Hi @hsnyder,

Thank you for the answer. We suspected the GPU memory probably wouldn't be enough.

Michal