Hi, we had homogeneous refinement failing in CryoSPARC v3.1 with:
[CPU: 15.20 GB] Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/refine/newrun.py", line 447, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
File "cryosparc_worker/cryosparc_compute/jobs/refine/newrun.py", line 448, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
File "cryosparc_worker/cryosparc_compute/jobs/ctf_refinement/run_local.py", line 203, in cryosparc_compute.jobs.ctf_refinement.run_local.full_defocus_refine
File "cryosparc_worker/cryosparc_compute/engine/newengine.py", line 312, in cryosparc_compute.engine.newengine.EngineThread.load_models_rspace
File "cryosparc_worker/cryosparc_compute/engine/newgfourier.py", line 152, in cryosparc_compute.engine.newgfourier.rfft3_on_gpu_inplace
File "cryosparc_worker/cryosparc_compute/engine/newgfourier.py", line 71, in cryosparc_compute.engine.newgfourier.get_plan_R2C_3D
File "/home/tomo/Software/cryosparc/cryosparc_worker/cryosparc_compute/skcuda_internal/fft.py", line 134, in __init__
onembed, ostride, odist, self.fft_type, self.batch)
File "/home/tomo/Software/cryosparc/cryosparc_worker/cryosparc_compute/skcuda_internal/cufft.py", line 749, in cufftMakePlanMany
cufftCheckStatus(status)
File "/home/tomo/Software/cryosparc/cryosparc_worker/cryosparc_compute/skcuda_internal/cufft.py", line 124, in cufftCheckStatus
raise e
cryosparc_compute.skcuda_internal.cufft.cufftAllocFailed
So we used non-uniform refinement instead, which worked very well.
We’ve recently updated CryoSPARC to v3.3 (with the patch applied) and tried homogeneous refinement again, but it failed again, this time with:
[CPU: 11.09 GB] Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 85, in cryosparc_compute.run.main
File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 467, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 468, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
File "cryosparc_worker/cryosparc_compute/jobs/ctf_refinement/run_local.py", line 215, in cryosparc_compute.jobs.ctf_refinement.run_local.full_defocus_refine
File "cryosparc_worker/cryosparc_compute/engine/newengine.py", line 313, in cryosparc_compute.engine.newengine.EngineThread.load_models_rspace
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 353, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File "/home/tomo/Software/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory
We restarted CryoSPARC as suggested in another topic, but that did not fix it.
The GPU was used in previous iterations without problems.
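For context, both tracebacks fail while allocating GPU memory for the 3D real-to-complex FFT of the refinement volume (`cufftAllocFailed` and `cuMemAlloc failed: out of memory`). A rough back-of-the-envelope estimate of what one such FFT needs, just for its input and output arrays (a sketch only; cuFFT's scratch work area and the engine's other resident buffers come on top of this, and the exact sizes depend on the plan):

```python
def rfft3_memory_gb(n):
    """Rough lower bound on GPU memory for an R2C 3D FFT of an n^3
    float32 volume: real input plus complex64 half-spectrum output.
    cuFFT's internal work area is typically of the same order again."""
    real_bytes = n ** 3 * 4                   # float32 input volume
    complex_bytes = n * n * (n // 2 + 1) * 8  # complex64 half-spectrum
    return (real_bytes + complex_bytes) / 1e9

# e.g. a 512-pixel refinement box needs ~1.1 GB for these two arrays
# alone, before cuFFT scratch space and the rest of the engine's state
print(round(rfft3_memory_gb(512), 2))
```

So with a large box size, the allocation at this step can push a card over the edge even though earlier iterations ran fine.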
Any ideas on how to solve this?