Out of memory error when increasing the number of classes during 2D Classification

I have a set of 900k particles, extracted at 352 px and downscaled to 196 px, with a pixel size of 0.85 Å/px.
My 2D classification fails if I increase the number of classes: 300 still works, 400 fails. Downscaling further to 48 px doesn't solve the problem.
The micrographs are located on a network-mounted server.
Any idea what is causing the error and how to resolve it?

Traceback (most recent call last):
  File "/home/phz-17br01/Programme/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2118, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_master/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 1082, in cryosparc_compute.engine.engine.process.work
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 306, in cryosparc_compute.engine.engine.EngineThread.compute_resid_pow
  File "cryosparc_master/cryosparc_compute/engine/cuda_core.py", line 362, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
  File "/home/phz-17br01/Programme/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory

@DarioSB This computation requires more memory than the current GPU can provide. The buffers allocated at this stage grow with the number of classes, which is why the 400-class run fails where 300 still fits. Rather than reducing the box size further, you may want to try running on a GPU with more VRAM, if possible.
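
For intuition, here is a back-of-envelope scaling estimate. The buffer shape and batch size below are assumptions chosen for illustration, not CryoSPARC's actual internals; the point is only that any array with a class dimension needs roughly a third more memory at 400 classes than at 300:

# Illustrative arithmetic only -- the batch size and buffer shape are
# hypothetical, not CryoSPARC's real allocation scheme.
box = 196      # box size after downscaling, in pixels
batch = 200    # hypothetical particles processed per GPU batch
bytes_per = 4  # float32

for classes in (300, 400):
    gib = batch * classes * box * box * bytes_per / 2**30
    print(f"{classes} classes -> ~{gib:.1f} GiB for one "
          f"(batch, classes, box, box) float32 buffer")
# Any class-indexed buffer at 400 classes needs ~1.33x the memory of
# the same buffer at 300 classes.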

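You can also check how much device memory is actually free when the job launches. A minimal sketch using pycuda, the same library that raised the cuMemAlloc error above (nvidia-smi reports the same numbers from the shell):

# Query free vs. total memory on the default GPU with pycuda.
import pycuda.autoinit  # noqa: F401  (creates a context on GPU 0)
import pycuda.driver as cuda

free, total = cuda.mem_get_info()  # both values are in bytes
print(f"GPU 0: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

If another process (a second job, a desktop session) is holding VRAM, freeing it may be enough to let the 400-class run fit.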