Memory optimization tricks for 2D classification of 900 px particles

Hi everyone,

What are the community's tips for memory optimization with large particles? Say, 700-900 pixels (yes, this is the actual particle size).
I have 4x 2080 Ti GPUs (12 GB each) and 256 GB RAM, running CryoSPARC 3.2 with the most recent update.

What I’ve tried:

  • reducing the number of particles (200k --> 10k)
  • reducing number of classes (50 --> 10)
  • reducing “batches per class” parameter (100 --> 25)
  • reducing “Zero pad factor” (default --> 1.0)
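For intuition on why the zero-pad factor matters here: the cuFFT plan in the traceback has to hold the padded complex images on the GPU, and the padded box scales quadratically. A back-of-the-envelope sketch (the batch size of 500 and the byte accounting are illustrative assumptions, not CryoSPARC's actual internals):

```python
def fft_batch_mem_gb(box_px, zero_pad=2.0, batch=500, bytes_per_val=8):
    """Rough lower bound on GPU memory for a batch of padded 2D FFTs.

    Illustrative only: counts just the padded complex64 arrays
    (8 bytes per value); the real cuFFT plan adds workspace on top.
    """
    n = int(box_px * zero_pad)          # padded box side in pixels
    per_image = n * n * bytes_per_val   # one padded complex image
    return batch * per_image / 1024**3

# 900 px particles: default padding vs. zero_pad = 1.0
print(round(fft_batch_mem_gb(900, zero_pad=2.0), 1))  # ~12.1 GB
print(round(fft_batch_mem_gb(900, zero_pad=1.0), 1))  # ~3.0 GB
```

With these (assumed) numbers, the default padding alone already exceeds a 2080 Ti's VRAM, which is consistent with reducing the zero-pad factor helping.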

I’m still getting this:

[CPU: 16.44 GB]  Traceback (most recent call last):
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/", line 1790, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_worker/cryosparc_compute/engine/", line 131, in
  File "cryosparc_worker/cryosparc_compute/engine/", line 132, in
  File "cryosparc_worker/cryosparc_compute/engine/", line 1027, in
  File "cryosparc_worker/cryosparc_compute/engine/", line 106, in cryosparc_compute.engine.engine.EngineThread.load_image_data_gpu
  File "cryosparc_worker/cryosparc_compute/engine/", line 32, in cryosparc_compute.engine.gfourier.fft2_on_gpu_inplace
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/skcuda_internal/", line 134, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/skcuda_internal/", line 749, in cufftMakePlanMany
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/skcuda_internal/", line 124, in cufftCheckStatus
    raise e

as output, regardless of whether I use 1 or 4 GPUs. The difference is that with the full 200k-particle set and default parameters I get the error immediately, while with the parameters above it appears after some number of iterations (19, to be precise).

And if, for example, the answer is “reduce the number of particles further (to 1k, for example)” – how do I manage that within CryoSPARC? As it stands, I imagine I’d have to run 200 separate jobs manually, which seems too extreme.
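(For reference, the splitting itself is trivial – something like CryoSPARC's particle set tools should do it without 200 manual jobs. A minimal numpy sketch of what "split into random 10k subsets" amounts to, purely illustrative:)

```python
import numpy as np

# Hypothetical illustration: shuffle 200k particle indices and split
# them into equal random subsets, the way a "split" job would.
n_particles = 200_000
subset_size = 10_000

rng = np.random.default_rng(seed=0)
order = rng.permutation(n_particles)                      # shuffled indices
subsets = np.array_split(order, n_particles // subset_size)

print(len(subsets), len(subsets[0]))  # → 20 10000
```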

Have you tried downsampling the particles?

I actually have not. And it helps – after downsampling to 512 px, everything works fine.

However, I don’t know how to get back to the non-downsampled particles at a later step, e.g. ab initio reconstruction :thinking:

UPD: doing “particle set intersect” with particle_A as the larger-pixel-size particle set helps. Thanks :slight_smile:

He has another response somewhere on the forum with screenshots, but I forget where. Essentially, you use the ‘blobs’ from the original particles to replace the ‘blobs’ from your classified particles by dragging and dropping.
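To illustrate the idea behind the intersect (a toy numpy sketch with made-up field names, not CryoSPARC's actual data model): match the two sets on particle UID, keep the selection from the classified set, but take the blob metadata from the original, unbinned set.

```python
import numpy as np

# Original (unbinned) particles: UID plus blob path (fields are made up).
original = np.array(
    [(1, 'full/1.mrc'), (2, 'full/2.mrc'), (3, 'full/3.mrc')],
    dtype=[('uid', 'i8'), ('blob_path', 'U32')])

# Particles that survived 2D classification on the binned stack.
classified = np.array([(3,), (1,)], dtype=[('uid', 'i8')])

# Keep only original entries whose UID appears in the classified set:
# the selection comes from classification, the blobs stay unbinned.
keep = np.isin(original['uid'], classified['uid'])
result = original[keep]
print(result['blob_path'])  # → ['full/1.mrc' 'full/3.mrc']
```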


Unbinning particles using 'Downsample particles' job in CRYOSPARC (this is the right link) – if you are in the initial stages of 2D classification for cleaning, I bet you can downsample even further to gain speed. More experienced people might want to correct me, no problem with that, but IMO a pixel size of ~3 angstroms (maybe even bigger?) should still allow reliable classification. No?


@carlos @kpahil thanks, that indeed works well.
Although I don’t know what the optimal pixel size is – I bet it depends on the level of detail we want to distinguish between classes, but I don’t know the exact dependence.
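The dependence is essentially Nyquist: a pixel size of a Å/px can represent detail down to 2a Å at best, so the target pixel size follows from the resolution you need to tell classes apart. A quick worked example, assuming a hypothetical original pixel size of 1.1 Å/px for the 900 px box (your actual pixel size will differ):

```python
# Downsampling a 900 px box keeps the physical extent fixed, so the
# pixel size scales as orig_box / new_box, and the best resolvable
# detail (Nyquist) is 2 * pixel_size.
orig_box, orig_apix = 900, 1.1  # assumed original pixel size, A/px

for new_box in (512, 330):
    new_apix = orig_apix * orig_box / new_box
    print(new_box, round(new_apix, 2), round(2 * new_apix, 2))
# → 512 1.93 3.87
# → 330 3.0 6.0
```

So even at ~3 Å/px the images still carry detail down to ~6 Å, which is plenty for telling junk from particles in early cleaning rounds.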