Memory load for Local CTF Refinement?

Hi CryoSPARC team,

Another one, sorry! :blush:

Just had local CTF refinement crash with a curious error:

[CPU: 46.50 GB Avail: 623.80 GB] Starting Local CTF Refinement =====================
[CPU: 46.50 GB Avail: 623.80 GB] --------------------------------------
[CPU: 46.50 GB Avail: 623.80 GB] Fitting parameters..
[CPU: 46.50 GB Avail: 623.80 GB] Fit min resolution: 20.000A
[CPU: 46.50 GB Avail: 623.80 GB] Fit max resolution: 6.676A
[CPU: 46.51 GB Avail: 623.79 GB] GPU batch size auto-fit is 1000
[CPU: 46.51 GB Avail: 623.79 GB] Starting particle processing for split A..
[CPU: 51.59 GB Avail: 618.81 GB] batch 1 of 21
[CPU: 16.01 GB Avail: 654.94 GB] Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/", line 95, in
  File "cryosparc_master/cryosparc_compute/jobs/ctf_refinement/", line 153, in
  File "cryosparc_master/cryosparc_compute/jobs/ctf_refinement/", line 186, in
  File "cryosparc_master/cryosparc_compute/jobs/ctf_refinement/", line 356, in
  File "cryosparc_master/cryosparc_compute/engine/", line 1138, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.project_model
AssertionError: more than 6GB of slices

Never seen this one before. System has 1.5TB of RAM, 2x A6000 GPUs (48GB VRAM).

Because I have to downsample to fit within the current box size limit, 6.15 Å is the highest resolution I can reconstruct this to (the same data reaches ~4 Å in RELION). GPU memory usage was ~15 GB just before the crash.

Hey @rbs_sci – no worries, thanks for reporting! Could you give us a bit more info here:

  • What was the box size used?
  • Were there any custom parameters set in the job?


Hi @vperetroukhin,

Thanks for responding.

Box size: 1,100 pixels

Custom parameters:
Defocus search range: 3000 (the particle is 2500 Å across, so a range of 2000 gives some extremely odd estimated defoci)
Account for EWS curvature: Enabled

Without EWS correction, refinement stalls at about 7.5–8 Å. Once another job finishes, I’ll rerun this job with EWS disabled and report back.

Hey @rbs_sci –

After some internal testing, we have one potential solution: can you try running the same job but set the parameter GPU batch size of images to 10 and see if the job finishes?
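For context, a quick back-of-envelope suggests why a 1,100-pixel box with the auto-fit batch of 1000 would trip the 6 GB slices assertion, and why a batch of 10 would not. This sketch assumes each projected model slice is stored as complex64 (8 bytes per pixel); the actual internal layout in CryoSPARC is an assumption here, not confirmed by the thread.

```python
# Back-of-envelope memory estimate for projected slices.
# Assumption: one complex64 slice per particle in the batch,
# at the full box size (1100 px, from the thread above).
box = 1100            # box size in pixels
bytes_per_px = 8      # complex64 = 8 bytes/pixel (assumed slice dtype)

def slices_gb(batch_size):
    """Approximate memory for a batch of projected slices, in GiB."""
    return box * box * batch_size * bytes_per_px / 1024**3

print(f"batch 1000: {slices_gb(1000):.1f} GiB")  # ~9.0 GiB, over the 6 GB limit
print(f"batch 10:   {slices_gb(10):.2f} GiB")    # ~0.09 GiB, comfortably under
```

Under these assumptions, the auto-fit batch of 1000 lands near 9 GiB of slices, which would explain the assertion firing, and a batch of 10 stays far below the limit.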


Hi @vperetroukhin, I’ll check as soon as the current job has finished. :slight_smile: