No way to reset Cs?

Hi,

When attempting to refine a set of particles merged from multiple refinements that included on the fly global CTF refinements, I ran into the appended error.

Basically, cryoSPARC doesn’t know how to deal with a situation where multiple particles in the same optics group have slightly different Cs values, so the refinement crashes once it reaches the stage where global CTF parameters are refined.

Would it be possible to either deal with this more gracefully (perhaps checking and if necessary resetting CTF params at load time?) or allow us to reset Cs within cryoSPARC?

In the standalone Global CTF job, there are options to reset tilt, trefoil and tetrafoil, but no option to reset Cs (and in any case if I input this set of particles it crashes before it does anything).

Cheers
Oli

Hi Oli,

Perhaps you can write a Python script (cryosparcm) to change the ctf/cs_mm field in your .cs files.
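
Something along these lines might work (an untested sketch: it assumes the particle .cs file is the usual NumPy record array that cryoSPARC writes, and the file names are just placeholders):

import numpy as np

# Placeholder paths: substitute the particle .cs file from your job.
src = "particles.cs"
dst = "particles_cs_reset.cs"

data = np.load(src)  # .cs files are NumPy structured arrays
print("unique Cs values:", np.unique(data["ctf/cs_mm"]))

data["ctf/cs_mm"] = 2.7  # reset Cs to the nominal value, in mm

# Save through a file handle so NumPy does not append a .npy extension.
with open(dst, "wb") as f:
    np.save(f, data)

The edited file would then still need to be hooked back up to your project (for example via a low-level input swap or a re-import), so a built-in reset would of course be cleaner.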

Best wishes,
Reza

Bumping this: it would be good, for consistency, to have a way to reset Cs in Global CTF (as there is for all of the other higher-order parameters). This would be useful in cases where Cs has refined to a physically unreasonable value.

Hi all,

In CryoSPARC v4.1, a parameter was added to Global CTF Refinement that allows resetting the spherical aberration to any custom value. (The reset value is the same for all exposure groups, so if you need to reset the spherical aberration of each exposure group to an independent value, you will have to run separate CTF Refinement jobs.)

Best,
Michael

For jobs that are toggled to refine spherical aberration, could it also be standard (or enabled with a toggle) to reset all particles to a constant value at the onset of the job? Refinements of particles combined from different sources frequently fail at the last stages because the particles carry previously refined Cs values. This is frustrating and non-obvious.

Hi @CryoEM2,

Just to understand exactly what’s causing the job to fail: when these CTF Refinement jobs fail, is it with an error message of the form “Field ctf/cs_mm is not constant in group X with Y items”, or does the failure happen somewhere else with a different error message?

Michael

It’s actually the non-uniform refinement job type with higher-order aberration correction, when the particle inputs come from distinct non-uniform refinements that also had higher-order aberration correction. And yes, I think it is that message. Effectively, correcting for Cs in any job makes those particles an independent stack that cannot be combined with others, or else another refinement of Cs will fail. I know the workaround; I’m just wondering whether it would be possible for the job type(s) to recognize the multiple Cs values (which they already do) and then 1) error or warn at the beginning, 2) set them back to 2.7 and refine from there, or 3) refine them from independent starting points and treat them as separate groups. Thanks!

Ah, I see. Yes, this is a good point; we’ll make a note to check exposure group values for constancy across particles earlier in the refinement jobs so that they don’t fail later toward the end.

Same problem, now for beam tilt/shift refinement (I intentionally did not perform Cs refinement). I cannot run a refinement of two particle sets whose CTF estimates are not the same. Surely, instead of erroring out, the job could 1) reset everything to 0,0 when non-constant values are detected, 2) refine the particles independently from the two starting points, or 3) check for this issue at the onset of the job instead of at iteration 10 (when aberration correction begins). Steps to reproduce:

  1. refine particles with higher-order aberration correction
  2. pick other particles from the same micrographs, but do not run aberration correction
  3. combine these two particle sets in NU-refine and try to refine the higher-order aberrations; the error is below

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 482, in cryosparc_master.cryosparc_compute.jobs.refine.newrun.run_homo_refine
  File "/cryosparc/cryosparc_worker/cryosparc_compute/exposure_groups.py", line 52, in check_particles_groups
    retval = check_field_in_group(group, egid, fullname)
  File "/cryosparc/cryosparc_worker/cryosparc_compute/exposure_groups.py", line 39, in check_field_in_group
    if do_assert: assert False, "Field %s is not constant in group %d with %d items, should be %s" % (fullname, egid, len(group), str(firstval))
AssertionError: Field ctf/shift_A is not constant in group 2 with 1143995 items, should be [0. 0.]
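
For anyone else hitting this, a quick way to see which fields will trip that assertion before launching the refinement would be something like the sketch below (rough and untested; the file name is a placeholder, and the field list beyond the two named in the errors is my guess at the relevant higher-order CTF fields):

import numpy as np

data = np.load("combined_particles.cs")  # placeholder: exported particle .cs file

# CTF fields that the exposure-group check appears to expect to be constant per group.
fields = ["ctf/cs_mm", "ctf/shift_A", "ctf/tilt_A", "ctf/trefoil_A", "ctf/tetra_A"]

for egid in np.unique(data["ctf/exp_group_id"]):
    group = data[data["ctf/exp_group_id"] == egid]
    for name in fields:
        if name not in data.dtype.names:
            continue
        vals = np.unique(group[name], axis=0)
        if len(vals) > 1:
            print(f"group {egid}: {name} has {len(vals)} distinct values")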

Hi @CryoEM2,

I apologize for the delayed response, and I’m sorry you’re facing this issue. In case it is blocking your processing (and for posterity), I believe the simplest current workaround would be the following, inserted between steps 2 and 3 above:

Step 2A. Run an Exposure Group Utilities job with both particle stacks as input and with the following settings:

  • the mode set to combine&set,
  • the Combine Strategy parameter set to take mode,
  • the Input Selection set to particle.

This step will combine the particles with different aberrations (but in the same exposure group) and set each aberration to its most common value across the whole dataset. Then, downstream refinements should be able to start without encountering the posted error.
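
In case it helps to see what that strategy does, take mode behaves roughly like the sketch below, shown here for the Cs field only and operating on an exported .cs file with a placeholder name (the actual job handles all of the exposure-group CTF fields and keeps the outputs connected in the project):

import numpy as np

data = np.load("combined_particles.cs")  # placeholder: exported particle .cs file

# Assign the most common ("mode") Cs value in the dataset to every particle.
vals, counts = np.unique(data["ctf/cs_mm"], return_counts=True)
data["ctf/cs_mm"] = vals[np.argmax(counts)]

with open("particles_take_mode.cs", "wb") as f:
    np.save(f, data)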

We have made a note to do this check/combination automatically at the start of refinements.

Best,
Michael

Thanks! I will give that a try; it seems better than a low-level swap of the CTF inputs.