I’ve noticed that when refinement of per-particle scale is switched on, the minima in the defocus error landscape during local CTF refinement are a lot shallower (and presumably less accurate?). Is this expected? And if so, is it recommended to first refine per-particle defocus, and only then refine per-particle scale?
Thanks for bringing this to our attention – we’ve triaged it as something to investigate, and may reply with a request for more details. For now, is this something you’ve noticed as a one-off, or has it consistently been the case that the defocus error minima are much shallower after scale refinement is done?
There is definitely some interplay between per-particle scale and defocus. For example, in another case, here are the per-particle scale factors before per-particle defocus refinement:
This makes me think it’s at least plausible that scale refinement can absorb some of the signal change caused by local defocus differences, which could interfere with identifying the correct per-particle defocus values.
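To make the intuition concrete, here is a toy 1D sketch (my own illustration with made-up microscope parameters, not cryoSPARC’s actual implementation). For each candidate defocus we compare the error against a fixed scale of 1 versus the error after fitting a least-squares scale factor for that candidate. Because the fitted scale can partially match any candidate CTF to the data, the resulting error landscape over defocus is shallower everywhere except at the true value:

```python
import numpy as np

def ctf_1d(defocus_um, freqs, wavelength=0.025, cs_mm=2.7, amp_contrast=0.07):
    # Simplified 1D CTF (toy model): phase aberration gamma, then the usual
    # -sqrt(1-A^2)*sin(gamma) - A*cos(gamma) form. Units: Angstroms internally.
    df = defocus_um * 1e4
    cs = cs_mm * 1e7
    gamma = np.pi * wavelength * df * freqs**2 - 0.5 * np.pi * cs * wavelength**3 * freqs**4
    return -np.sqrt(1.0 - amp_contrast**2) * np.sin(gamma) - amp_contrast * np.cos(gamma)

freqs = np.linspace(0.01, 0.35, 400)   # spatial frequencies, 1/Angstrom
true_defocus = 1.5                     # micrometers
signal = ctf_1d(true_defocus, freqs)   # noise-free "observed" particle signal

defocus_grid = np.linspace(1.3, 1.7, 81)
err_fixed, err_scaled = [], []
for d in defocus_grid:
    model = ctf_1d(d, freqs)
    # Error with scale fixed at 1:
    err_fixed.append(np.sum((signal - model) ** 2))
    # Error after fitting the closed-form least-squares scale for this candidate:
    s = np.dot(signal, model) / np.dot(model, model)
    err_scaled.append(np.sum((signal - s * model) ** 2))

# "Depth" of each landscape: spread between worst and best candidate defocus.
depth_fixed = max(err_fixed) - min(err_fixed)
depth_scaled = max(err_scaled) - min(err_scaled)
print(f"fixed-scale depth: {depth_fixed:.1f}, fitted-scale depth: {depth_scaled:.1f}")
```

Both landscapes still have their minimum at the true defocus, but the fitted-scale one is shallower, which is consistent with the shallower minima you’re seeing; with real noisy data that reduced contrast could plausibly make the defocus estimate less stable.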
Hello! Wanted to bump this to ask whether there’s a recommended best practice to follow, given the differences noted in local CTF estimation depending on whether per-particle scale is refined. It’s been a couple of years since this discussion, and I couldn’t find any further information on it elsewhere. Thank you!