I want to run particle subtraction twice on the same set of particles with two different masks, then combine those particles in ab initio for further refinement. I’m looking for heterogeneity within a symmetric unit. When I combine the particles in ab initio, the particle count remains the same as the original refinement, instead of 2x, so it’s not including both sets of particles. Is it possible to run homogeneous refinement, signal subtract twice (or more) and combine the subtracted particles for further refinement?
Just to confirm: you want to run subtraction on the stack twice, so that at the end you have a particle stack twice as large with different regions subtracted, and then refine this stack? Particle subtraction doesn't re-write the UIDs of the particles, so if you try to combine the two datasets downstream, you would just find that the blob result is overwritten by one of the subtraction jobs, which agrees with what you're seeing.
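To make the overwrite behaviour concrete, here is a minimal sketch (not cryoSPARC's actual data model, just an illustration) of why combining two outputs that share the same UIDs keeps only one result per particle instead of doubling the stack:

```python
def combine(*datasets):
    """Combine particle datasets keyed by UID (last write wins)."""
    merged = {}
    for ds in datasets:
        merged.update(ds)  # identical UIDs overwrite; they don't append
    return merged

# Two subtraction jobs run on the SAME three particles (same UIDs),
# each storing a different subtracted-image ("blob") result.
sub_mask_a = {101: "blob_minus_A", 102: "blob_minus_A", 103: "blob_minus_A"}
sub_mask_b = {101: "blob_minus_B", 102: "blob_minus_B", 103: "blob_minus_B"}

combined = combine(sub_mask_a, sub_mask_b)
print(len(combined))  # 3, not 6 — the particle count stays the same
print(combined[101])  # only one job's blob result survives
```

This is exactly the symptom above: the combined stack has the original particle count, and each UID carries whichever subtraction result was written last.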
This workflow isn’t easily supported in cryoSPARC because you would now have two copies of each particle with the same noise present, and refinement needs to know about that to ensure that duplicate particles end up in the same half-set. Would it satisfy your needs to instead split the particle stack into two batches, run signal subtraction on one batch with the first mask, run a separate signal subtraction on the second batch with the second mask, and then combine these for downstream processing?
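The suggested split-then-subtract workflow can be sketched as follows (a conceptual illustration only; in practice you would use cryoSPARC's own particle-set splitting tools):

```python
import random

def split_stack(uids, seed=0):
    """Randomly split a particle stack into two near-equal batches."""
    rng = random.Random(seed)
    shuffled = list(uids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

uids = list(range(1000))
batch_a, batch_b = split_stack(uids)

# Subtract with mask A on batch_a and mask B on batch_b (in cryoSPARC),
# then combine. Because the UID sets are disjoint, the union really is
# the full stack, and each physical particle appears exactly once.
assert set(batch_a).isdisjoint(batch_b)
assert len(batch_a) + len(batch_b) == len(uids)
```

The key property is disjointness: no UID appears in both batches, so combining them downstream neither overwrites results nor duplicates noise across half-sets.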
Your description of my problem is accurate. Splitting the particles is not ideal, because I would like to keep all the particles both to reach a better resolution and to look at heterogeneity.
I may have to use pyem with subparticles.py unless there is an easier way to re-extract with re-centering on a substructure.
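For reference, the arithmetic behind re-centering on a substructure (what tools like pyem's subparticles.py do) is roughly the following. This is a hedged sketch: sign conventions, rotation direction, and units (pixels vs Angstroms) differ between packages, so treat it as illustrative rather than drop-in code.

```python
import numpy as np

def recenter(pose_matrix, origin_shift_px, subvolume_offset_px):
    """Project a sub-volume offset (given in the map frame, in pixels)
    into the image plane and fold it into the particle's 2D origin shift.

    pose_matrix:        3x3 rotation taking map coordinates to the
                        particle's viewing frame (convention-dependent).
    origin_shift_px:    existing 2D (x, y) origin shift of the particle.
    subvolume_offset_px: 3D vector from the map center to the substructure.
    """
    # Rotate the 3D offset into the particle's viewing frame.
    rotated = pose_matrix @ subvolume_offset_px
    # Only the in-plane (x, y) component moves the extraction center;
    # the z component is lost in projection.
    return origin_shift_px - rotated[:2]

# Identity pose: the offset maps straight into the image plane.
new_shift = recenter(np.eye(3),
                     np.array([0.0, 0.0]),
                     np.array([10.0, 0.0, 5.0]))
# new_shift is [-10., 0.]: the extraction window moves to the substructure.
```

After computing the new shifts for every particle, you would re-extract with a smaller box centered on the substructure, which is what enables focused classification on one symmetric unit.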
Currently there isn’t a way to arbitrarily re-center / re-extract on any sub-domain, but it has been requested a few times and so we’re aiming to add a utility that can do this in a future release!
Cool. Thanks for the info.