I processed data from an 80 Å × 110 Å membrane protein in detergent and identified a good subset of particles at a 2.14× box size, using two rounds of 2D classification followed by Ab-Initio, Heterogeneous Refinement, and NU-Refine. The final NU-Refine job gave a 3.97 Å map. I then realized that this processing was done on 1/2 F-crop Patch Motion Correction micrographs. I have not yet tried Local Refinement, but I am considering it.
I am hoping to push the resolution further by:
1) using a bigger box size (2.98×);
2) reprocessing the data at full resolution, without F-crop;
3) running Local Refinement — on the 1/2 F-cropped data, or is it better on the uncropped data?
4) Is there a preferred workflow to achieve this efficiently, or do I have to rerun Patch Motion Correction from the start?
5) If I do rerun Patch Motion Correction without F-crop: how do I connect the Extract from Micrographs job to my preferred particles? Can I skip Exposure Curation, since the particles are already known?

I'd appreciate your help with points 1) to 5), and also advice on what you would do if you were in my place.
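To make the numbers concrete: the post doesn't state the raw pixel size, so the sketch below assumes a hypothetical 0.83 Å/pix detector just to illustrate how 1/2 F-crop sets the achievable Nyquist limit and how the two box-size choices scale (all values here are illustrative, not from the original post):

```python
# Hypothetical numbers: the post doesn't give the raw pixel size,
# so assume 0.83 A/pix purely for illustration.
raw_pix = 0.83              # A/pixel before any Fourier cropping
crop_pix = 2 * raw_pix      # 1/2 F-crop doubles the effective pixel size
nyquist_cropped = 2 * crop_pix   # Nyquist limit on the cropped data
nyquist_full = 2 * raw_pix       # Nyquist limit after reprocessing uncropped

# Box sizes (in pixels, at full pixel size) for the 110 A long dimension:
long_dim = 110.0
box_small = long_dim * 2.14 / raw_pix
box_big = long_dim * 2.98 / raw_pix

print(f"Nyquist (1/2 F-crop): {nyquist_cropped:.2f} A")
print(f"Nyquist (no crop):    {nyquist_full:.2f} A")
print(f"2.14x box: {box_small:.0f} px; 2.98x box: {box_big:.0f} px")
```

With these assumed numbers the cropped Nyquist is 3.32 Å, which is why a 3.97 Å map is described below as "not quite at Nyquist, but getting close."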
So you’re not quite at Nyquist, but getting fairly close. It might be worth re-running Patch Motion without cropping (and redoing Patch CTF), then re-extracting — either at the original pixel size, or binned to 1 Å/pixel for speed.
Whether and how to do local refinement will depend on the structural details - e.g. does your protein have multiple domains?
According to the most conservative sampling standard I’ve heard espoused (2/3 Nyquist), you would unbin once the resolution reaches 3× the (binned) pixel size. Under that standard, it would be reasonable to unbin here, given the resolution is below 4.5 Å. IMO a better approach is to look at your FSC curve: if you want a good estimate of the resolution, you need to choose a pixel size that leaves the FSC curve room to decay to zero.
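The 2/3-Nyquist rule of thumb above reduces to a one-line check. A minimal sketch (the 1.5 Å/pix value is just an example of a binned pixel size, not from the thread):

```python
def should_unbin(resolution_A, binned_pix_A):
    """2/3-Nyquist rule of thumb: unbin once the reported resolution
    approaches 3x the binned pixel size (i.e. 2/3 of the Nyquist frequency)."""
    return resolution_A < 3 * binned_pix_A

# Example: a stack binned to 1.5 A/pix has a 4.5 A threshold,
# so a 3.97 A map has crossed it and it's time to unbin.
print(should_unbin(3.97, 1.5))
```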
For the pixel size you have, I would not downsample during motion correction, but I would use a nice non-integer factor during particle extraction. For example, with a pixel size of 0.83 Å, I typically use 4× downsampling for 2D classification, junk rejection, and a particle-centering refinement, then transition to 1.5× downsampling (e.g. 432 → 288 px). The resulting Nyquist of ~2.5 Å lets you do all the 3D classification you need, and the smaller box makes memory-intensive operations like 3DVA or 3DFlex easier.
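The arithmetic behind those two extraction stages can be sketched as follows (using the 0.83 Å/pix, 432 px box numbers from the paragraph above):

```python
def extraction_params(raw_pix, raw_box, factor):
    """Effective pixel size, box size, and Nyquist after Fourier-crop
    downsampling at particle extraction."""
    pix = raw_pix * factor
    box = round(raw_box / factor)
    return pix, box, 2 * pix  # (A/pix, box in px, Nyquist in A)

raw_pix, raw_box = 0.83, 432
for f in (4.0, 1.5):
    pix, box, nyq = extraction_params(raw_pix, raw_box, f)
    print(f"{f}x downsample: {pix:.3f} A/pix, {box} px box, Nyquist {nyq:.2f} A")
```

At 1.5× this gives 1.245 Å/pix and a 288 px box, i.e. the ~2.5 Å Nyquist mentioned above.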
BTW, you can skip directly to Reference-Based Motion Correction if you like. However, you lose the ability to say what was responsible for any resolution improvement. The final resolution will be the same, but you cannot claim "X Å was gained by polishing," because you won't know what the unbinned, unpolished resolution would have been.