Parameters for high resolution

I’m working with a 130 × 165 Å bacterial protein dataset in cryoSPARC, and so far my refinements stall at around 2.9 Å. I tried box sizes of 300 and 324 pixels, but neither went below 2.9 Å. My target is about 2.7 Å. I’m new to this (an undergraduate doing a university project), so any tips are welcome. Specifically: Is it true that a smaller box can sometimes help resolution, or should I try going larger? Should I keep more particles or pick only the best ones (e.g., the tightest class averages)?

Thank you in advance for any suggestions! Let me know if there is anything else I should provide.

Hello seah0210,

Unfortunately, proteins don’t care about our target resolutions… many aspects of your data (from the protein’s intrinsic nature to the quality of freezing, the data collection, and the data processing…) will impose that limit.

On box size, I’ve seen both situations: a few where a tighter box improved resolution, and others where the opposite was true. That is something to optimize for your particular case, which can be done with a small subset of particles.

About particle selection: depending on the final particle numbers, yes, you can select only the best-looking classes. Just keep in mind that this usually means you are strongly biasing the final particle set, so always check the orientation plots after refinement.

If you haven’t already, take a look at this thread: https://discuss.cryosparc.com/t/cryosparc-workflow-discussion-comparing-different-processing-orders/16320/3

Luck


Hi there,

Box Size: It’s hard to say for certain which box size would work best for your situation. There are a few ways to calculate the box size you “should use”, but in practice the ideal box size can often only be determined empirically. I have seen cases where a box three times larger than it “should be” still gave a good reconstruction, and others where I used one box size during the first half of processing and then stepped it down to a smaller size for refinements.
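
As a rough illustration of one such rule of thumb (longest particle dimension, padded roughly 1.5–2× to accommodate CTF delocalization, rounded up to an even, FFT-friendly number), here is a small Python sketch. The 1.0 Å/px pixel size is only a placeholder assumption, since the original post doesn’t give one; plug in your real value.

```python
# Rough box-size helper: pads the longest particle dimension and rounds up
# to an even, FFT-friendly size (prime factors 2, 3, 5, 7 only).
# The pixel size used below is a placeholder assumption, not from the post.

def is_fft_friendly(n):
    """True if n is even and has no prime factor larger than 7."""
    if n % 2:
        return False
    for p in (2, 3, 5, 7):
        while n % p == 0:
            n //= p
    return n == 1

def suggest_box(longest_dim_A, pixel_size_A, pad=1.8):
    """Smallest FFT-friendly even box >= padded particle diameter, in pixels."""
    target = int(round(longest_dim_A / pixel_size_A * pad))
    n = target if target % 2 == 0 else target + 1
    while not is_fft_friendly(n):
        n += 2
    return n

# Example with the 165 A longest axis from the post and an assumed 1.0 A/px.
for pad in (1.5, 1.8, 2.0):
    print(pad, suggest_box(165, 1.0, pad))  # -> 250, 300, 336
```

With that placeholder pixel size the suggestions land right around the 300 and 324 px boxes already tried, which is exactly why sweeping a range empirically, as described below, is a sensible next step.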

If I were in your position, I would set up a series of jobs with extraction box sizes ranging from, say, 256 to 384 pixels and run refinements on the resulting particles. You can delete what doesn’t work and use float16 output to save space.
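
If you prefer to script that sweep rather than clone jobs in the UI, here is a minimal sketch using cryosparc-tools. The job type string, the box-size parameter name, the connection names, and all UIDs/credentials below are placeholders or assumptions on my part; check the exact strings in the job builder or the cryosparc-tools docs before running anything.

```python
# Minimal cryosparc-tools sketch: queue particle extraction at several box sizes.
# "extract_micrographs_multi" and "box_size_pix" are assumed names -- verify them
# in the job builder. All UIDs and credentials are placeholders.
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(
    license="xxxxxxxx",        # placeholder license ID
    host="localhost",
    base_port=39000,
    email="you@example.com",
    password="password",
)
project = cs.find_project("P1")  # placeholder project UID

for box in (256, 288, 320, 336, 360, 384):
    job = project.create_job(
        "W1",                             # placeholder workspace UID
        "extract_micrographs_multi",      # assumed extraction job type
        params={"box_size_pix": box},     # assumed parameter name
        connections={
            # placeholder upstream jobs/outputs for micrographs and picks
            "micrographs": ("J10", "micrographs"),
            "particles": ("J12", "particles"),
        },
    )
    job.queue("default")                  # assumed lane name
```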

Particles: A clean particle stack is always desirable, but you can hit a wall with what you are able to “clean”. Using heterogeneous refinement with decoys, or iterative rounds of ab-initio while increasing the starting and final resolution, can help clean up particle stacks, but as @carlos says, aspects of your data (that may be out of your control) will impose a limit on what you can achieve with processing.
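
For what it’s worth, the iterative ab-initio idea can also be scripted. Below is a very rough cryosparc-tools sketch, with the same caveats as above: the job type and the parameter/output names (homo_abinit, abinit_K, abinit_init_res, abinit_max_res, particles_class_0) are assumptions to verify in the job builder, and all UIDs are placeholders.

```python
# Very rough sketch of iterative ab-initio cleanup with cryosparc-tools.
# Job type, parameter names, and output names are assumptions -- confirm them
# in the job builder before running. UIDs and credentials are placeholders.
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(license="xxxxxxxx", host="localhost", base_port=39000,
               email="you@example.com", password="password")
project = cs.find_project("P1")

particle_source = ("J20", "particles")   # placeholder: current particle stack

# Each round starts and ends at progressively higher resolution (lower A values).
for init_res, max_res in ((35, 12), (20, 9), (12, 7)):
    job = project.create_job(
        "W1",                             # placeholder workspace UID
        "homo_abinit",                    # assumed ab-initio job type
        params={
            "abinit_K": 2,                # assumed: 2 classes, keep the better one
            "abinit_init_res": init_res,  # assumed parameter name
            "abinit_max_res": max_res,    # assumed parameter name
        },
        connections={"particles": particle_source},
    )
    job.queue("default")                  # assumed lane name
    job.wait_for_done()
    # Feed the particles from the best class into the next round
    # (assumed output name; pick the class by inspecting the maps).
    particle_source = (job.uid, "particles_class_0")
```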
