I am wondering about the best steps one would go through for processing an icosahedral virus dataset, as I am not so familiar with highly symmetric complexes.
Right now, I have followed my usual workflow and managed to reach 2.3 Å resolution from 83k particles by applying icosahedral symmetry, with Nyquist at 1.7 Å. The map looks good; the resolution might be slightly overestimated, but it definitely looks like a ~2.5 Å map. I would say I am happy with that, but there is probably a way to push the analysis further, right?
From there I am a bit lost. What I tried next is symmetry expansion, which yielded 5.2M particles.
First I created a mask covering a single subunit and launched a focused refinement. The reported resolution improved to 1.8 Å, but the map actually looked like a ~4 Å map, so much worse.
So now I am running a 3D classification to see if I can pull out the best particles, on which I would then rerun the focused refinement. There should actually be almost no variability in this sample, so I am not even sure this would help. Do you have any tips on the best parameters to use for 3D classification?
So the question is: should I be happy with my 2.3 Å and stop there, or would symmetry expansion help push the resolution a bit further?
I’m not sure exactly what problem you are trying to solve, but I just wanted to point out that occasionally, if your FSC curve seems inflated and/or doesn’t go to zero, that can be caused by duplicate particles. I wonder whether that is related to your symmetry expansion in any way.
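One quick sanity check for duplicates is to look for particle picks that sit closer together than some minimum distance. A minimal sketch with NumPy (the function name and the 10 px threshold are just placeholders, not from any particular package):

```python
import numpy as np

def find_duplicate_pairs(coords_px, min_dist_px):
    """Return index pairs of picks closer than min_dist_px.

    coords_px: (N, 2) array of x/y particle coordinates in pixels.
    Brute-force pairwise distances; fine per-micrograph, but use a
    KD-tree (e.g. scipy.spatial.cKDTree) for whole-dataset checks.
    """
    coords = np.asarray(coords_px, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Upper triangle only, so each pair is reported once (i < j).
    i, j = np.nonzero(np.triu(dist < min_dist_px, k=1))
    return list(zip(i.tolist(), j.tolist()))

# Toy example: the first two picks are ~4 px apart and get flagged.
coords = [(100.0, 100.0), (103.0, 102.6), (500.0, 400.0)]
pairs = find_duplicate_pairs(coords, min_dist_px=10.0)
print(pairs)  # [(0, 1)]
```

Note that after symmetry expansion every particle legitimately appears 60 times with identical coordinates, so a check like this only makes sense on the pre-expansion stack.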
Also, I’d imagine that symmetry expanding and refining a single vertex on the viral surface would be essentially identical to refining the whole virus with icosahedral symmetry applied. Symmetry expansion is, however, a good tool for classifying a unique viral vertex if your specimen has one; in that case you’d want to run the focused refinement on the unique vertex.
One current inefficiency that might be worth exploring is accounting for the size of the virion in the per-particle defocus estimation. Depending on the size of the virus, an asymmetric unit on one side could be hundreds of nanometres away, along the beam direction, from an asymmetric unit on the other side. I am not sure how to correct for this, though, let alone in cryoSPARC.
You don’t actually need the custom RELION version to do it, but doing it manually is a little more fussy.
I’ve tried the basic methodology in cryoSPARC briefly, but without much success. I feel I’m still not familiar enough with cryoSPARC to get the best results for something exotic like that. I want to experiment more when I have more time.
You won’t see much improvement unless your particles are >100 nm (and the defocus gradient is large). Ewald sphere correction is simpler to use and suits smaller objects, but on its own it currently doesn’t quite get the best results for really big particles (>200 nm), where a combination of the two works very well.
Also, prepare for processing to get insanely slow if you’re doing block-based reconstruction on 5.2M (symmetry-expanded) particles… I’ve seen RELION converge the angular sampling down to 0.02 degrees…
That does indeed sound like what I am doing. So I guess there’s no point doing this in my case, especially since my particles are only 30 nm in size.
However, I understand that if I had something bound to my virus (e.g. Fabs) with less than 100% occupancy, symmetry expansion and 3D classification would be the way to go to retrieve only the subunits with the interactor bound and improve the resolution of the interaction. So that is possibly something I would want to use in the future.
That’s correct. If your “symmetric” particle is asymmetric in reality, expansion and classification can tease out specific conformations, although what works best is often a matter of experimentation. Also, if a capsid has some flexibility that makes it only pseudo-symmetric, expansion can help compensate for that.
30 nm is big enough to see improvements from Ewald sphere correction at ~2 Å, so it might be worth trying.
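A common rule of thumb (from the depth-of-field argument) is that Ewald sphere curvature starts to matter once the particle diameter exceeds roughly 0.7·d²/λ at target resolution d. A quick check, assuming 300 kV data (λ ≈ 0.0197 Å):

```python
import math

def electron_wavelength_angstrom(kv):
    """Relativistic electron wavelength (Å) for an accelerating voltage in kV."""
    v = kv * 1e3
    return 12.2639 / math.sqrt(v * (1.0 + 0.97845e-6 * v))

def ewald_size_limit_angstrom(d_target, kv=300.0):
    """Particle diameter above which Ewald curvature matters at resolution
    d_target (Å), using the rule of thumb D ~ 0.7 * d^2 / lambda."""
    return 0.7 * d_target ** 2 / electron_wavelength_angstrom(kv)

lam = electron_wavelength_angstrom(300.0)
print(f"lambda at 300 kV: {lam:.4f} Å")
print(f"size limit at 2.0 Å: {ewald_size_limit_angstrom(2.0):.0f} Å")
```

This gives a limit of ~140 Å (~14 nm) at 2 Å and 300 kV, so a 30 nm particle is indeed well past the point where correction can pay off.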
Ok, just to finish the discussion.
I tried Ewald sphere correction and gained a little over 0.1 Å in resolution (with a corresponding improvement in map quality), so I can confirm that this approach worked in my case.