Dear community,
Since there are quite a few tools in cryoSPARC that aim at tackling/resolving heterogeneity and flexibility of macromolecular complexes, I would be grateful if someone could briefly summarise the major differences between them, when each is typically used, and how to make the most of them. I am referring to the following jobs:
3D Variability describes the motions of the protein, with the most prominent motion captured as the first principal component. You can use it either to make simple movies so you can SEE the motion(s) (a volume series in Chimera), or to cluster particles into classes that sit at the endpoints of a motion. There are two job types: the first (3D Variability) analyzes the data; the second (3D Variability Display) is much quicker and lets you interpret that analysis in several ways.
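To make the "see the motion" part concrete, here is a rough conceptual sketch (plain NumPy, not cryoSPARC's actual implementation) of what the movie mode amounts to: each frame is the consensus map plus a scaled variability component, written out as a volume series. All array contents and sizes below are made-up placeholders.

```python
import numpy as np

# Stand-ins for the consensus map and the first variability component that the
# 3D Variability job solves for; in reality these come out of the job as .mrc files.
box = 64
consensus = np.zeros((box, box, box))
component_0 = np.random.rand(box, box, box)

# The "simple" movie mode effectively steps the component amplitude across the
# range observed in the particles and writes one volume per step.
amplitudes = np.linspace(-1.5, 1.5, 20)
frames = [consensus + a * component_0 for a in amplitudes]

# Each frame would be saved as an .mrc file and played as a volume series in Chimera.
print(len(frames), frames[0].shape)
```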
3D Classification requires a clean dataset and a high-resolution refinement with proper angular assignments (i.e., the particles already give a high-resolution reconstruction). Then, without changing any particle alignments, it simply classifies them based on likeness. I love this job; it is very effective if you have lots of particles. Aim for ~30k particles per class, and choose an 8 Å filter resolution so it focuses on large motions. You will have to refine the particles in the classes of interest afterwards. "Output results after each F-EM iteration" is very useful, since the job can take days to converge. It can be the fastest way to high resolution with the best completeness of the whole particle, but it also trashes tons of great data that simply doesn't happen to get classified into the right class. It is also a great way to analyze trends: with 3 million particles and 100 classes, tile the class volumes in Chimera and see all the kinds of things going on.
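As a tiny illustration of the ~30k-per-class rule of thumb (the numbers come from the examples above; the helper function itself is just for illustration):

```python
def suggested_class_count(n_particles: int, particles_per_class: int = 30_000) -> int:
    """Pick a class count so each class gets roughly `particles_per_class` particles."""
    return max(2, round(n_particles / particles_per_class))

print(suggested_class_count(3_000_000))  # 100 classes, as in the 3-million-particle example
print(suggested_class_count(250_000))    # ~8 classes for a smaller dataset
```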
Heterogeneous Refinement requires volume inputs (references). You can use this to deliberately exploit reference bias and sort out the big classes you KNOW are in your data: ±subunits, known conformational variety, "good" vs. "bad" particles. It has a strong bias, though, and tends to fill each class with many particles, so the references largely dictate the outcome.
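If you script your pipelines, a set-up like this can be queued with the cryosparc-tools Python client; a minimal sketch is below. Treat the job-type string ("hetero_refine"), the input names, the connect argument order, and the job UIDs as assumptions to be checked against your cryoSPARC version's job builder, not as a verified recipe.

```python
from cryosparc.tools import CryoSPARC

# Connection details are placeholders.
cs = CryoSPARC(license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
               host="localhost", base_port=39000,
               email="user@example.com", password="password")
project = cs.find_project("P1")

job = project.create_job("W1", "hetero_refine")   # assumed job-type name
job.connect("particles", "J10", "particles")      # your cleaned particle stack
job.connect("volume", "J11", "volume")            # reference 1: full complex
job.connect("volume", "J12", "volume")            # reference 2: complex minus a subunit
job.queue(lane="default")                         # lane name is an assumption
```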
Local Refinement is hard to get to work. The mask's identity is critical. It only works on high-resolution data (4 Å or better) that is already very well aligned and very close to the correct angles; at that point it can finalize and perfect the alignment of the region inside the mask.
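Since so much of local refinement comes down to the mask, here is an illustrative NumPy sketch of the kind of mask you want numerically: a region around the domain of interest with a soft falloff rather than a hard edge. In practice you would build this with cryoSPARC's Volume Tools (or an equivalent masking tool) rather than by hand; every number below is a placeholder.

```python
import numpy as np

box = 128                                   # voxel box size (placeholder)
center = np.array([64.0, 64.0, 80.0])       # centered on the region to refine (placeholder)
radius, soft = 30.0, 8.0                    # hard radius and soft-edge width, in voxels

zyx = np.indices((box, box, box)).astype(float)
dist = np.sqrt(((zyx - center[:, None, None, None]) ** 2).sum(axis=0))

# 1 inside the radius, linear falloff to 0 over the soft edge, 0 outside.
# A hard (binary) edge here is a common reason local refinement misbehaves.
mask = np.clip((radius + soft - dist) / soft, 0.0, 1.0)
print(mask.min(), mask.max())
```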
CTF refinement during NU-Refinement is great, but only use this option when the particle set is high quality and contains at least ~100k particles; bad data will degrade the CTF parameters. It is best to do this on the first high-resolution refinement of your best particles, before all the classification.
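For scripted workflows, the same cryosparc-tools client can toggle those CTF-refinement options on a Non-Uniform Refinement; a hedged sketch follows. The job-type string and parameter names are assumptions that vary between cryoSPARC versions, so verify them in the job builder before relying on them.

```python
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
               host="localhost", base_port=39000,
               email="user@example.com", password="password")
project = cs.find_project("P1")

job = project.create_job("W1", "nonuniform_refine_new")  # assumed job-type name
job.connect("particles", "J20", "particles")             # >=100k clean, high-quality particles
job.connect("volume", "J20", "volume")

job.set_param("refine_defocus_refine", True)     # assumed name: per-particle defocus refinement
job.set_param("refine_ctf_global_refine", True)  # assumed name: global CTF (beam tilt, trefoil, ...)
job.queue(lane="default")                        # lane name is an assumption
```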
–these were one guy’s opinions–
Read and actively search this discussion board daily; there are TONS of tips and tricks, from beginner to expert level.