We are reconstructing the single-particle cryoEM structure of a coronavirus spike protein in detergent. The homogeneously refined dataset has a resolution of ~3.2 Å (GSFSC). When we perform heterogeneous refinement using three or four ab initio models, we get two classes that look similar in conformation. However, if we instead use previously published volumes of three distinct spike conformations as the initial models for heterogeneous refinement, we end up with reconstructed volumes that closely resemble those three input models, each with its distinct conformation. Are we introducing model bias by using these previously published volumes (albeit low-pass filtered by default to 20 Å), or is our heterogeneous refinement with the truly ab initio models missing these conformations? Any advice would be appreciated. Thanks.
Both. You can test internally by taking the particles from your new "biased" conformation and checking whether 2D classification and ab initio reconstruction of those particles alone reproduce the new conformation. Alternatively, use a single ab initio class of all particles, run a refinement that hopefully aligns all particles well to the core, then use 3D classification with many classes (20-100) and let it sort out the populations and conformations for you. It obviously does not work perfectly, and all things cryoEM are done iteratively: you take the results from one job and use that knowledge and those outputs to make decisions for the next jobs. But it should help establish the landscape of all the conformations and assemblies present, with no reference bias (no reference at all).
Thank you. This is extremely helpful.