Particle views can be assigned to different conformations in heterogeneous refinement

Dear Cryosparc community,

I have an interesting problem arising from heterogeneous refinement.
It’s a membrane protein, with extracellular bits.
I have a dataset containing particles belonging to two conformations. The two conformations’ top views are very different, but the side views are very similar to each other; only some side-view directions show a clear difference.
I suffer from some orientation bias: side-view particles are limited in number, though still enough to give a (somewhat anisotropic) map.

Depending on which initial model I provide, many of the side-view particles get assigned to different conformations in heterogeneous refinement.
If I input two copies of the conformation A map as initial models, it separates conformations A and B, and conformation A gets most of the side views. Non-uniform refinement reaches 3 Å.
If I input two copies of conformation B as initial models, it separates conformations A and B, and conformation B gets most of the side views. Non-uniform refinement reaches 3.2 Å.
If I input one copy of A and one of B as initial models, it separates A and B, and conformation A gets most of the side views. Non-uniform refinement reaches 3 Å.

The maps are checked and verified to have the expected features in the correct locations.

When I checked the two sets for duplicate particles, 2D classification shows that the duplicated particles are all side views.

The low-pass filter is turned on at 20 Å.

I ran a 2D classification on set A; it’s clear that some side views belong to conformation B but were misassigned to set A.

So, my question is: what is the algorithm behind heterogeneous refinement, and why would it be so heavily influenced by the initial model?

Unfortunately, due to NDA reasons, I can’t share details of the protein or the images, but I’d be happy to describe as much as I can.

Hi @jybjybjjyb! It is tricky to troubleshoot without being able to see images, but let’s see if we can improve your results regardless.

Have you tried combining all of the particles (both A and B) into a single consensus refinement (Non-Uniform Refinement would probably be best), then performing 3D Classification? If you can distinguish views by eye, then this method ought to do a good job separating the A and B particles from each other.

Thanks @rposert.
I did indeed try that, and that workflow gives the best reconstruction results after all, yielding 3 Å maps for both.

However, it was still interesting that, with different ab initio models used as input to the consensus refinement, some particles are still fluid between classes in different runs. These can be seen on the orientation distribution heatmap.

These particles come from viewing directions where, by eye, it’s impossible to tell the difference between the two conformations. In theory they are definitely not the same, but I guess at ~3 Å alignment they can be assigned to either? (I did run a 3D classification with 3 Å as the target resolution and saw similar behaviour.)

I’m really curious to know: algorithmically speaking, would it be possible to distinguish very similar-looking particles and separate them into classes in 3D?

Interesting! I want to be sure I understand what the issue is:

  • you have a set of particles, some of which belong to class A and some of which belong to class B
  • you create a few heterogeneous refinements
    1. a hetero refine with A and A gives volumes which look like A and B. Most of the side views of the particle go to A
    2. a hetero refine with B and B gives volumes which look like A and B. Most of the side views go to B
    3. a hetero refine with A and B gives volumes which look like A and B. A has the most side views again
  • All of the maps have the expected features for A and B (i.e., they are clearly different and correct)

If that’s all correct, I have a few more questions:

When I checked the two sets for duplicate particles, 2D classification shows that the duplicated particles are all side views.

Which particle sets are you checking for duplicates?

I ran a 2D classification on set A; it’s clear that some side views belong to conformation B but were misassigned to set A.

These particles come from viewing directions where, by eye, it’s impossible to tell the difference between the two conformations.

How can you tell that some of the 2D classifications from A belong to B?

And finally, to answer your question:

I’m really curious to know: algorithmically speaking, would it be possible to distinguish very similar-looking particles and separate them into classes in 3D?

If particles look very similar, they may be assigned with similar probability to the A and B classes. If you scroll up a bit in your log (you may have to select an earlier checkpoint) and find a line like this one:

 Effective number of classes per image: min 1.00 | 25-pct 1.00 | median 1.00 | 75-pct 1.10 | max 3.89 

what does it say in your case? What we’re really interested in is the median, but all of these numbers are useful. They tell us how many classes each particle is effectively assigned to. If your particles are mostly ending up blurred equally between the A and B classes, we’d expect the median to be around 2.00.
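For intuition about what that log line measures: the CryoSPARC docs don’t spell out the exact formula, but a common definition of an “effective number of classes” is the exponential of the Shannon entropy of each particle’s class posterior. A minimal sketch (the function name and array layout here are my own, purely illustrative):

```python
import numpy as np

def effective_num_classes(posteriors):
    """Effective number of classes per particle, defined here as the
    exponential of the Shannon entropy of the particle's class posterior.
    posteriors: (n_particles, n_classes) array, rows summing to 1."""
    p = np.clip(posteriors, 1e-12, 1.0)        # avoid log(0)
    entropy = -np.sum(p * np.log(p), axis=1)   # per-particle entropy, in nats
    return np.exp(entropy)

# A particle assigned entirely to one class has an effective count of 1;
# one split evenly between two classes has an effective count of 2.
certain = np.array([[1.0, 0.0]])
split = np.array([[0.5, 0.5]])
```

Under this definition, a median near 1.00 means most particles are confidently in a single class, while a median near 2.00 means most particles are smeared equally across two classes.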

If this is what’s happening, you might consider using a class probability filter job or a heterogeneous refinement with Force hard classification turned on. The class probability filter lets you keep only particles which are more certain to belong to a class, while the Force hard classification only allows particles to be assigned to one class.

However, if it is truly not possible to tell whether a side view belongs to A or B, both of these options will likely make your reconstructions worse, since you will be removing the partial contribution of side views that do not end up in the respective class.
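To make the two options concrete, here is a toy sketch of what they do to a table of per-particle class posteriors (the function names and the threshold are hypothetical, not CryoSPARC’s actual implementation):

```python
import numpy as np

def filter_by_class_probability(posteriors, class_idx, threshold=0.9):
    """Keep only particles whose posterior for `class_idx` exceeds
    `threshold` -- the idea behind a class probability filter.
    Returns a boolean mask over particles."""
    return posteriors[:, class_idx] > threshold

def hard_assign(posteriors):
    """Force hard classification: each particle goes entirely to its
    most probable class (argmax over the posterior row)."""
    return np.argmax(posteriors, axis=1)

# Example: three particles, two classes (A = 0, B = 1).
post = np.array([
    [0.95, 0.05],   # confidently A
    [0.55, 0.45],   # ambiguous -- dropped by the filter
    [0.10, 0.90],   # confidently B
])
```

The ambiguous particle in the middle row is exactly the kind of side view in question: the filter discards it, while hard classification pushes it wholesale into class A — which is why both options can hurt the reconstructions when the ambiguity is real.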

  • you have a set of particles, some of which belong to class A and some of which belong to class B
  • you create a few heterogeneous refinements
  1. a hetero refine with A and A gives volumes which look like A and B. Most of the side views of the particle go to A
  2. a hetero refine with B and B gives volumes which look like A and B. Most of the side views go to B
  3. a hetero refine with A and B gives volumes which look like A and B. A has the most side views again
  • All of the maps have the expected features for A and B (i.e., they are clearly different and correct)

All is correct except for the last point. Because the dataset had orientation bias, I had a limited number of side views. So if they go into one set, e.g. A, then set A gets a nice map and set B’s map becomes very anisotropic, and vice versa.

In some side-view directions the views do differ: set B has an extra density compared to set A. But it’s a relatively small region compared to the whole map.

When I independently created set A1 and set B1 using A or B as the initial model:

This shows the median is close to 1. But that’s expected: aside from a few particular side-view viewing directions, the other views are quite different from each other.