I work in a microscopy core and have multiple groups that want to quantify the number of collapsed and non-collapsed capsids (adeno-associated virus, AAV) in negatively stained preparations. They may eventually compare negative-stain TEM data to cryo-TEM to see how the collapse observed in negative-stain preps relates to the number of full vs. empty capsids.
I’ve gone through particle picking and 2D classification for a small dataset, and it is obvious there are collapsed and non-collapsed capsid classes (see attached png). Would it be valid to use the number of particles in each class as a quantitative measure of collapsed and non-collapsed capsids? For example, the 3rd class from the left looks non-collapsed and contains 3433 particles, whereas the 1st, 2nd, and 4th classes show some degree of collapse. All of these classes have an ECA score of 1.0, which I believe means the program is fairly confident of each particle belonging to only one class. Would it be accurate to say that for this population of particles, 15% (3433 particles / total particles in classes 1-4) are non-collapsed? Or are the 2D classes in CryoSPARC not amenable to such an interpretation?
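If the class counts are taken at face value, the percentage itself is simple arithmetic. A minimal sketch in Python; only the 3433 figure comes from the post above, and the other per-class counts are made-up placeholders (chosen so the total lands near the quoted ~15%):

```python
# Per-class particle counts from a hypothetical 2D classification job.
# Only the 3433 figure is from the post; the rest are placeholders.
class_counts = {
    "class_1_collapsed": 7500,
    "class_2_collapsed": 6900,
    "class_3_non_collapsed": 3433,
    "class_4_collapsed": 5000,
}

total = sum(class_counts.values())
non_collapsed = class_counts["class_3_non_collapsed"]
fraction = non_collapsed / total

print(f"{non_collapsed}/{total} = {fraction:.1%} non-collapsed")
# → 3433/22833 = 15.0% non-collapsed
```

Whether that number means anything biologically is, of course, exactly the question being asked here, since it assumes every particle is correctly assigned and every class is pure.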
A colleague of mine was doing this sort of thing in cryo a couple of years ago. In the end, after a lot of experimenting, we concluded that while 2D classification is a useful first step, for any degree of confidence manually checking each particle was required, as junk would often still be hiding in the “good” classes. In RELION this wasn’t hard, but CryoSPARC doesn’t expose a “view particle stack” function with the same ease.
The trustworthiness of assignment to a specific class is too dependent on the parameters used to be stated with any significant degree of certainty without manually examining each particle. In the end, we always found some empty capsids mixed in with full ones, and vice versa, along with some junk scattered throughout. This was repeatable with two other samples (not AAV, but still viruses); with one of them, a PhD student spent quite a lot of time manually sorting through different states of viral assembly.
So while I’d use it as a guide, I’d not trust it without a manual check.
You mean as an effective equivalent to relion_displaying the extracted particles?
You raise a good point. I believe I’ve tried this in the past (not for full/empty capsids, though) but I find looking at a table of particles easier for a few reasons: I don’t have to switch between micrographs (just scroll down with the occasional click here and there); junk really jumps out at me when particles are in a big array; and I don’t get distracted by particle environment. Another thing is that I find the locator circles/boxes sometimes useful and sometimes distracting, so I tend to toggle them on and off a few times, but others might decide one is better than the other.
That said, I’m sure a similar thing could be done with cryosparc-tools, and csparc2star.py from pyem could be used to convert the stack and pass it to relion_display, but neither of those makes particle-by-particle selection for future processing quite as smooth as the RELION pipeline (for me, at least).
It’s also quite nice to be able to set alignments in the display view in RELION, although I think it makes loading the stack a lot slower?
Yes, you would export the particles, display them with or without alignments in RELION, and then re-import the selected particles with the appropriate micrograph links. So just the two extra steps. I don’t believe inverting the coordinates an extra time (or using cryosparc-tools) will be necessary.
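For anyone following along, the two extra steps can be sketched as command lines. This is only an illustrative sketch: the project/job paths below are hypothetical, and it assumes csparc2star.py (from pyem) and relion_display are available on your PATH. The helper just assembles the commands rather than running them, so you can adapt the paths first:

```python
import shlex

def export_display_commands(cs_particles, passthrough, star_out):
    """Build the shell commands for the cryoSPARC → RELION round trip.

    Paths are hypothetical placeholders; csparc2star.py (pyem) and
    relion_display must actually be installed when these are run.
    The final re-import back into cryoSPARC is done through an
    Import Particle Stack job pointed at the selected STAR file.
    """
    # Step 1: convert the cryoSPARC particle stack to a STAR file.
    convert = ["csparc2star.py", cs_particles, passthrough, star_out]
    # Step 2: inspect and select particles in RELION's display GUI.
    display = ["relion_display", "--i", star_out, "--gui"]
    return shlex.join(convert), shlex.join(display)

convert_cmd, display_cmd = export_display_commands(
    "J42/particles.cs", "J42/passthrough_particles.cs", "particles.star")
print(convert_cmd)
# → csparc2star.py J42/particles.cs J42/passthrough_particles.cs particles.star
print(display_cmd)
# → relion_display --i particles.star --gui
```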
Aye. Sorry, could have made that sentence clearer. I meant either csparc2star could be used, or cryosparc-tools (if more comfortable with Python than I am, and/or wanting a Jupyter notebook visualisation…)