Live vs "normal" CryoSPARC 2D classes?

Hi @team,

How (if at all) does the algorithm in the “streaming 2D” job differ from the one used in regular 2D classification in CryoSPARC?

In our hands, "normal" 2D classification gives dramatically better results in certain cases, particularly for small membrane proteins, even when the relevant 2D classification parameters, particle count, and number of classes are identical.

Live also seems to ignore certain custom parameters. For example, when I set the maximum alignment resolution to 20 Å in Live, the results are not what I expect - they look the same as without it - whereas in regular 2D we see much smoother classes, which can be useful in certain cases.

Cheers, Oli

Hey @olibclarke,

When streaming 2D class starts, it takes in at most (batch size per class * number of classes * 10) particles and runs a normal 2D class job with the given params (number of O-EM/full iterations, classification uncertainty factor, noise model annealing, etc.) to initialize the classes. Then, for the rest of the particles, it runs a full iteration over all particles seen so far every time at least (batch size per class * number of classes * 2) new particles have streamed in.
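To make the two thresholds above concrete, here's a minimal sketch (not CryoSPARC's actual code; function names and example parameter values are made up for illustration):

```python
def init_particle_cap(batch_size_per_class: int, num_classes: int) -> int:
    # Streaming 2D initializes its classes with at most this many particles,
    # run through a normal 2D classification job.
    return batch_size_per_class * num_classes * 10


def full_iteration_threshold(batch_size_per_class: int, num_classes: int) -> int:
    # After initialization, a full iteration over all particles seen so far
    # runs each time at least this many new particles have streamed in.
    return batch_size_per_class * num_classes * 2


# With, say, a batch size of 100 per class and 50 classes:
print(init_particle_cap(100, 50))         # 50000
print(full_iteration_threshold(100, 50))  # 10000
```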

So if there were only a few particles extracted when streaming 2D class starts, the classes and noise model become really over-fit - normal 2D class would have seen (batch size per class * number of classes * number of O-EM iterations) particles as it annealed the noise model, while streaming 2D class fully anneals the noise model over only the initial particles. Also, instead of running one big full iteration with the rest of the particles, streaming 2D class runs many progressively larger full iterations as particles are streamed in.
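The "progressively larger full iterations" behavior can be sketched like this (again a hypothetical simulation, not CryoSPARC internals; names and values are assumptions):

```python
def streaming_full_iterations(total_particles: int,
                              batch_per_class: int,
                              num_classes: int,
                              stream_chunk: int) -> list[int]:
    """Simulate which full iterations streaming 2D would run as particles
    stream in, returning the particle count used at each full iteration."""
    threshold = batch_per_class * num_classes * 2
    seen = 0       # total particles streamed in so far
    pending = 0    # new particles since the last full iteration
    iterations = []
    while seen < total_particles:
        chunk = min(stream_chunk, total_particles - seen)
        seen += chunk
        pending += chunk
        if pending >= threshold:
            # Full iteration runs over ALL particles seen so far,
            # not just the new batch - hence progressively larger passes.
            iterations.append(seen)
            pending = 0
    if pending:
        iterations.append(seen)  # final pass over any leftover particles
    return iterations


# 100k particles streaming in 5k chunks, threshold 100 * 50 * 2 = 10000:
print(streaming_full_iterations(100_000, 100, 50, 5_000))
```

By contrast, a normal 2D class job would do its full iterations once, over the complete particle set, after the O-EM iterations.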

We’ll change the 10 in the maximum particle count for initialization to the number of O-EM iterations, so that if you (re)start streaming 2D class with enough particles, it should behave the same way as normal 2D class for the O-EM iterations. Streaming 2D class also currently ignores maximum alignment resolution (as well as “use white noise model” and “skip CTF correction”), and we’ll implement those params as well :slight_smile: