The default O-EM batch size (per class) is 1000. I tested 10, 50, and 100 with all other settings left at their defaults, and all of them produce the same error message. I suspect this bug is related to the changes to the 3D classification job in the latest version.
This parameter is quite crucial for my current study, so I sincerely hope it can be fixed soon!
Traceback (most recent call last):
File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
File "cryosparc_master/cryosparc_compute/jobs/class3D/run.py", line 753, in cryosparc_compute.jobs.class3D.run.run_class_3D
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
Thanks for reporting this! We've been able to reproduce this and identify the root cause: we introduced some CUDA-related caching in CryoSPARC 4.2 to address a bug with CUDA 11.8. As part of this, we also set a fixed threshold in class3D so that noise is computed only on classes of 100 particles or more. If no class has at least 100 particles, the noise estimation fails. We will fix this in an upcoming update – in the meantime you should be able to use O-EM per-class batch sizes of 100 or more (and perhaps a bit smaller if you get lucky and there are no 'uniform' mini-batches).
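For anyone curious how a threshold like this produces that exact `TypeError`, here is a minimal, hypothetical Python sketch (not CryoSPARC source code; all names and the threshold structure are assumptions for illustration): when no class clears the minimum particle count, the estimator returns `None`, and a later division on that value raises the reported error.

```python
# Hypothetical sketch of the failure mode -- NOT actual CryoSPARC code.
# NOISE_MIN_PARTICLES stands in for the fixed threshold described above.
NOISE_MIN_PARTICLES = 100

def estimate_noise(class_sizes):
    """Return a mock noise estimate, or None if no class is large enough."""
    eligible = [n for n in class_sizes if n >= NOISE_MIN_PARTICLES]
    if not eligible:
        return None  # no class clears the threshold -> nothing to estimate
    return sum(eligible) / len(eligible)  # stand-in for the real estimate

def scale_noise(class_sizes, divisor):
    noise = estimate_noise(class_sizes)
    # With a per-class batch size of 10, every class can fall below the
    # threshold, so `noise` is None and this line raises:
    # TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
    return noise / divisor

# scale_noise([200, 150], 2) succeeds; scale_noise([10, 10, 10], 5) raises.
```

The fix the developers describe would presumably either lower or remove the fixed threshold, or handle the `None` case explicitly before the division.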
Can you confirm that setting the per-class batch size to 100 still produces this error?
Finally – I’m curious if you could tell us a bit more about the type of targets for which you’ve found such small batch sizes useful. Thanks!
Thank you so much for solving the problem! It works with a per-class batch size of 100, but I am wondering whether you could allow values as small as 10?
I am using I1 symmetry expansion and particle subtraction to focus on a small area outside the viral capsid, and using 3D classification to identify the position and pose of a putative 50 kDa molecule. Before upgrading to v4.2.1, per-class batch sizes as small as 10 gave me classes with significantly different particle counts and traces of density for the binding molecule. However, with a batch size as large as 100, particles are now distributed evenly across the n classes and the volumes are very similar.
Do you have any suggestions for this case? How can I make the classification more effective?
Ah, fascinating! Yes – we will update the logic to support class sizes below 100. One note: have you tried running 3D classification on non-subtracted particles with a focus mask instead? With a focus mask supplied, our 3D classification masking procedure effectively replicates particle subtraction, and it may work slightly better if your subtraction is not well aligned.
You can refer to this thread for some discussion on this: