Hi, I’m seeing what appear to be bogus corrupt-particle errors during NU-refine with v4.5.1, e.g.:
Traceback (most recent call last):
  File "/home/exx/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2294, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 134, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 135, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 1136, in cryosparc_master.cryosparc_compute.engine.engine.process.work
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 421, in cryosparc_master.cryosparc_compute.engine.engine.EngineThread.compute_error
ValueError: Detected NaN values in engine.compute_error. 33976503 NaNs in total, 209 particles with NaNs.
When I run Check For Corrupt Particles on the input stack, no NaN-containing particles are detected.
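For reference, one way to independently scan a stack for non-finite values outside of CryoSPARC is sketched below (a minimal sketch, assuming mrcfile is installed; the stack path is a placeholder):

```python
# Minimal sketch: scan a particle stack for NaN/Inf values with mrcfile + numpy.
# "particles.mrcs" is a placeholder path to one of the extracted particle stacks.
import numpy as np
import mrcfile

with mrcfile.mmap("particles.mrcs", permissive=True) as mrc:
    stack = mrc.data  # memory-mapped array, shape (n_particles, box, box)
    bad = [i for i in range(stack.shape[0]) if not np.isfinite(stack[i]).all()]

print(f"{len(bad)} particle image(s) contain non-finite values:", bad[:20])
```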
This only seems to happen with large particles (box size ~>900 px); the same particle stack downsampled to a smaller box size does not give any issues. When I run homogeneous reconstruction on the same stack, it doesn’t give any warnings, but the FSC & map slices look weird:
This looks similar to another bug we’re trying to track down related to per-particle minimization. Do these failing jobs have per-particle scale minimization on? If so, could you try to re-run them with it off and see if they complete without issue?
@olibclarke – a couple more questions/requests when you get the chance! Could you please try:
1. Randomly subsampling the set of 900 px particles to a (much) smaller set (maybe ~5K particles) and re-running the homogeneous reconstruction on this set (one way to do this outside the UI is sketched after this list).
2. Using a fixed mask (not from the same upstream NU-Refine) within homogeneous reconstruction.
Let us know if these two changes produce different results.
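If it’s easier to do the subsampling programmatically, here is a rough sketch that subsets an exported particles .cs file directly with numpy (this assumes the .cs file loads as a plain numpy structured array, which is how tools like pyem read it; the filenames are placeholders, and the subset would still need to be imported/connected back into CryoSPARC):

```python
# Sketch: randomly keep ~5,000 particles from an exported CryoSPARC .cs file.
# Assumes the .cs file is a numpy structured array (one row per particle);
# "particles.cs" and the output name are placeholders.
import numpy as np

particles = np.load("particles.cs")
rng = np.random.default_rng(0)
keep = rng.choice(len(particles), size=min(5000, len(particles)), replace=False)

with open("particles_subset_5k.cs", "wb") as f:
    np.save(f, particles[keep])

print(f"kept {len(keep)} of {len(particles)} particles")
```

The Particle Sets Tool job can also produce a random split of a particle set entirely within the UI, which may be simpler in practice.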
Hi @vperetroukhin, just to clarify: this initial error was obtained from NU-refine, not homogeneous reconstruction. Do these suggestions still pertain to that case? I will try a smaller subset for homogeneous reconstruction and see if the empty map slices go away.
Understood, yep. Based on the reconstruction plots you shared originally, we’re now suspicious that the half-maps may be fine but mask generation is somehow to blame (since the no-mask/spherical FSCs are not NaNs). Getting at this via homogeneous reconstruction is just faster, which is why I asked for that first.
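If you want to probe this directly, below is a rough sketch that computes an unmasked and a masked FSC straight from the two half-maps, to see which one goes non-finite. This assumes mrcfile is installed, the half-map and mask filenames are placeholders, and the FSC here is a generic shell-averaged version rather than CryoSPARC’s exact implementation (it will also need a lot of RAM at a ~900 px box):

```python
# Sketch: unmasked vs. masked FSC from two half-maps, to localise where NaNs appear.
# Filenames are placeholders for the half-maps and FSC mask written out by the job.
import numpy as np
import mrcfile

def load_map(path):
    with mrcfile.open(path, permissive=True) as m:
        return np.asarray(m.data, dtype=np.float32)

def fsc(vol_a, vol_b):
    """Shell-averaged Fourier shell correlation of two cubic volumes."""
    fa, fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
    n = vol_a.shape[0]
    freq = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = np.minimum((np.sqrt(kx**2 + ky**2 + kz**2) * n).astype(int), n // 2)
    num = np.bincount(shells.ravel(), weights=(fa * np.conj(fb)).real.ravel())
    d1 = np.bincount(shells.ravel(), weights=(np.abs(fa) ** 2).ravel())
    d2 = np.bincount(shells.ravel(), weights=(np.abs(fb) ** 2).ravel())
    return num / np.sqrt(d1 * d2)

half_a = load_map("half_map_A.mrc")
half_b = load_map("half_map_B.mrc")
mask = load_map("mask_fsc.mrc")

print("unmasked FSC all finite:", np.isfinite(fsc(half_a, half_b)).all())
print("masked   FSC all finite:", np.isfinite(fsc(half_a * mask, half_b * mask)).all())
```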
I think you might be right that mask generation is the issue - the FSC mask generated during homogeneous reconstruction looks OK visually, but the header shows some quite negative values:
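For anyone checking their own masks, a minimal sketch of comparing the header statistics against the actual voxel range is below (assuming mrcfile; the mask filename is a placeholder). A soft FSC mask should lie in [0, 1], so genuinely negative voxels, rather than just a stale header, would point at mask generation:

```python
# Sketch: compare an MRC mask's header statistics with its actual data range.
# "mask_fsc.mrc" is a placeholder for the FSC mask written out by the job.
import numpy as np
import mrcfile

with mrcfile.open("mask_fsc.mrc", permissive=True) as mrc:
    hdr = mrc.header
    data = np.asarray(mrc.data, dtype=np.float32)
    print("header dmin/dmax/dmean :", hdr.dmin, hdr.dmax, hdr.dmean)
    print("data   min/max         :", data.min(), data.max())
    print("non-finite voxels      :", np.count_nonzero(~np.isfinite(data)))
```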
Has this been resolved? I have a similar issue with v4.6.0 when doing local refinement. My box size is much smaller (384 px), but I have minimize over per-particle scale enabled.
Traceback (most recent call last):
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2304, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 2730, in cryosparc_master.cryosparc_compute.engine.newengine.process.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 2809, in cryosparc_master.cryosparc_compute.engine.newengine.process.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 1534, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.compute_error
ValueError: Detected NaN values in newengine.compute_error. 3314 NaNs in total, 1 particles with NaNs.