ValueError: total size of new array must be unchanged in 2D Classification

Hi all,
(Running cryoSPARC v2.12.2)
We're getting a "ValueError: total size of new array must be unchanged" error.
In this case it fails after the third iteration:
Iteration 3

  -- Effective number of classes per image: min 6.42 | 25-pct 44.63 | median 46.88 | 75-pct 48.16 | max 49.71 

  -- Probability of best class per image: min 0.01 | 25-pct 0.02 | median 0.03 | 75-pct 0.03 | max 0.37

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1489, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 991, in cryosparc2_compute.engine.engine.process.work
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 90, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
  File "cryosparc2_compute/particles.py", line 107, in get_original_real_data
    return self.blob.view().copy()
  File "cryosparc2_compute/blobio/mrc.py", line 101, in view
    return self.get()
  File "cryosparc2_compute/blobio/mrc.py", line 98, in get
    data = n.fromfile(file_obj, dtype=self.dtype, count= n.prod(self.shape)).reshape(self.shape)
ValueError: total size of new array must be unchanged
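
For context, numpy raises this error when fromfile reads fewer elements than the reshape target requires, for example when a particle stack on disk is shorter than its metadata claims. Below is a minimal sketch reproducing the same failure mode outside cryoSPARC; the box size, element count, and temporary file are made up for illustration:

import os
import tempfile
import numpy as np

expected_shape = (256, 256)  # hypothetical box size

# Write a stack holding fewer float32 values than the metadata claims,
# e.g. because it was extracted with a different box size or is truncated.
fd, path = tempfile.mkstemp(suffix=".mrc_data")
os.close(fd)
np.zeros(200 * 256, dtype=np.float32).tofile(path)

with open(path, "rb") as file_obj:
    # fromfile silently returns fewer elements than `count` when the file is short...
    data = np.fromfile(file_obj, dtype=np.float32, count=np.prod(expected_shape))

# ...and the subsequent reshape then raises the ValueError seen above
# ("total size of new array must be unchanged").
data = data.reshape(expected_shape)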

I think the particles were loaded properly, and the job ran through the first two iterations with no issues.
Can someone point us in the right direction to troubleshoot this?

Hi @jafar

What job was this?
Did you use any non-default parameters?
What box size did you use to extract these particles?
Did you downsample these particles?

Hi @jafar,

Are you still having this type of issue in any of your projects? We have potentially narrowed the bug down to the case where micrographs were imported from multiple different directories (e.g. you ran two different import jobs and then combined the micrographs for processing, including particle extraction). Was this the case when you were seeing these errors?

Please let us know.
Thanks!

Hi @stephan and @apunjani ,

I was getting this same 'ValueError' when trying to run any type of job after re-extracting particles (to change the box size). Yes, as you said, the micrographs were imported from two different directories. The first particle extractions were done separately (i.e. two jobs, one for each set of micrographs), then the particles were combined for 2D and 3D classification jobs, and then re-extracted together; it was after this that I started getting the 'ValueError'. The problem was fixed by repeating the particle re-extraction in two separate jobs, one for each set of micrographs (coming from the different directories), and then recombining the particle outputs (i.e. by connecting them as inputs to a 3D refinement job). Hope this helps narrow down what the problem is.
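
In case it helps anyone hitting this, one rough sanity check (not part of cryoSPARC; the file path, particle count, and box size below are hypothetical) is to compare the on-disk size of each extracted stack against the expected particle count and box size before combining stacks from different extraction jobs:

import os
import numpy as np

def stack_size_matches(path, n_particles, box_size, dtype=np.float32, header_bytes=1024):
    # A plain MRC 2014 header is 1024 bytes; stacks with extended headers
    # will be larger, so treat small discrepancies with caution.
    expected = header_bytes + n_particles * box_size * box_size * np.dtype(dtype).itemsize
    actual = os.path.getsize(path)
    return actual == expected, actual, expected

# Hypothetical usage: check one stack from each extraction job before
# connecting the combined particles to a classification or refinement job.
ok, actual, expected = stack_size_matches(
    "P3/J42/extract/particles_stack_001.mrc", n_particles=5000, box_size=256)
print("OK" if ok else "mismatch: %d bytes on disk vs %d expected" % (actual, expected))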

Thanks @bonils for reporting this - it confirms our understanding of the issue!

Hi Everyone,

One potential cause for this error is a bug in cryoSPARC, and it has been fixed as of v2.15.0: