2D classification: ValueError: cannot reshape array of size 0 into shape (300,300)

Dear cryoSPARC community.

I am trying to run a 2D classification of externally extracted particles. The stack contains almost 900,000 particles of 300×300 pixels. When I try a classification of the imported particle stack, I receive the following error:

[CPU: 5.01 GB]   Traceback (most recent call last):
  File "/user/cryosparcembl/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 130, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 1027, in cryosparc_compute.engine.engine.process.work
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 87, in cryosparc_compute.engine.engine.EngineThread.load_image_data_gpu
  File "/user/cryosparcembl/cryosparc2_worker/cryosparc_compute/particles.py", line 113, in get_original_real_data
    return self.blob.view().copy()
  File "/user/cryosparcembl/cryosparc2_worker/cryosparc_compute/blobio/mrc.py", line 124, in view
    return self.get()
  File "/user/cryosparcembl/cryosparc2_worker/cryosparc_compute/blobio/mrc.py", line 121, in get
    data = n.fromfile(file_obj, dtype=self.dtype, count= n.prod(self.shape)).reshape(self.shape)
ValueError: cannot reshape array of size 0 into shape (300,300)

I split the dataset into 3 fractions of 300,000 particles within cryoSPARC. For the first two fractions the classification runs, but for the last one it still fails with the same error. I also tried downsampling, but that job fails with the same problem.

Can someone explain what might be wrong? Is there a problem with the imported file, maybe with a specific image within the stack?

Thanks for your help and kind greetings.

Mica

Hi @mica84,
This error indicates that the particle stack file on disk does not contain as many particles as the .star file that you imported claims. The job tries to read a particle image at an index given by the imported .star file, but the stack file ends before that offset, so the read returns zero bytes instead of a 300×300 array of float values — hence the "cannot reshape array of size 0" error.
Can you compare the number of particles listed in the imported .star file against the number of images actually stored in the stack file?
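A quick way to do that comparison is to read the image count from the MRC stack header (the third 32-bit integer, `nz`, in the standard 1024-byte header) and count the data rows in the particles loop of the .star file. Here is a minimal sketch — it assumes a single-stack import and a simple RELION-style .star layout, and the file paths are placeholders you would replace with your own:

```python
import struct

def mrc_frame_count(path):
    """Return (nx, ny, nz) from the 1024-byte MRC header.

    For a particle stack, nz is the number of images actually
    stored in the file.
    """
    with open(path, "rb") as f:
        nx, ny, nz = struct.unpack("<3i", f.read(12))
    return nx, ny, nz

def star_particle_count(path):
    """Count data rows in the last loop_ block of a .star file.

    This is a minimal parser: it skips column labels (lines starting
    with '_') and blank lines, and assumes the particles table is the
    last loop in the file, as RELION writes it.
    """
    count = 0
    in_loop = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line == "loop_":
                in_loop = True
                count = 0  # restart if a new loop begins (e.g. data_optics)
            elif in_loop:
                if not line or line.startswith("_"):
                    continue
                count += 1
    return count

def check_consistency(stack_path, star_path):
    """Report how many images the stack holds vs. how many the star file indexes."""
    _, _, nz = mrc_frame_count(stack_path)
    n_star = star_particle_count(star_path)
    return nz, n_star
```

If `nz` comes out smaller than the .star row count, the import is pointing past the end of the file, which would produce exactly the zero-size read you are seeing — for example something like `check_consistency("particles.mrcs", "particles.star")` (hypothetical paths) returning `(250000, 300000)`. Note that if your particles are spread across many per-micrograph stacks, you would need to run this per stack file referenced in the .star file.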