Total size of new array must be unchanged


#1

Hi,

I am trying to run 2D classification and am encountering the following error. Has anyone had a similar problem?

Thanks,
Elena

Using random seed of 558330554

Loading a ParticleStack with 1553134 items…

SSD cache : cache successfuly synced in_use

SSD cache : cache successfuly synced, found 539185.52MB of files on SSD.

SSD cache : cache successfuly requested to check 2533 files.

SSD cache : cache requires 319.04MB more on the SSD for files to be downloaded.

SSD cache : cache has enough available space.

Transferring J37/imported/17sep06c_G9-3_00039gr_00002sq_v02_00005hl4_00004ed-a-DW_particle.mrcs (155MB)
Complete : 164MB
Total : 412619MB
Speed : 500.85MB/s

SSD cache : complete, all requested files are available on SSD.

Done.

Windowing particles

Done.

Using 100 classes.

Computing 2D class averages:

Volume Size: 128 (voxel size 2.37A)

Zeropadded Volume Size: 256

Data Size: 264 (pixel size 1.15A)

Using Resolution: 6.00A (50.0 radius)

Windowing only corners of 2D classes at each iteration.

Using random seed for initialization of 1121242163

Done in 0.819s.

Start of Iteration 0

-- DEV 0 THR 0 NUM 500 TOTAL 8.5584180 ELAPSED 9.4281439 --

Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 738, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 92, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 93, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 980, in cryosparc2_compute.engine.engine.process.work
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 88, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
  File "cryosparc2_compute/particles.py", line 109, in get_original_real_data
    return self.blob.view().copy() # TODO!!! All the particle code assumes x is slow axis! WRONG!
  File "cryosparc2_compute/blobio/mrc.py", line 101, in view
    return self.get()
  File "cryosparc2_compute/blobio/mrc.py", line 98, in get
    data = n.fromfile(file_obj, dtype=self.dtype, count=n.prod(self.shape)).reshape(self.shape)
ValueError: total size of new array must be unchanged


#2

Hi Elena,

This error may indicate that there is something unexpected with the .mrc file containing the raw particle images. Where did your particles come from? Were they outputs of a different job?
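For context, the bottom of the traceback is numpy reading the raw particle data and reshaping it to the shape the header declares. fromfile silently returns fewer items when the file is too short, and it is the subsequent reshape that fails. A minimal sketch of that failure mode (hypothetical sizes, not from your data):

import tempfile

import numpy as np

# The shape the header promises vs. what is actually on disk:
# here the file is one whole 128x128 image short.
shape = (10, 128, 128)
short_stack = np.zeros((9, 128, 128), dtype=np.float32)

with tempfile.NamedTemporaryFile(suffix=".dat") as f:
    short_stack.tofile(f.name)
    # fromfile reads as many items as it can find, without erroring...
    data = np.fromfile(f.name, dtype=np.float32, count=int(np.prod(shape)))
    # ...and reshape then raises ValueError (older numpy words it exactly
    # as "total size of new array must be unchanged").
    data.reshape(shape)

So a truncated or otherwise inconsistent .mrcs file would produce exactly this error.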

Best,
Ali H.


#3

Hi Ali,

Thank you for your response.
My particles came from collaborators; they were generated in cryoSPARC (presumably v1).
No error is given when I import the particle stacks into cryoSPARC v2. I examined the box size of each stack and did not find any problems (a sketch of the check is below); beyond that, I do not know how else to quickly check these stacks.
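This is roughly the check I ran, assuming standard MRC2014 headers (check_stacks.py and check_stack are illustrative names): read nx, ny, nz and the data mode from each stack's 1024-byte header, and compare the file size the header implies against the file size on disk.

import os
import struct
import sys

# Bytes per voxel for the common MRC data modes.
MODE_BYTES = {0: 1, 1: 2, 2: 4, 6: 2}

def check_stack(path):
    with open(path, "rb") as f:
        header = f.read(1024)
    nx, ny, nz, mode = struct.unpack("<4i", header[:16])
    nsymbt = struct.unpack("<i", header[92:96])[0]  # extended header bytes
    expected = 1024 + nsymbt + nx * ny * nz * MODE_BYTES[mode]
    actual = os.path.getsize(path)
    flag = "OK" if actual >= expected else "SHORT FILE"
    print("%s: box %dx%d, %d particles, mode %d, expected %d bytes, found %d -> %s"
          % (path, nx, ny, nz, mode, expected, actual, flag))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check_stack(path)

(Invoked as python check_stacks.py followed by the paths to the imported .mrcs files.)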

Many thanks,
Elena


#4

Has anyone been able to resolve this? I am getting the same error with 2D classification.

Sometimes the error occurs on iteration 3, sometimes it gets all the way to iteration 10, and sometimes it fails right away, so I don't see how it could be an issue with the raw input files.

Thanks!


#5

I was having the same issue today on v2.11 and was able to resolve it by using the exposures blob from the Patch CTF Estimation job rather than the exposures from the Extract from Micrographs job.