I am trying to run 2D classification and am encountering the following error. Has anyone had a similar problem?
Using random seed of 558330554
Loading a ParticleStack with 1553134 items...
SSD cache : cache successfuly synced in_use
SSD cache : cache successfuly synced, found 539185.52MB of files on SSD.
SSD cache : cache successfuly requested to check 2533 files.
SSD cache : cache requires 319.04MB more on the SSD for files to be downloaded.
SSD cache : cache has enough available space.
Transferring J37/imported/17sep06c_G9-3_00039gr_00002sq_v02_00005hl4_00004ed-a-DW_particle.mrcs (155MB)
Complete : 164MB
Total : 412619MB
Speed : 500.85MB/s
SSD cache : complete, all requested files are available on SSD.
Using 100 classes.
Computing 2D class averages:
Volume Size: 128 (voxel size 2.37A)
Zeropadded Volume Size: 256
Data Size: 264 (pixel size 1.15A)
Using Resolution: 6.00A (50.0 radius)
Windowing only corners of 2D classes at each iteration.
Using random seed for initialization of 1121242163
Done in 0.819s.
Start of Iteration 0
-- DEV 0 THR 0 NUM 500 TOTAL 8.5584180 ELAPSED 9.4281439 --
Traceback (most recent call last):
File "cryosparc2_compute/jobs/runcommon.py", line 738, in run_with_except_hook
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 92, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 93, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 980, in cryosparc2_compute.engine.engine.process.work
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 88, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
File "cryosparc2_compute/particles.py", line 109, in get_original_real_data
return self.blob.view().copy() # TODO!!!! All the particle code assumes x is slow axis! WRONG!
File "cryosparc2_compute/blobio/mrc.py", line 101, in view
File "cryosparc2_compute/blobio/mrc.py", line 98, in get
data = n.fromfile(file_obj, dtype=self.dtype, count= n.prod(self.shape)).reshape(self.shape)
ValueError: total size of new array must be unchanged
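For context, this `ValueError` comes from the `fromfile(...).reshape(...)` call in the traceback: `numpy.fromfile` silently returns only as many items as the file actually contains, and the subsequent `reshape` to the expected stack shape then fails. A minimal illustrative sketch (not the cryoSPARC code; the shape and file here are made up to simulate a truncated particle stack):

```python
import os
import tempfile

import numpy as np

# Hypothetical stack geometry: 3 images of 4x4 float32 are expected,
# but the file on disk holds only 2, simulating a truncated copy.
expected_shape = (3, 4, 4)
fd, path = tempfile.mkstemp(suffix=".bin")
os.close(fd)
np.zeros((2, 4, 4), dtype=np.float32).tofile(path)

with open(path, "rb") as f:
    # fromfile returns however many items the file actually holds (32 here),
    # even though count asks for 48 -- no exception is raised at this point.
    data = np.fromfile(f, dtype=np.float32, count=np.prod(expected_shape))

try:
    data.reshape(expected_shape)  # the reshape to the expected size fails
except ValueError as err:
    print("reshape failed:", err)
finally:
    os.remove(path)
```

The exact message text varies between NumPy versions ("total size of new array must be unchanged" in older releases, "cannot reshape array of size ..." in newer ones), but the cause is the same: fewer bytes on disk than the header/metadata promised.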
Thank you for your response.
My particles came from collaborators / from cryosparc (presumably v1).
No errors are reported when I import the particle stacks into cryoSPARC v2. I examined the box size for each stack and did not find any problems, but I do not know how else to quickly check these stacks.
You are right, sorry. It must have been an Extract From Micrographs job. I think the blob output from the Inspect Particle Picks or Template Picker job was causing problems, but using the outputs from the CTF estimation job fixed the issue. I can’t go back and verify, since I have since had to start over with a fresh database.
Are you still seeing this type of issue in any of your projects? We have potentially narrowed the bug down to the case where micrographs were imported from multiple directories (e.g. you ran two separate import jobs and then combined the micrographs for processing, including particle extraction). Was this the case when you were seeing these errors?
I have not seen the error recently. My micrographs were imported from multiple directories, though I can’t say whether that was done with a wildcard in the file name field or with two separate import jobs, as you describe.
I am getting this error when running 2D classification on WARP output. The difference between the import for this project, which gives the error, and other projects in which I did not get the error is that here the WARP output (the goodparticles_ect.star file and the particles folder) was copied from an external drive onto a server, and the server files/directory were used as the import source. In the past, when I did not see this error, I had imported the .star file and particles directly from the WARP output folder.
Given that scenario, the most likely explanation is that an .mrc file was corrupted or truncated during the transfer, so its length no longer matches the list of particles in the .star file.
Is there a way you can check the headers and lengths of the .mrc files, or compare them against the originals in the WARP output directory?
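One way to do that check is to compare each file's actual size on disk against the size implied by its header. A sketch along these lines, based on the MRC2014 header layout (it assumes little-endian files, the common case, and `MODE_BYTES` only covers the usual data modes; the function name is ours, not a cryoSPARC or WARP API):

```python
import os
import struct

# Bytes per voxel for common MRC data modes (MRC2014 spec):
# 0 = int8, 1 = int16, 2 = float32, 6 = uint16
MODE_BYTES = {0: 1, 1: 2, 2: 4, 6: 2}

def check_mrc(path):
    """Return (expected_size, actual_size, ok) for an MRC/MRCS file.

    expected_size is derived from the header: a 1024-byte main header,
    plus any extended header (NSYMBT), plus nx*ny*nz voxels of the
    stated mode. A truncated transfer shows up as actual < expected.
    """
    with open(path, "rb") as f:
        header = f.read(1024)
    nx, ny, nz, mode = struct.unpack("<4i", header[:16])  # header words 1-4
    nsymbt = struct.unpack("<i", header[92:96])[0]        # word 24: extended header bytes
    expected = 1024 + nsymbt + nx * ny * nz * MODE_BYTES[mode]
    actual = os.path.getsize(path)
    return expected, actual, expected == actual
```

Running this over every .mrcs file referenced by the .star file and flagging any mismatch should quickly identify stacks that were damaged during the copy, which could then be re-transferred from the originals.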