Mrc_datatype_to_dtype error

Hi Developers,

I am trying to carry out an NU-refinement with per-particle scale enabled, using the particles from a previous NU-refinement job, and I am running into the following error:

#############
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 125, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
  File "/media/raid/cryosparc/cryosparc_worker/cryosparc_compute/particles.py", line 126, in read_blobs
    dtypes[rel_path] = mrc.mrc_datatype_to_dtype(hdr['datatype'])
  File "/media/raid/cryosparc/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 65, in mrc_datatype_to_dtype
    assert False, 'Unsupported MRC datatype: {0}'.format(datatype)
AssertionError: Unsupported MRC datatype: 857743392
#############

I am not caching the particles. I tried the same set of particles from a different job and ran into the same error.

Please suggest.

Thanks

Abhilash.
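For context on the assertion above: mrc_datatype_to_dtype appears to map the MRC header's mode (datatype) word to a NumPy dtype and only accepts the standard MRC mode codes, so a value like 857743392 means the header word itself is unexpected. Below is a minimal sketch, assuming a standard 1024-byte little-endian MRC header and a placeholder path, for reading that word directly from one of the imported particle stacks:

import struct

# Placeholder path: point this at one of the imported .mrcs particle stacks.
path = "/path/to/import_job/imported/particles.mrcs"

with open(path, "rb") as f:
    header = f.read(1024)  # the standard MRC header is 1024 bytes

# Header words 1-4 are nx, ny, nz and mode (the datatype code).
nx, ny, nz, mode = struct.unpack("<4i", header[:16])
print("nx, ny, nz:", nx, ny, nz)
print("mode (datatype):", mode)  # standard MRC modes include 0, 1, 2, 4 and 6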

@Abhilash Which job types were used for

  • motion correction
  • extraction of the input particles?

What happens when you clone the previously successful NU-refinement job and run the cloned job?

@wtempel Motion correction and extraction were carried out in Relion, and the particles were imported into Cryosparc.

I should mention that I have used this particle set to run several different jobs in Cryosparc, including an NU refinement. I started running into the error when I tried to run a second NU-refinement. Cloning the job does not help either.

Thanks.

To confirm: You cloned the NU-refinement job that previously ran successfully with the particles in question, and the clone does not run?
For the clone of the previously successful NU-refinement job, do you now get the same
Unsupported MRC datatype: 857743392 error?
What

  • is the current CryoSPARC version?
  • was the CryoSPARC version of the “previous”, successful NU-refinement job?

What were the successful job types and CryoSPARC version?

@wtempel
Yes, I cloned the NU-refinement job and it doesn’t run now. All the jobs were run in the same workspace of Cryosparc v4.2.1.

I was able to successfully run Homogeneous refinement, NU refinement and local refinement previously with the same set of particles.

Thanks

Are the actual mrc files to which the symbolic links inside the imported/ subdirectory of the Import Particles job directory refer still available on the system?

cd /path/to/particle_import_jobdir/imported/
stat $(ls *.mrc* | tail -n 1)
stat -L $(ls *.mrc* | tail -n 1)
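The stat commands above only inspect the last stack in the directory. If it helps, here is a small sketch (assuming the same imported/ layout, with a placeholder path) that lists any symlinks whose targets are no longer reachable:

import os

# Placeholder path: the imported/ subdirectory of the Import Particles job.
import_dir = "/path/to/particle_import_jobdir/imported"

broken = []
for name in sorted(os.listdir(import_dir)):
    if ".mrc" not in name:
        continue
    link = os.path.join(import_dir, name)
    # os.path.exists() follows symlinks, so it returns False when the target is missing.
    if os.path.islink(link) and not os.path.exists(link):
        broken.append(link)

print(len(broken), "broken symlink(s)")
for link in broken:
    print(link, "->", os.readlink(link))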

Hi @wtempel

Yes, I think the actual files are still available. Below is the output of your commands:

=> stat $(ls *.mrc* | tail -n 1)
File: 018446612927632845412_mouse_mc_10194_Mar30.mrcs → /group/jalettsgrp/EM_processing/apadavannil/03270425_AP_MouseMC/Extract/job173/Micrographs/0327-Data/mouse_mc_10194_Mar30.mrcs
Size: 126 Blocks: 1 IO Block: 1048576 symbolic link
Device: 4dh/77d Inode: 4263488 Links: 1
Access: (0777/lrwxrwxrwx) Uid: (4000000010/cryosparc-jalettsgrp-user) Gid: (101523255/jalettsgrp)
Access: 2023-06-11 10:15:13.567892837 -0700
Modify: 2023-06-11 10:15:13.567892837 -0700
Change: 2023-06-11 10:15:13.567892837 -0700
Birth: -

=> stat -L $(ls *.mrc* | tail -n 1)
File: 018446612927632845412_mouse_mc_10194_Mar30.mrcs
Size: 7373824 Blocks: 14439 IO Block: 1048576 regular file
Device: 4dh/77d Inode: 4197189 Links: 1
Access: (0664/-rw-rw-r--) Uid: (1680718/apadavan) Gid: (101523255/jalettsgrp)
Access: 2023-06-10 22:33:02.648294601 -0700
Modify: 2023-06-10 22:33:04.616311607 -0700
Change: 2023-06-10 22:33:04.616311607 -0700
Birth: -

Please suggest.

Thanks

Abhilash.

You could connect the Import Particles output to a Check For Corrupt Particles job, with Check for NaN values enabled…

Hi @wtempel

I ran the Check For Corrupt Particles job as suggested. The job crashed with the following error:

#########

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "/media/raid/cryosparc/cryosparc_worker/cryosparc_compute/jobs/utilities/run_check_corrupt_files.py", line 41, in run_check_particles
    header_prefetch, *_ = prefetch.synchronous_native_read(filename=abspath, header_only=True)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 68, in cryosparc_compute.blobio.prefetch.synchronous_native_read
RuntimeError: Error ocurred (Invalid argument) at line 569 in dtype_lookup

Unrecognized data type.

IO request details:
filename: /group/jalettsgrp/EM_processing/apadavannil/20230425_AP_MouseMC/Cryosparc_processing/CS-mouse-mc-20230425-1/J155/imported/009462164134957918822_mousemc_6144_Apr27.mrcs
filetype: 0
header_only: 1
idx_start: 0
idx_limit: -1
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer: (nil)
nx, ny, nz: 0 0 0
dtype: 0
total_time: -1.000000

#########

Please suggest.

Thanks

Abhilash.

Hi Abhilash,

Could you please post the output of the following command? My suspicion is that the offending file is either corrupt, or isn’t actually in mrc format.

xxd -g 4 /group/jalettsgrp/EM_processing/apadavannil/20230425_AP_MouseMC/Cryosparc_processing/CS-mouse-mc-20230425-1/J155/imported/009462164134957918822_mousemc_6144_Apr27.mrcs | head -65

Thanks,
Harris
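As a side note that may support the suspicion above: the value from the first traceback, 857743392, is 0x33202020 in hexadecimal, so the four bytes read as the mode field are printable ASCII (three spaces followed by a '3' in on-disk order), which is what one would expect if the file contains text rather than MRC binary. Here is a minimal sketch (placeholder path) to check the same thing alongside the xxd output:

import struct

# Placeholder path: substitute the .mrcs file named in the error message.
path = "/path/to/imported/suspect_particles.mrcs"

with open(path, "rb") as f:
    header = f.read(1024)  # standard MRC header length

nx, ny, nz, mode = struct.unpack("<4i", header[:16])
print("nx, ny, nz, mode:", nx, ny, nz, mode)

# A text file mistaken for MRC shows up as mostly printable ASCII in the "header".
printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in header[:64])
print(printable, "of the first 64 bytes are printable ASCII")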