Recently I used v2.15 to perform ab initio reconstruction with 1.7 million particles at a box size of 300, but cryoSPARC always failed in the middle of the job with “Job process terminated abnormally”. My workstation has 2 TB of free hard drive space and 256 GB of RAM, and v2.15 could run the same job normally a week ago, so I don’t think it failed due to lack of space. This also happened with Heterogeneous Refinement.

I decided to update cryoSPARC with the override flag. When I ran cryosparcm stop, it said cryoSPARC was not running, which was not true; I then killed all cryoSPARC processes and updated with override. However, the “updated” v2.15 gave this error in ab initio:

Traceback (most recent call last):
  File "cryosparc2_worker/cryosparc2_compute/run.py", line 85, in cryosparc2_compute.run.main
  File "cryosparc2_worker/cryosparc2_compute/jobs/hetero_refine/run.py", line 83, in cryosparc2_compute.jobs.hetero_refine.run.run_hetero_refine
  File "cryosparc2_compute/particles.py", line 64, in read_blobs
    u_blob_paths = [os.path.join(proj_dir_abs, p) for p in u_rel_paths]
  File "/home/local/CORNELL/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/posixpath.py", line 73, in join
    path += '/' + b
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 69: ordinal not in range(128)

By the way, 2D classification ran normally in this installation.

Then we downgraded cryoSPARC to v2.14.2, and ab initio was back to normal.

Has anyone come across the same issue? Any suggestions are appreciated.


Hi @Feng10, does your instance have an SSD enabled? If so, does anything change if you use the “Cache particle images on SSD” option on the Heterogeneous Refinement job?

Can you also tell me the full path to where your project directory is located? You can find it by closing the workspace you’re in and checking the project details in the sidebar:

Hi @nfrasser,

Thanks for your reply.

My workstation has two drives, one SSD and one HDD. Some of my projects are on the SSD and some on the HDD; the jobs with the error are on the HDD, so I always turned off “Cache particle images on SSD”, since otherwise the project wouldn’t run at all (am I right?).

The project path is /notbackedup-local2/cryosparc/P22


cryoSPARC supports storing your project on an HDD while using a separate SSD to cache particle stacks; you would have configured this during your initial installation. If you did not set up an SSD cache, then yes, you would have to keep that option off.

Based on this information, it looks like there are some special characters in the file names of the extracted particles.
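For context on why that produces this exact error: under Python 2 (which cryoSPARC v2.x runs on), os.path.join implicitly decodes byte-string path components as ASCII when they are combined with a unicode string, so any filename containing a byte above 0x7F raises the UnicodeDecodeError you saw. A minimal sketch of that failure mode, using a made-up filename rather than one of your actual files:

```python
# Hypothetical particle filename containing the non-ASCII byte 0xe8,
# as a raw byte string (how Python 2 reads paths from disk).
raw_rel_path = b"extract/particle_\xe8_0001.mrc"

try:
    # This explicit decode is what Python 2's os.path.join does
    # implicitly when mixing byte strings with unicode strings.
    raw_rel_path.decode("ascii")
except UnicodeDecodeError as err:
    print(err)  # 'ascii' codec can't decode byte 0xe8 ...
```

Renaming the offending files (or re-extracting with ASCII-only names) avoids the implicit decode entirely.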

To help me troubleshoot further, can you private-message me the cryoSPARC particle blobs dataset file? You can get it from the outputs of the “Extract Particles” or “Import Particles” job you used to get the particles:

This file only includes information about where the particles are stored on disk, not any raw data. It may be a few hundred MB in size, so feel free to use a file-sharing service like Dropbox or Firefox Send if you aren’t able to upload it here.
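If you'd like to check the file yourself first: to my understanding, cryoSPARC's .cs dataset files use NumPy's binary array format with the particle paths in a field named blob/path, so you can scan them for non-ASCII bytes directly. The sketch below fabricates a tiny stand-in file (the field name, dtype, and filenames are illustrative assumptions, not taken from your project) to show the check:

```python
import numpy as np
import os
import tempfile

# Fabricate a small stand-in for a .cs particle blobs dataset: a
# structured array with a 'blob/path' byte-string field, one clean
# path and one containing the non-ASCII byte 0xe8.
dtype = np.dtype([("blob/path", "S80")])
data = np.array(
    [(b"J1/extract/particle_0001.mrc",),
     (b"J1/extract/particle_\xe8_0002.mrc",)],
    dtype=dtype,
)

path = os.path.join(tempfile.mkdtemp(), "particles_selected.cs")
np.save(path, data)  # np.save appends .npy when the name lacks it

# Load the array back and flag any path with a byte above 0x7F.
loaded = np.load(path + ".npy")
bad = [p for p in loaded["blob/path"] if any(b > 127 for b in p)]
print(bad)  # lists only the path containing byte 0xe8
```

Any paths this prints are the ones to rename before re-running the job.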