Recently I used v2.15 to run an ab initio reconstruction with 1.7 million particles and a box size of 300, but cryoSPARC always failed partway through the job with "Job process terminated abnormally". My workstation has 2 TB of free hard drive space and 256 GB of RAM, and v2.15 ran the same job normally a week ago, so I don't think it failed due to lack of space. The same thing happened with Heterogeneous Refinement.
I decided to update cryoSPARC with the override flag. When I ran cryosparcm stop, it said cryoSPARC was not running, which was not true; I then killed all cryoSPARC processes and updated with override. However, the "updated" v2.15 gave this error in ab initio:
Traceback (most recent call last):
  File "cryosparc2_worker/cryosparc2_compute/run.py", line 85, in cryosparc2_compute.run.main
  File "cryosparc2_worker/cryosparc2_compute/jobs/hetero_refine/run.py", line 83, in cryosparc2_compute.jobs.hetero_refine.run.run_hetero_refine
  File "cryosparc2_compute/particles.py", line 64, in read_blobs
    u_blob_paths = [os.path.join(proj_dir_abs, p) for p in u_rel_paths]
  File "/home/local/CORNELL/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/posixpath.py", line 73, in join
    path += '/' + b
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 69: ordinal not in range(128)
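If it helps anyone debugging this: my reading of the traceback is that one of the particle blob paths contains a non-ASCII byte (0xe8), and Python 2's os.path.join implicitly decodes byte strings as ASCII when mixing them with unicode strings, which is what blows up. A minimal sketch of that root cause (the file name below is hypothetical, just something containing byte 0xe8):

```python
# Hypothetical relative path containing the non-ASCII byte 0xe8
# (e.g. from an accented character like 'e' with a grave accent).
rel_path = b"Particles/donn\xe8es_particles.mrc"

# Python 2's posixpath.join effectively does an implicit ASCII decode
# when concatenating a unicode project dir with a byte-string path;
# the same failure can be reproduced with an explicit ASCII decode:
try:
    rel_path.decode("ascii")
except UnicodeDecodeError as e:
    print("fails like the traceback:", e)
```

So (assuming this reading is right) the trigger may be a dataset path or file name with an accented character somewhere in the project, rather than the update itself.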
By the way, 2D classification was normal in this installation.
We then downgraded cryoSPARC to v2.14.2, and ab initio is back to normal.
Has anyone come across the same issue? Any suggestions are appreciated.