Input/output error

Hi CryoSPARC Team, I am trying to run a Symmetry Expansion job. The job completes a few steps and then fails with an input/output error. Here is the message I am getting in the log:

```
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 129, in cryosparc_master.cryosparc_compute.run.main
  File "/anfhome/apps/cryosparc/cryosparc_worker/cryosparc_compute/jobs/utilities/run_symexpand.py", line 126, in run
    outpath_rel = rc.write_all_outputs(dset, 'particles', 'particles', 'particle', 'particles_expanded.cs')
  File "/anfhome/apps/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1182, in write_all_outputs
    dset.save(outpath_abs)
  File "/anfhome/apps/cryosparc/cryosparc_worker/cryosparc_tools/cryosparc/dataset.py", line 769, in save
    with bopen(file, "wb") as f:
  File "/anfhome/apps/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/anfhome/apps/cryosparc/cryosparc_worker/cryosparc_tools/cryosparc/util.py", line 224, in bopen
    with open(file, mode) as f:
OSError: [Errno 5] Input/output error
```

The error indicates a lower-level failure during an attempt to write job output to the job directory.

What are the outputs of these commands on the CryoSPARC worker computer where the job failed?

```
uname -a
df -hT /path/to/project_dir
```

where `/path/to/project_dir` needs to be replaced with the path output by the following command on the CryoSPARC master computer:

```
# replace P99 with actual project ID
cryosparcm cli "get_project_dir_abs('P99')"
```
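To narrow down whether the problem is the filesystem rather than CryoSPARC itself, a small write test in the project directory can reproduce the same failing `open(..., "wb")` path outside the job. This is a minimal sketch, not a CryoSPARC tool; the function name and the 1 MiB size are arbitrary choices:

```python
import os
import tempfile

def check_writable(project_dir):
    """Write, fsync, and delete a small binary file in project_dir.

    Returns None on success, or the OSError raised by the filesystem
    (e.g. [Errno 5] Input/output error on a failing mount).
    """
    try:
        fd, path = tempfile.mkstemp(dir=project_dir, suffix=".iotest")
        with os.fdopen(fd, "wb") as f:
            f.write(b"\0" * (1 << 20))  # 1 MiB of zeros
            f.flush()
            os.fsync(f.fileno())        # force the data to the backing store
        os.remove(path)
        return None
    except OSError as e:
        return e

# example: err = check_writable("/path/to/project_dir")
```

If this returns `[Errno 5]` too, the fault lies with the mount, not with the Symmetry Expansion job.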

Here is the output:

```
[hpcadmin@cryosparc-master test]$ uname -a
Linux cryosparc-master 4.18.0-425.3.1.el8.x86_64 #1 SMP Tue Nov 8 14:08:25 EST 2022 x86_64 x86_64 x86_64 GNU/Linux
[hpcadmin@cryosparc-master test]$ df -hT /blob/cryosparc_processing/CS-benchmarking
Filesystem     Type  Size  Used Avail Use% Mounted on
blobfuse2      fuse  8.0G     0  8.0G   0% /blob
```

I haven’t tested this type of filesystem with CryoSPARC.
Is this storage related to the blobfuse storage mentioned here, and subject to these limitations?
Do you instead have access to an NFS filesystem for the project directory?

We can test the NFS filesystem. However, a few weeks ago the same blobfuse mount was working fine, and suddenly we are having this issue.

Does Azure do some weird on-the-fly reallocation of storage based on demand? Because an 8 GB drive isn’t going to hold more than a few motion-corrected micrographs, let alone a large project…

It mounts a 1 PB store but caches 8 GB at a time.

And it (always?) reports 0 bytes used when queried in a terminal?
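The figures `df` prints come from the `statvfs` call on the mount point, and a FUSE filesystem like blobfuse is free to answer with its cache size and zeroed usage rather than the backing store's real capacity. A quick sketch to see exactly what the mount reports (the path is an example):

```python
import os

def fs_report(path):
    """Return (total, used, free) bytes as reported by statvfs for
    the filesystem containing path -- the same numbers df displays.
    FUSE mounts may report cache-sized or zeroed figures here."""
    s = os.statvfs(path)
    total = s.f_blocks * s.f_frsize
    free = s.f_bavail * s.f_frsize
    used = total - s.f_bfree * s.f_frsize
    return total, used, free

# example: print(fs_report("/blob"))
```

If `used` stays at 0 while files are being written, the mount simply isn't reporting real usage, which also means CryoSPARC (and anything else checking free space) cannot trust it.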

Just when I thought Copilot and the endless stream of screwups with Windows 11 updates were the most ridiculous things Microsoft had done.