Dump_workspaces "No space left on device" error

We’ve recently run into an issue that I believe was triggered by running up to our quota limit for the CryoSPARC project directory. Despite clearing ~1 TB of space in the project directory, we are unable to pause the live session that was running at the time; the errors in the logs are along the lines of:

2023-09-08 13:33:39,088 dump_workspaces      INFO     | Exporting all workspaces in P10...
2023-09-08 13:33:39,113 wrapper              ERROR    | JSONRPC ERROR at dump_workspaces
2023-09-08 13:33:39,113 wrapper              ERROR    | Traceback (most recent call last):
2023-09-08 13:33:39,113 wrapper              ERROR    |   File "/path/to/cryosparc_master/cryosparc_command/commandcommon.py", line 195, in wrapper
2023-09-08 13:33:39,113 wrapper              ERROR    |     res = func(*args, **kwargs)
2023-09-08 13:33:39,113 wrapper              ERROR    |   File "/path/to/cryosparc_master/cryosparc_command/commandcommon.py", line 261, in wrapper
2023-09-08 13:33:39,113 wrapper              ERROR    |     return func(*args, **kwargs)
2023-09-08 13:33:39,113 wrapper              ERROR    |   File "/path/to/cryosparc_master/cryosparc_command/command_core/__init__.py", line 4984, in dump_workspaces
2023-09-08 13:33:39,113 wrapper              ERROR    |     json.dump(all_workspaces, openfile, indent=4, default=json_util.default)
2023-09-08 13:33:39,113 wrapper              ERROR    | OSError: [Errno 28] No space left on device
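
For context, the failing line is an ordinary json.dump into an already-open file, so the OSError surfaces the moment the filesystem rejects the write. Below is a minimal sketch of that failure mode with a pre-flight free-space check bolted on, assuming (as the discussion below suggests) that dump_workspaces writes a JSON file under the project directory; the workspaces.json filename, the helper name, and the 10 MB threshold are illustrative assumptions, not CryoSPARC internals:

import json
import os
import shutil

def dump_with_preflight(all_workspaces, project_dir, min_free=10 * 1024**2):
    # Hypothetical helper: refuse to start the dump if the volume holding
    # project_dir has less than min_free bytes available, so the failure
    # is reported before a partial file is written.
    free = shutil.disk_usage(project_dir).free
    if free < min_free:
        raise OSError(28, f"only {free} bytes free under {project_dir}")
    path = os.path.join(project_dir, "workspaces.json")  # assumed target
    with open(path, "w") as openfile:
        json.dump(all_workspaces, openfile, indent=4)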

After clearing the session and then trying “Start Session”, we get the following error:

Unable to start session: ServerError: Error for "dump_workspaces" with params ('P10',):
ServerError: [Errno 28] No space left on device
Traceback (most recent call last):
  File "/path/to/cryosparc_master/cryosparc_command/commandcommon.py", line 195, in wrapper
    res = func(*args, **kwargs)
  File "/path/to/cryosparc_master/cryosparc_command/commandcommon.py", line 261, in wrapper
    return func(*args, **kwargs)
  File "/path/to/cryosparc_master/cryosparc_command/command_core/__init__.py", line 4985, in dump_workspaces
    json.dump(all_workspaces, openfile, indent=4, default=json_util.default)
OSError: [Errno 28] No space left on device

The issue persists even after restarting CryoSPARC (cryosparcm restart).

We would appreciate any advice on how to debug or resolve this issue!

Please can you confirm there is enough space on the relevant storage volume(s) for all of:

  • the P10 project directory
  • /path/to/cryosparc_master
  • $CRYOSPARC_DB_PATH

In addition to df, you may also need to confirm that there are no additional barriers, like quotas on the storage servers and clients.
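
One such barrier that df -h will not show is inode exhaustion, which can also surface as Errno 28. Here is a minimal sketch that reports both free blocks and free inodes for the three locations; the paths are placeholders standing in for the real ones:

import os

# Placeholder paths; substitute the real locations on this system.
paths = [
    "/path/to/P10_project_dir",
    "/path/to/cryosparc_master",
    os.environ.get("CRYOSPARC_DB_PATH", "/path/to/cryosparc_database"),
]

for p in paths:
    st = os.statvfs(p)
    free_gib = st.f_bavail * st.f_frsize / 1024**3  # space available to non-root users
    print(f"{p}: {free_gib:.1f} GiB free, {st.f_favail} inodes free")

Equivalently, df -i would show inode usage alongside the df -h output.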

  • the P10 project directory: ~1 TB free quota
  • /path/to/cryosparc_master: ~18 GB free space
  • $CRYOSPARC_DB_PATH: shares the same quota as /path/to/cryosparc_master

I’m not aware of any additional quota barriers.

FYI, this is installed on a Slurm/AWS ParallelCluster setup (CryoSPARC v4.3.1).

I’m still able to write (outside of CryoSPARC) to the volume containing the project directory. From the error message, I assume CryoSPARC is attempting to write to the project directory?

Was there 1 TB of space available in the project directory at the time of that message? Are there newer messages (that may point to a new problem, for example with a different project directory)?

Yes, there was ~1 TB of free space reported by df -h for that project directory at the time of the message. After clearing the session, the only error we see is the similar message that appears upon attempting to start a new live session. There are only 3 active projects in the CryoSPARC instance, all under the same volume/quota.
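
One way to make that write test more conclusive would be to reproduce the failing operation exactly: create and write a small JSON file inside the P10 project directory itself, running as the same Linux user that runs cryosparc_master, rather than writing elsewhere on the same volume. A minimal sketch; the directory path is a placeholder:

import json
import os

# Placeholder; use the path reported by get_project_dir_abs('P10') below.
project_dir = "/path/to/P10_project_dir"

probe = os.path.join(project_dir, "write_probe.json")
with open(probe, "w") as f:
    json.dump({"probe": "x" * 1024}, f, indent=4)  # same call shape as dump_workspaces
os.remove(probe)
print("write into", project_dir, "succeeded")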

Unfortunately, I have no other explanation for the

OSError: [Errno 28] No space left on device

error than the storage for the P10 project directory having reached some sort of capacity limit that may not be obvious from df output. Maybe the project directory is not where it is thought to be? You can confirm its actual location with:

cryosparcm cli "get_project_dir_abs('P10')"
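
If the returned path is not on the volume that was checked with df -h, that would reconcile the two observations. Here is a minimal sketch that feeds the reported path straight into a space-and-inode check; the quote-stripping is an assumption about how cryosparcm cli prints its result:

import os
import subprocess

# Ask CryoSPARC where it believes the P10 project directory is.
out = subprocess.run(
    ["cryosparcm", "cli", "get_project_dir_abs('P10')"],
    capture_output=True, text=True, check=True,
).stdout.strip().strip("'\"")  # assumption: output may be wrapped in quotes

# Check free space and inodes on that exact path, not on the assumed one.
st = os.statvfs(out)
print(out, "->", f"{st.f_bavail * st.f_frsize / 1024**3:.1f} GiB free,",
      st.f_favail, "inodes free")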