Is there any way of restricting the total amount of disk space used per user? And related, what happens when a job fails with an “No space left on device” error? I mean this for the shared file system, not the local SSD cache.
David
Hi @DavidHoover,
Unfortunately, this isn’t possible within CryoSPARC itself. It might be achievable with OS-level mechanisms (per-user filesystem quotas, or partitioning a disk?).
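If it helps, here is a rough sketch (not a CryoSPARC feature) of how you could at least monitor per-user usage under a shared project directory with a script. Actual enforcement would need OS-level quotas, and the path below is just a placeholder:

```python
# Minimal monitoring sketch (not part of CryoSPARC): tally disk usage per file
# owner under a shared project root so heavy users can be spotted manually.
import os
import pwd
from collections import defaultdict

PROJECT_ROOT = "/shared/cryosparc_projects"  # hypothetical shared volume

usage_by_user = defaultdict(int)
for dirpath, _, filenames in os.walk(PROJECT_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue  # file vanished or is unreadable; skip it
        try:
            owner = pwd.getpwuid(st.st_uid).pw_name
        except KeyError:
            owner = str(st.st_uid)  # uid with no local account (common on NFS)
        usage_by_user[owner] += st.st_size

for user, nbytes in sorted(usage_by_user.items(), key=lambda kv: -kv[1]):
    print(f"{user:15s} {nbytes / 1e9:8.1f} GB")
```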
This means that a job writing out results (e.g. micrographs for a Patch Motion job or particles for an Extract From Micrographs job) tried to write a file to the disk, but failed because there was no space left. The job will fail, but you can clear it and re-run it once space on the volume has been freed.
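For illustration only (this isn’t CryoSPARC’s internal code), the failure boils down to a write raising ENOSPC, roughly like this:

```python
# Sketch of the failure mode: a write to a full volume raises OSError with
# errno.ENOSPC, i.e. "No space left on device".
import errno

def write_result(path: str, data: bytes) -> bool:
    try:
        with open(path, "wb") as f:
            f.write(data)
        return True
    except OSError as e:
        if e.errno == errno.ENOSPC:
            # The output volume is full; the job fails at this point. After
            # freeing space, the failed job can be cleared and re-run.
            print(f"No space left on device while writing {path}")
            return False
        raise
```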
If the job fails and there is no database corruption, then no problem. How does CryoSPARC handle NFS? Does it ever attempt I/O-intensive tasks over it?
That’s correct, it’s harmless. Though I do suspect there would be an issue if the disk hosting the database runs out of space; it’s best to keep the database on a separate volume from the project directories.
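As a rough illustration (the mount points and threshold are placeholders, not CryoSPARC defaults), a periodic check like this can catch a database volume running low before it becomes a problem:

```python
# Hedged sketch: report free space on the (hypothetical) database and project
# volumes, flagging any that drop below an arbitrary safety margin.
import shutil

VOLUMES = {
    "database": "/cryosparc_database",        # hypothetical mount point
    "projects": "/shared/cryosparc_projects",
}
MIN_FREE_GB = 50  # arbitrary safety margin

for label, mount in VOLUMES.items():
    total, used, free = shutil.disk_usage(mount)
    free_gb = free / 1e9
    status = "OK" if free_gb >= MIN_FREE_GB else "LOW"
    print(f"{label:9s} {mount:30s} {free_gb:8.1f} GB free  [{status}]")
```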
At Structura, we use NFS to mount large storage volumes onto each of our GPU workstations. We have a 10 Gb link between the nodes, and we get solid preprocessing performance. I/O-intensive tasks (e.g. the random reads required for particle processing in jobs like Non-Uniform Refinement) are reserved for the SSD cache.
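The caching idea, sketched very roughly (this is not the actual cache implementation; the paths are placeholders), is to copy particle files from NFS to the local SSD once, then do all random-access reads against the local copies:

```python
# Illustrative sketch: stage particle stacks from slow NFS storage onto a fast
# local SSD, so downstream random-access reads never hit the NFS mount.
import os
import shutil

NFS_PARTICLES = "/nfs/projects/P1/J42/extract"  # hypothetical NFS directory
SSD_CACHE = "/scratch/cryosparc_cache/P1/J42"   # hypothetical local SSD dir

os.makedirs(SSD_CACHE, exist_ok=True)
local_paths = []
for name in os.listdir(NFS_PARTICLES):
    if not name.endswith(".mrc"):
        continue
    src = os.path.join(NFS_PARTICLES, name)
    dst = os.path.join(SSD_CACHE, name)
    if not os.path.exists(dst):
        shutil.copy2(src, dst)  # one sequential copy over NFS is cheap
    local_paths.append(dst)

# Refinement-style code would now read particles randomly from local_paths
# instead of issuing many small random reads against the NFS mount.
```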
Does Structura use NFS v3 or v4?
Our mounts are using v4.2.