On our cluster, the local /scratch space is generated dynamically, and the path is not defined until the job is submitted. Is there a variable that represents the Slurm job ID? Can environment variables be used within cluster_info.json, such that they are resolved at job runtime?
I have the same issue. I would like to use an environment variable in cluster_info.json to define the SSD path, instead of a fixed path.
Hey @DavidHoover, @Juha,
You can edit your cluster_script.sh to include the line
export CRYOSPARC_SSD_PATH=<function that creates ssd path>
Just make sure you also delete this line from cryosparc2_master/config.sh if you have it set there as well.
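For example, on a Slurm cluster the path could be derived from the job ID. A minimal sketch, assuming your site creates per-job scratch directories under /scratch (adjust the base path to match your system):

# cluster_script.sh (sketch): build a per-job SSD path from the Slurm job ID
export CRYOSPARC_SSD_PATH="/scratch/${SLURM_JOB_ID}"
mkdir -p "$CRYOSPARC_SSD_PATH"   # create the directory in case the scheduler does not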
Thanks, this worked.
It seems that after updating to v2.14.2, the SSD path is no longer set to CRYOSPARC_SSD_PATH.
For some reason, cryoSPARC remembers the CRYOSPARC_SSD_PATH from a previous run. The variable is set correctly in the submission script to the value of $LOCAL_SCRATCH (a dynamically created path on our HPC system); I have verified this by adding a print statement. Despite this, a different path (matching an old run) is used, which makes the job crash because that path no longer exists.
I have checked cryosparc2_master/config.sh and no SSD path is set there.
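For reference, a minimal sketch of the relevant lines in our submission script ($LOCAL_SCRATCH is provided by our scheduler; the echo is the verification print mentioned above):

export CRYOSPARC_SSD_PATH="$LOCAL_SCRATCH"
echo "CRYOSPARC_SSD_PATH is $CRYOSPARC_SSD_PATH"   # prints the correct dynamic path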
Hi @Juha, we have a fix for this issue coming in our next release.
In the meantime, I’ve prepared a patched revision of the caching code for you to use. With this patched version, jobs will always use the value of $CRYOSPARC_SSD_PATH from cryosparc2_worker/config.sh.
Download the file from here: https://structura-assets.s3.amazonaws.com/cache_override_v2.14_fix/cache.py
Or from the terminal with wget:
wget https://structura-assets.s3.amazonaws.com/cache_override_v2.14_fix/cache.py
Replace the file in the following folder on the cryoSPARC master node:
cryosparc2_master/cryosparc2_compute/jobs
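For example (a sketch; the paths assume a default installation, and the backup filename is just a suggestion):

cd cryosparc2_master/cryosparc2_compute/jobs
cp cache.py cache.py.bak   # keep a backup of the original file
wget https://structura-assets.s3.amazonaws.com/cache_override_v2.14_fix/cache.py -O cache.py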
Then try running the same job again. Let me know if you run into any trouble with this!
FYI, this fix is now included in the latest release, cryoSPARC v2.15.
Is there something similar to
export CRYOSPARC_SSD_PATH=<function that creates ssd path>
that would give each cryoSPARC job a specific amount of scratch space on a per-job basis, instead of setting ssdreserve or ssdquota at the per-worker level?
Hi @thomaspv,
It’s possible, but in the current implementation of the caching system, if you set a quota that is too small for the job, the job may wait indefinitely for cache space to become free.
To help me find a solution for you, can you elaborate on why you’d want something like this?
Hi Stephan,
Currently, our cluster creates a folder on a scratch SSD when we request scratch space. However, it does not explicitly limit us to the space we requested, so a job could use more than it asked for, which could cause problems for other jobs on the same node. If there were a way to set ssdquota per job, I could set it to the amount of scratch space I request for the job through the scheduler.
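To illustrate what I mean (purely hypothetical; CRYOSPARC_SSD_QUOTA_GB is a name I made up that does not exist in cryoSPARC, and how the requested size reaches the script depends on the scheduler):

# Hypothetical sketch in cluster_script.sh: pass the scheduler-requested
# scratch size to cryoSPARC as a per-job quota. Neither the variable nor
# the mechanism exists today; this just shows the feature I'm asking for.
export CRYOSPARC_SSD_QUOTA_GB="$REQUESTED_SCRATCH_GB"   # e.g., set by our job template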
Thanks.
Paul