Dynamically generated local scratch

On our cluster, the local /scratch space is generated dynamically, and the path is not defined until the job is submitted. Is there a variable that represents the Slurm job ID? Can environment variables be used within cluster_info.json, such that they are resolved at job runtime?

I have the same issue. I would like to use an environment variable in cluster_info.json to define the SSD path, instead of a fixed path.

Hey @DavidHoover, @Juha,

You can edit your cluster_script.sh to include the line:
export CRYOSPARC_SSD_PATH=<function that creates ssd path>

Just make sure you delete the line from cryosparc2_master/config.sh if you have it set there as well.
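For the dynamic-scratch case described above, a minimal sketch of such a line, assuming a Slurm environment where the scheduler exposes $SLURM_JOB_ID and creates a per-job directory under /scratch (both are site-specific assumptions, so check your cluster's convention):

```shell
#!/bin/bash
# Sketch for cluster_script.sh: derive the SSD cache path from the Slurm job ID.
# /scratch/$SLURM_JOB_ID is an assumed site convention; the ":-local" fallback
# just keeps the line usable when the script is run outside a Slurm job.
export CRYOSPARC_SSD_PATH="/scratch/${SLURM_JOB_ID:-local}"
echo "Using SSD cache at: $CRYOSPARC_SSD_PATH"
```

The same variable expansion works for anything else the scheduler sets at submission time.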

Thanks, this worked.

It seems that after updating to v2.14.2, the SSD path is no longer taken from CRYOSPARC_SSD_PATH.

For some reason, cryoSPARC remembers the CRYOSPARC_SSD_PATH from a previous run. In the submission script it is set correctly to the value of $LOCAL_SCRATCH (a dynamically set path on our HPC system); I have verified this by adding a print statement. Despite this, a different path (matching an old run) is used, which makes the job crash because that path no longer exists.

I have checked cryosparc2_master/config.sh and there is no SSD path set there.

Hi @Juha, we have a fix for this issue coming in our next release.

In the meantime, I’ve prepared a patched revision of the caching code for you to use. With this patched version, the jobs will always use the value of $CRYOSPARC_SSD_PATH from cryosparc2_worker/config.sh.
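For reference, with the patched cache.py the relevant setting would live in the worker config. A minimal fragment, assuming the default install layout (the path itself is a placeholder, not from the post):

```shell
# cryosparc2_worker/config.sh (fragment, sketch)
# With the patched cache.py, jobs read the cache location from this variable.
export CRYOSPARC_SSD_PATH=/path/to/local/ssd
```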

Download the file from here: https://structura-assets.s3.amazonaws.com/cache_override_v2.14_fix/cache.py

Or from the terminal with wget:

wget https://structura-assets.s3.amazonaws.com/cache_override_v2.14_fix/cache.py

Replace the file in the following folder on the cryoSPARC master node:
cryosparc2_master/cryosparc2_compute/jobs
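Putting the download and replace steps together, one cautious way to do the swap is to keep a rollback copy first. The backup_file helper and the .bak suffix are my own additions, not part of the official instructions:

```shell
#!/bin/bash
# Back up a file in place before overwriting it (illustrative helper).
backup_file() {
    cp "$1" "$1.bak"    # keep the original for easy rollback
}

# Usage against the cryoSPARC master install (paths and URL from the post above):
# backup_file cryosparc2_master/cryosparc2_compute/jobs/cache.py
# wget -O cryosparc2_master/cryosparc2_compute/jobs/cache.py \
#   https://structura-assets.s3.amazonaws.com/cache_override_v2.14_fix/cache.py
```

If the patched file causes trouble, restoring cache.py.bak puts things back as they were.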

Then try running the same job again. Let me know if you run into any trouble with this!

FYI, this fix is now out in the latest release, cryoSPARC v2.15.