SSD resource configuration issues on cluster compute nodes

The local SSDs on the cluster's compute nodes have limited capacity, and since multiple job instances share each node, jobs run very slowly without SSD acceleration.

I have added the SSD to the Slurm resource pool, and jobs that request the corresponding size are scheduled normally. One question remains: which variable can I use to get the size of the files a job needs to cache? At the moment I write that value into the script by hand.
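For context, here is a minimal sketch of how a local-SSD generic resource (GRES) can be declared in Slurm. The resource name `ssd`, the node names, and the sizes are assumptions for illustration, not taken from this cluster's actual configuration:

```
# slurm.conf — declare the GRES type and attach it to the compute nodes
GresTypes=ssd
NodeName=compute[01-04] Gres=ssd:800    # 800 units (e.g. GB) of local SSD per node

# gres.conf — on each compute node
Name=ssd Count=800
```

A job then requests a slice of the resource at submission time, e.g. `sbatch --gres=ssd:100 job.sh`.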

I cannot find relevant examples or parameters in the documentation, so a prompt answer would be much appreciated.

Can anyone answer my question?

@ltf Please review this guide section on estimating the size.
If the estimate changes from job to job, you may find it useful to configure a job-level custom variable inside the script template.
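Slurm itself does not know how large a job's input data is; it only knows what the job requests. One way to implement the per-job variable suggested above (a sketch, not a definitive solution: the `request_mb` helper is hypothetical, and the `ssd` GRES name comes from this thread's setup) is to measure the input with `du` in the submission wrapper and pass the result to `sbatch`:

```shell
#!/bin/bash
# request_mb: print an SSD request size in MB for a given input directory,
# with ~10% headroom for scratch files written alongside the cached input.
request_mb() {
    local size_mb
    size_mb=$(du -sm "$1" | cut -f1)    # du -sm prints "SIZE<TAB>PATH"
    echo $(( size_mb + size_mb / 10 + 1 ))
}

# Example usage (adjust the GRES name to match your gres.conf):
#   sbatch --gres=ssd:$(request_mb /path/to/input) job.sh
```

Computing the size at submission time keeps the script template generic: the per-job value is derived from the data rather than hard-coded.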