Hi @robukowski,
The variables `ram_gb`, `num_gpu`, and `num_cpu` are actually immutable values set by CryoSPARC when submitting jobs to the cluster; they represent the job's estimated resource requirements. Any custom variables using these names will have their values overwritten.
However, it is still possible to modify the requested resources in a cluster submission script using custom variables.
Below is a SLURM cluster submission script template where the requested RAM is scaled by a `ram_gb_multiplier` custom variable:
```bash
#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH -n {{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --partition=slurmpar
## Example of a modifier variable
#SBATCH --mem={{ (ram_gb|float * ram_gb_multiplier|float)|int }}G
#SBATCH --output={{ job_log_path_abs }}
#SBATCH --error={{ job_log_path_abs }}

# Collect the indices of GPUs with no running compute processes
available_devs=""
for devidx in $(seq 0 15); do
    if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]]; then
        if [[ -z "$available_devs" ]]; then
            available_devs=$devidx
        else
            available_devs=$available_devs,$devidx
        fi
    fi
done
echo "AVAILABLE DEVICES: $available_devs"
export CUDA_VISIBLE_DEVICES=$available_devs

echo {{ instance_test_var }}
{{ run_cmd }}
```
The important line,

`#SBATCH --mem={{ (ram_gb|float * ram_gb_multiplier|float)|int }}G`

multiplies a user-controlled custom variable by the CryoSPARC job's original estimate of the required RAM and submits the result to the cluster. The same can be done for `num_cpu` and `num_gpu`, for example by using additive modifiers.
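Since cluster submission scripts are rendered as Jinja2 templates, you can preview what an expression like this will produce before connecting the lane. Below is a minimal sketch; the values for `ram_gb` and `ram_gb_multiplier`, and the additive modifier variable `extra_cpu`, are made-up examples, and it assumes the `jinja2` Python package is installed:

```python
from jinja2 import Template

# Multiplicative modifier for RAM, as in the script above.
# ram_gb is set by CryoSPARC; ram_gb_multiplier is the custom variable.
# Variables arrive as strings, hence the |float filters.
mem_line = Template(
    "#SBATCH --mem={{ (ram_gb|float * ram_gb_multiplier|float)|int }}G"
).render(ram_gb="24", ram_gb_multiplier="1.5")
print(mem_line)  # -> #SBATCH --mem=36G

# A hypothetical additive modifier for CPUs works the same way.
cpu_line = Template(
    "#SBATCH -n {{ num_cpu|int + extra_cpu|int }}"
).render(num_cpu="4", extra_cpu="2")
print(cpu_line)  # -> #SBATCH -n 6
```

Rendering the template locally like this is a quick way to catch filter mistakes (e.g. forgetting `|float`, which would make `*` fail on strings) before jobs start failing at submission time.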
At the time of writing, the documentation page Guide: Configuring Custom Variables for Cluster Job Submission Scripts - CryoSPARC Guide is incorrect in that it uses `ram_gb`, `num_gpu`, and `num_cpu` as examples of modifiable custom variables; I apologize for the confusion this has caused. The page will be updated soon to clarify which variables CryoSPARC reserves internally during cluster script submission.