Memory specification in slurm script

Hi,

I am trying to submit jobs from the master node to the worker nodes through a SLURM script. When I hard-code the memory, number of CPUs, and number of GPUs in the cluster_script, everything works fine.
When I use the following in the script

#SBATCH -c {{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ ram_gb|int }}G

The CPU and GPU parameters get translated into the SLURM job submission script as specified in the CryoSPARC GUI, but the memory defaults to the maximum memory of the master node even when I specify 100 or 200 in the GUI.

Is there something wrong with what I am doing here? Is there a way to specify other SLURM parameters, like account, partition, etc., in the CryoSPARC GUI instead of hard-coding them in cluster_script.sh?
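
For context, a minimal cluster_script.sh built around these lines might look like the sketch below, assuming the standard CryoSPARC SLURM template variables ({{ run_cmd }}, {{ job_dir_abs }}, {{ project_uid }}, {{ job_uid }}); the partition name and log file names are placeholders:

#!/usr/bin/env bash
## Sketch only: partition name and log paths are placeholders
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --partition=gpu
#SBATCH -c {{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ ram_gb|int }}G
#SBATCH --output={{ job_dir_abs }}/slurm-%j.out
#SBATCH --error={{ job_dir_abs }}/slurm-%j.err

{{ run_cmd }}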

Thanks,
Asif.

The match with the “max memory of the master node” may be a coincidence and depend on the job type. What was the job type, and what is the “max memory of the master node”?
Custom values set for certain reserved variables, like ram_gb, will not be effective.
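
If the scheduler request needs to differ from the value CryoSPARC computes, a common workaround is to transform the reserved variable inside the template itself rather than trying to override it in the GUI; a sketch, where the factor of 2 is arbitrary:

## Request twice the RAM that CryoSPARC estimates for this job (sketch; scaling factor is arbitrary)
#SBATCH --mem={{ (ram_gb*2)|int }}G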

Hello Wtempel,

Sorry for the late response; I got pulled into other things. Back on the SLURM script now, I see that CryoSPARC decides the resource requirements based on the job type and the job size.

But what happens if the nodes available through SLURM do not satisfy the resource requirement? For example, what if CryoSPARC needs 2 GPUs but the nodes have only one?

Is there a way to have the time limit for the SLURM job (#SBATCH --time) defined as a custom variable?

Thanks,
Asif.

In my experience, SLURM won’t launch the job because it won’t be able to find a partition with the requested resources. Regarding GPUs specifically, though, I’m not aware of a CryoSPARC job type that requires two; any job that can use more than one GPU lets the user specify how many.
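
When a job does get stuck pending, SLURM itself reports why it cannot be scheduled; for example (the job ID below is a placeholder):

# Show job state and pending reason (e.g. Resources or Priority) for your jobs
squeue -u $USER --format="%.18i %.9P %.8T %R"
# More detail for one job, including the Reason= field
scontrol show job 123456 | grep -E "JobState|Reason"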

Yes, just make up a new variable. For instance, I’m using

#SBATCH --time {{ time_hr_min_sec }}

… as described here.
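
One refinement, assuming CryoSPARC renders the script with standard Jinja2 (the |int filter used above suggests it does): a default value keeps the line valid if the custom variable is ever left undefined. The 24:00:00 fallback is only an illustrative value:

#SBATCH --time {{ time_hr_min_sec | default('24:00:00') }}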