Slurm Linux user / CryoSPARC user

Greetings.

Is there any way to get Slurm status to report the actual user instead of the “cryosparcuser” account? As we scale up our cluster and CryoSPARC usage, it would be great to track users via Slurm instead of just seeing the cryosparc user account listed for every job. A giant list of “cryosparcuser” jobs is not terribly useful, and it takes time for admins to go into CryoSPARC and figure out which job belongs to whom.

A suggestion in the context of a related topic:

Interesting.

So if we set {{ cryosparc_username }} in the submission script, it would show up in squeue output, for example?

That probably depends on

  • which #SBATCH parameter {{ cryosparc_username }} is attached to (the --comment option, perhaps?) and
  • which squeue output options are specified

Cluster jobs would still be associated with the Linux account that runs the CryoSPARC master processes.
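A minimal sketch of the --comment idea (where to place the variable is an assumption; the squeue format string is standard syntax, with %k printing the job comment):

#SBATCH --comment={{ cryosparc_username }}

squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %k"

That would keep the submitting Linux account in the user column while still exposing the CryoSPARC username to admins.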

If you’re using partitions with Slurm, you could make a lane for each user in CryoSPARC and point that lane at the user’s partition. You could then sort squeue output by partition instead of by user. I believe one of the more recent CryoSPARC updates also added the ability to restrict users to specific lanes.
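A minimal sketch of that per-user-lane idea, assuming one copy of the lane files per user (the lane name, partition name, and paths here are hypothetical):

# Register a lane whose script pins jobs to alice's partition.
mkdir slurm-alice && cd slurm-alice
# cluster_info.json: a copy of the existing lane's file, with "name" set to "slurm-alice"
# cluster_script.sh: identical to the shared template, except
#     #SBATCH -p alice        # instead of -p defq
cryosparcm cluster connect    # reads both files from the current directory and registers the lane

cryosparcm cluster connect is the documented way to create or update a cluster lane from those two files.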


All,

Very good. This is close enough to what we need. I can add the {{ cryosparc_username }} template variable to cluster_script.sh and get better visibility for user-to-job mapping.

Thank you for the assistance!

Please feel free to share cluster_script.sh with the forum, if possible.

Of course, glad to share our cluster_script.sh:

#!/usr/bin/env bash
#SBATCH --job-name cs_{{ cryosparc_username }}_{{ project_uid }}_{{ job_uid }}
#SBATCH -n {{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH -p defq
#SBATCH --mem={{ (ram_gb*1000)|int }}MB
#SBATCH -o {{ job_dir_abs }}/out.txt
#SBATCH -e {{ job_dir_abs }}/err.txt

# Build a list of GPUs with no running compute processes and expose
# only those to the job via CUDA_VISIBLE_DEVICES.
available_devs=""
for devidx in $(seq 0 15)
do
    if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]] ; then
        if [[ -z "$available_devs" ]] ; then
            available_devs=$devidx
        else
            available_devs=$available_devs,$devidx
        fi
    fi
done
export CUDA_VISIBLE_DEVICES=$available_devs

{{ run_cmd }}
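With the username baked into the job name, widening squeue’s job-name column makes the mapping visible at a glance (the format string is standard squeue syntax; the column widths are only an illustration):

squeue -o "%.18i %.9P %.40j %.8u %.2t %.10M"

Entries then read like cs_alice_P1_J42, so admins can see whose job is whose without opening CryoSPARC (the user, project, and job IDs here are hypothetical).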