Limiting simultaneous submitted/running/queued jobs per user

Wondering if it is possible to limit the number of simultaneous jobs users can submit in the cryosparc interface?

We currently run both a worker and a cluster configuration. Since all cryosparc jobs run under the same user, we cannot limit further on the Slurm side.

@vatese The {{ cryosparc_username }} template variable can be used in the cluster submission script as a comment or similar job submission parameter. Can slurm be configured to parse that information and include it in the resource allocation formula?
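
For example, the cluster_script.sh template could carry the name through to Slurm along these lines (a sketch only; the --comment approach and the surrounding directives are one possibility, not a prescription):

```bash
#!/usr/bin/env bash
## cluster_script.sh fragment (sketch). {{ cryosparc_username }} is filled
## in by cryoSPARC's template engine at submission time; the remaining
## directives are illustrative.
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --comment="{{ cryosparc_username }}"
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ ram_gb }}G

{{ run_cmd }}
```

If the accounting storage keeps job comments (AccountingStoreFlags=job_comment in recent Slurm versions), sacct --format=JobID,Comment can then report per-cryoSPARC-user usage; note, however, that any Slurm-enforced limits would still be keyed to the single unix account.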

Hi @wtempel,

Indeed we extract the username from the submit script and use it for our accounting and metrics. Nevertheless, we need the simultaneous-job limit to also extend to the Cryosparc workers that are not in Slurm. Cryosparc seems to allow jobs to be submitted on a first-come, first-served basis unless the priorities get changed for each user individually, which does not prevent a single user from hogging the whole Cryosparc instance. It would be awesome if Cryosparc could offer a per-user job quota to prevent this.

Hope this makes sense.

@vatese
Am I correctly assuming that placing all workers under slurm control, possibly with multiple queues, is not an option?
In that case, the owner of the cryoSPARC instance can

  1. assign and modify user default priorities via GUI or cli: set_user_default_priority(email_address, priority)
  2. (dis-)allow users’ access to priority settings
  3. (*) modify jobs’ priorities via a low-level database update

Perhaps a combination of periodic usage analysis and updates of these settings according to a custom formula can ensure fairer access to the workers?
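
As a rough sketch, a periodic (e.g. cron-driven) pass could push the results of such a usage analysis back into cryoSPARC through cryosparcm cli; the input file and the numeric priority values below are placeholders:

```bash
#!/usr/bin/env bash
## Periodic priority rebalancing (sketch). heavy_users.txt and the
## priority values are hypothetical; set_user_default_priority is the
## cli function named in the list above.

# Adjust the default priority of users flagged by the usage analysis ...
while read -r email; do
  cryosparcm cli "set_user_default_priority('${email}', 10)"
done < heavy_users.txt

# ... and reset others to the baseline, e.g.:
# cryosparcm cli "set_user_default_priority('user@example.org', 0)"
```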

Unfortunately that will not work for us. We have 20+ users, and more are starting to use cryosparc. Slurm already has a QoS in place for cryosparc, and we extract the user from the job for accounting and fairshare. Nevertheless, that only affects the Slurm side; users can still submit to the dedicated workers with no limitation. It would be good to have a setting to limit the maximum number of queued/running jobs straight from Cryosparc.
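
For illustration, here is roughly why a per-unix-user QoS limit falls short (the QoS name and the number are examples):

```bash
## Because every cryoSPARC job arrives as the same service account,
## this caps the instance as a whole, not any individual cryoSPARC user:
sacctmgr modify qos cryosparc set MaxSubmitJobsPerUser=20
```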

Cheers and thanks.

Greetings!

I’m wondering if there has been any progress on this. As our cryosparc user base grows, it becomes increasingly difficult to manage user allocations.

Since SLURM enforces usage restrictions only per unix user, we cannot set a meaningful limit on the “cryosparcuser” account that runs all cryosparc jobs. Yes, we can see who is running a job via the {{ cryosparc_username }} field, but we cannot use it to ask SLURM to limit the number of jobs per cryoSPARC user. In this scenario, a single user can occupy our whole cluster, and this is obviously not ideal.

It seems to me that if we are going to keep using a single unix user for cryosparc jobs, then cryosparc should integrate at least some simple controls for its user management database and job limits per user, since those are all managed internally.

Appreciate any progress on this!

What we are doing to mitigate this is making different SLURM accounts, putting resource constraints on those accounts, then adding each one as a lane and using cryoSPARC’s lane management to ensure that users can only see their own lane. Yes, jobs all get submitted as cryosparcuser, but you can specify #SBATCH --account=<slurm account with associated resource limits>, and you can have as many cluster submission scripts/lanes as you like.
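
Roughly, the Slurm side of this looks as follows (account names and limits are examples, not prescriptions):

```bash
## One Slurm account per lab/group, each with its own ceiling (sketch):
sacctmgr add account name=lab_smith Description="cryoSPARC lane for the Smith lab"
sacctmgr modify account name=lab_smith set GrpTRES=cpu=64,gres/gpu=4 GrpJobs=10

## Allow the cryoSPARC service user to submit against the account:
sacctmgr add user name=cryosparcuser account=lab_smith

## In that lane's cluster_script.sh, pin submissions to the account:
##   #SBATCH --account=lab_smith
```

Each lane then gets its own copy of the submission script, and cryoSPARC’s lane access control determines who sees which lane.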