SLURM accounting

Hi all,
The SLURM cluster I am administering needed accounting enabled, so that each user's activities could be measured and SLURM fairshare could be applied.
This introduced a problem with our cryoSPARC master setup, because all jobs from the cryoSPARC master need to run as one particular cryosparc user.

How could we register usage of the individual users of cryosparc, given the obvious challenges?

The solution was to add the cryosparc user to all the SLURM accounts that wanted to run cryoSPARC and had licenses for it, using “sacctmgr add user …”.
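As a hedged illustration of that registration (the account names group1/group2 and the service user name cryosparc are hypothetical; run as a SLURM administrator):

```shell
# Hypothetical: add the cryosparc service user to every SLURM account
# whose members will run cryoSPARC jobs.
sacctmgr add user cryosparc account=group1
sacctmgr add user cryosparc account=group2

# Verify the resulting associations:
sacctmgr show assoc user=cryosparc format=user,account
```

These commands require a working slurmdbd; treat them as a configuration fragment rather than a runnable script.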

Then, in cluster_info.json, replace the SLURM lanes in the cryoSPARC master with lanes where “sbatch” is replaced with “bash”.
The script template should look something like this:

# Look up the submitting user's SLURM account from their Linux username
SLURM_ACCOUNT=$(sacctmgr show user | grep " {{ job_creator }} " | awk '{print $2}')
cat <<EOF > /tmp/${CUSER}_{{ project_uid }}_{{ job_uid }}.sbatch
#!/bin/bash
#SBATCH --job-name=${CUSER}_{{ project_uid }}_{{ job_uid }}
#SBATCH --account=${SLURM_ACCOUNT}
#SBATCH -n {{ num_cpu }}
#SBATCH -p gpu
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ (ram_gb)|int }}G
#SBATCH -o {{ job_dir_abs }}/slurm.log
#SBATCH -e {{ job_dir_abs }}/error.log
# Run job
{{ run_cmd }}
EOF
sbatch /tmp/${CUSER}_{{ project_uid }}_{{ job_uid }}.sbatch

As you can see, this creates an sbatch script in /tmp on the cryoSPARC master server; when cryoSPARC runs the template with the bash command from cluster_info.json, the generated script is then submitted with sbatch under the user's own account.
Each cryoSPARC user must be registered with a “username” (a.k.a. {{ job_creator }}) that matches their Linux username for this to work. Otherwise, one might be able to retrieve the username from your LDAP/AD/NIS server using the cryoSPARC login email.
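As a sketch of that directory lookup (the server URI, base DN, and the mail/uid attribute names are assumptions about the local LDAP setup), one could resolve a login email to a Linux username like this:

```shell
# Extract the Linux username (uid attribute) from LDIF output on stdin.
parse_uid() {
  awk '/^uid: / { print $2; exit }'
}

# Hypothetical lookup: query LDAP for the entry whose mail attribute
# matches the cryoSPARC login email. Server URI and base DN are placeholders.
email_to_uid() {
  ldapsearch -x -LLL -H ldap://ldap.example.org -b "dc=example,dc=org" \
    "(mail=$1)" uid | parse_uid
}
```

The parsing step is separated out so it can be checked without a live directory server.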

This works, but it would have been nicer if the cryoSPARC master could run jobs as the individual Linux users, e.g. through “sudo -H -u user …”. Unfortunately this is not possible, because the master database is locked by the cryoSPARC master user. Hopefully the cryoSPARC team will reconsider @adesgeorges' nice suggestion to have separate databases per user. This could maybe be set up with ACLs, so that each user would have write access to their own database, while the cryoSPARC master user would only have read access, to feed metadata back into the web interface.

Best regards,


Hi @jelka,

Thanks for pointing this out! Creating a multi-user cryoSPARC is something we’re actively designing and developing.


To get around this problem, we just created a separate lane for each user and asked them to use only that lane. Each lane's config has the appropriate project in its sbatch script. You do need to add the user that is running the server to each SLURM account.


Hi @stephan,

That sounds very interesting. If you need alpha/beta testers just ask.

Thanks for the suggestion @karcaw.
This will also work, but in my case it would result in a lot of lanes.
There is also the question of a system based entirely on trust, versus the more automated/controlled solution I presented above.
I guess it is a matter of taste, or of the number of users?

Hi, thanks for providing the code for this @jelka.
@stephan, thanks for all your work. Not sure if a new topic is better, but would it be possible to add an optional account/group field to users? At least until multi-user CryoSPARC becomes available.

Do you intend to use this information for cluster usage accounting?
In that case, may I suggest to:

  1. include the cryosparc_username template variable in your script template
  2. prepare a map cryosparc_username → account, supplemental to CryoSPARC
  3. look up a job's associated cryosparc_username to associate that job with the account
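A minimal sketch of steps 2 and 3, assuming a hypothetical whitespace-separated map file maintained alongside CryoSPARC:

```shell
# Hypothetical map file, one "cryosparc_username account" pair per line, e.g.:
#   user1 group1
#   user2 group1
#   user3 group2
MAP_FILE=${MAP_FILE:-/opt/cryosparc/user_account_map.txt}

# Print the SLURM account for a given cryosparc_username (empty if unmapped).
lookup_account() {
  awk -v u="$1" '$1 == u { print $2; exit }' "$MAP_FILE"
}

# In a cluster script template one could then use something like:
#   SLURM_ACCOUNT=$(lookup_account "{{ cryosparc_username }}")
```

The map file path and format are assumptions; any key-value store kept outside CryoSPARC would do.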

Hi @wtempel, yes, this is meant for cluster usage accounting as well as fairshare job allocation. Maybe I'm misunderstanding, but for step 2 we already have accounts in SLURM based on the groups that users belong to, for example:
Username → Account
user1 → group1
user2 → group1
user3 → group2
user4 → group3

The reason I suggested adding an account field to CryoSPARC is more as a convenience, to avoid this part:

SLURM_ACCOUNT=$(sacctmgr show user | grep  " {{ job_creator }} " | awk '{print $2}')
cat <<EOF > /tmp/${CUSER}_{{ project_uid }}_{{ job_uid }}.sbatch

And just have a variable referenced directly, i.e.:

#SBATCH -A {{ job_account }}

I know I am replying to a message from 3 years ago, but I simply wanted to point out that this was discussed on CCPEM in June this year, starting from this message: JISCMail - CCPEM Archives

It seems like a multi-user cryoSPARC properly integrated with the Linux accounts would benefit many. So, let’s hope this project is still active. :pray:


If this is still in development, could we possibly get an update on progress and possible timeline?


Hi @blixuga and @Guillaume, thanks for following up.
A couple of years ago we did a substantial dive into how a multi-linux-user system design could work, and had come up with some plans. However, we ran into several major stumbling blocks with the security implications and holes that crossing the boundaries of user accounts would create, and we were not able to come to a satisfactory solution. The current status is that we haven’t forgotten about this topic, but it’s also not something we have a timeline for. Thanks!

I wanted to chime in that while the security aspect is a real one, at most academic institutions the fear is more that someone will mess up someone else's data than that someone will see it. I wonder if a simple option to set a default directory for new projects, wrapper scripts, file imports, etc. on a per-CryoSPARC-user basis would alleviate a large chunk of that, since people wouldn't accidentally end up in other people's directories. Just having the file browser open in their own directory instead of $HOME would simplify things and reduce the possibility of accidents.


Funny how this oldie has been revived.
While we wait for what is apparently more or less a total rewrite of cryoSPARC to make real multi-user support a reality, I may have some suggestions on how to secure things a bit with current cryoSPARC installations.
We use POSIX ACLs to add a layer of security-by-obscurity by making a processing directory inside each user's own workdir. This directory can be created by the users/owners themselves and, e.g., given a cryptic name. On and inside the cryoSPARC workdir, the cryosparc user is given full rwx permissions with the setfacl command, but it is given only execute (x) permission on the directories above it. Thus, only the user knows the path to the workdir, and it cannot be browsed from the cryoSPARC web side; one needs to know the full path, which only the owner can see from Linux (shell or file browser).
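As a sketch of that ACL setup (the paths, the cryptic directory name, and the cryosparc service user name are all hypothetical), the commands might look like:

```shell
# Run as the owning user; "cryosparc" is the hypothetical service account.
# Processing directory with a cryptic, user-chosen name inside the home dir:
mkdir -p /home/user1/a8f3x9/cryosparc_workdir

# Traverse-only (x) on the parents: cryosparc can pass through, but cannot
# list their contents, so the path cannot be discovered by browsing.
setfacl -m u:cryosparc:x /home/user1
setfacl -m u:cryosparc:x /home/user1/a8f3x9

# Full rwx on the workdir itself, plus a default ACL so that files and
# directories created later inherit the same permissions:
setfacl -R -m u:cryosparc:rwx /home/user1/a8f3x9/cryosparc_workdir
setfacl -R -d -m u:cryosparc:rwx /home/user1/a8f3x9/cryosparc_workdir
```

This requires a filesystem mounted with ACL support; treat it as an administrative fragment rather than a runnable script.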
Does this make sense?