Data Permissions Handling in Multi-User Installation


We are looking to build a CryoSPARC installation on a campus-wide SLURM GPU server. Since this installation will potentially span multiple labs, I was hoping for some clarification on how CryoSPARC handles read/write permissions at the user level.

I know that the master process is run by a central Linux user (we will call it “cryosparc-admin”), which will need read/write access to any and all project directories, and that all filesystem access is done under the “mask” of this account. In this way, if we have users from two different labs (call them “user1” from “lab1” and “user2” from “lab2”), they could potentially gain access to each other’s directory trees through this central “cryosparc-admin” Linux user.

Is there some way to limit the permissions of independent users, such that “user1” can only see and create projects under /data/cryosparc_projects/lab1 and “user2” can only do the same under /data/cryosparc_projects/lab2? Or will “user2” always be able to see /data/cryosparc_projects/lab1 (and vice versa)?

Thanks in advance.

Hi @sbowerma,

We have just added a section to the CryoSPARC guide which covers how to achieve some degree of data separation between user groups. Here’s a link to it:

In summary, though, you are correct that the cryosparc-admin user will have access to all data on the instance. By working with unix permissions it’s possible to limit shell-based access to other groups’ data, and you can limit which users can see a given cryoSPARC project from within the UI (which does establish some degree of separation). However, since the cryosparc-admin user can see all the files, one group could, for example, create an Import Movies job and import data that’s supposed to be owned by another group.
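As a concrete illustration of the unix-permissions side of this, here is a minimal sketch. The group names (lab1, lab2) and the /data/cryosparc_projects layout are taken from the question above; the sketch uses a throwaway temp directory so it runs without root, with the privileged production steps shown as comments.

```shell
# Sketch of per-lab project directories locked down with group permissions.
# Assumes unix groups lab1/lab2 exist in production; here we only
# demonstrate the mode bits on a temp stand-in directory.
ROOT=$(mktemp -d)                    # stand-in for /data/cryosparc_projects
mkdir -p "$ROOT/lab1" "$ROOT/lab2"

# In production, as root:
#   chgrp lab1 /data/cryosparc_projects/lab1
#   chgrp lab2 /data/cryosparc_projects/lab2
#   usermod -aG lab1 cryosparc-admin   # service account needs both groups
#   usermod -aG lab2 cryosparc-admin

# 2770: owner+group rwx, setgid (new files inherit the directory's group),
# and no access for "other" -- so user2 cannot even list lab1's tree.
chmod 2770 "$ROOT/lab1" "$ROOT/lab2"

stat -c '%a' "$ROOT/lab1"   # prints the octal mode: 2770
```

Note this only restricts shell-level access; as described above, anything readable by the service account is still reachable from within CryoSPARC itself.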

We do plan to address this in a future release, but I can’t comment on the exact timeline. In the meantime, depending on how tight your security requirements are, the strategy in the linked guide section may suit your needs.

Another possibility would be to run multiple cryoSPARC instances, each with its own unix account, database, etc. We don’t normally recommend that kind of setup because it tends to introduce other problems (for example, the two instances won’t be aware of which GPUs the other is currently using), but using SLURM mitigates those issues to a degree.
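If you do go the multiple-instances route, the setup would look roughly like the sketch below. Treat it as an illustration rather than a tested recipe: the account names, database paths, and license variables are hypothetical, and each instance needs its own unix account and a non-overlapping block of ports.

```shell
# As root: one service account per lab (names are hypothetical)
useradd --system cryosparc-lab1
useradd --system cryosparc-lab2

# As cryosparc-lab1: a fully separate master install with its own
# database path and base port
./install.sh --license $LICENSE_ID_LAB1 \
             --dbpath /data/lab1/cryosparc_db \
             --port 39000

# As cryosparc-lab2: same, with a non-overlapping base port
./install.sh --license $LICENSE_ID_LAB2 \
             --dbpath /data/lab2/cryosparc_db \
             --port 40000
```

Combined with the per-lab directory permissions described above, this gives each lab a master process that genuinely cannot read the other lab’s files.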

