Hi.
I am trying to install cryoSPARC on our cluster (CentOS 7.2, SLURM). From the instructions I gather that, for security reasons, the master must be installed once per user, which as I understand it is the recommended way to set it up post v2.0. In the thread (Multi user installation?) you state that the master process is the only one that needs to connect to the license server. But the cluster installation instructions (https://cryosparc.com/docs/reference/install/) say that I need to provide a license. Is this only for installation purposes, i.e. the license that is actually used is the individual user's anyway?
We also cannot have the master process running indefinitely, so my other concern is whether the workers require a running master once the worker process has been started through SLURM. Or can the master be shut down and then reconnect to the worker process once that job has finished?
We’re in a similar situation, so I’ll be interested in any responses. I’ll answer some of your questions to the best of my understanding and let others correct me if I’m mistaken.
The license key is contained in both the master and worker config.sh files. I don't know exactly how often cryoSPARC calls home to verify the license (if at all), but it's pretty clear that individual users need their own cryoSPARC installs, each with a different license key provided at installation. This eats up storage quickly, especially with the big conda installation inside cryoSPARC.
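For reference, here is roughly what the relevant lines of a per-user config.sh look like. The variable names match what the cryoSPARC installer generates, but the values here are illustrative placeholders, not from any real install:

```bash
# cryosparc_master/config.sh (cryosparc_worker/config.sh carries the same license)
# Illustrative values only; the installer fills these in from the --license flag.
export CRYOSPARC_LICENSE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export CRYOSPARC_MASTER_HOSTNAME="login01.example.org"
export CRYOSPARC_BASE_PORT=39000
```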
The master process does need to be running while you are using cryoSPARC. I know some people try to run the master as a long-running SLURM job (see the sketch below). This has a few problems: (1) the workers need to know the hostname where the master is running, so this has to be updated for each job; and (2) the job would usually get killed by SLURM and could leave the database and workspaces in an unclean state. In the end we gave up and just gave tacit blessing to running the master process on the login nodes. It's basically just a webserver, so it doesn't take many resources.
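To make those two problems concrete, here is a rough sketch of the master-as-SLURM-job approach. The install path and resource limits are hypothetical; `cryosparcm start`/`stop` and `CRYOSPARC_MASTER_HOSTNAME` are the standard cryoSPARC commands and config variable, but everything else is just our local convention:

```bash
#!/bin/bash
#SBATCH --job-name=cryosparc-master
#SBATCH --time=7-00:00:00    # still finite; SLURM will eventually kill the job
#SBATCH --mem=16G

CS="$HOME/cryosparc/cryosparc_master"   # hypothetical install path

# Problem (1): the master's hostname is only known once the job starts,
# so the config has to be rewritten on every submission.
sed -i "s|^export CRYOSPARC_MASTER_HOSTNAME=.*|export CRYOSPARC_MASTER_HOSTNAME=\"$(hostname -f)\"|" \
    "$CS/config.sh"

# Problem (2): try to shut down cleanly if SLURM cancels or times out the job,
# otherwise the database can be left in an unclean state.
trap '"$CS/bin/cryosparcm" stop' TERM INT EXIT

"$CS/bin/cryosparcm" start

# Keep the allocation alive while the (daemonized) master runs.
sleep infinity &
wait
```

Even with the trap, a hard kill (e.g. node failure) bypasses the cleanup, which is part of why we abandoned this approach.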
Either way, you need to establish a system for avoiding port conflicts. We do this by assigning each cryoSPARC user a unique port for the server (actually a range of 10 ports, since some connections are made on successive ports above the HTTP server).
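One hypothetical way to hand out such ranges is to derive them from the numeric Unix UID, so no bookkeeping is needed. `CRYOSPARC_BASE_PORT` is the real config variable; the arithmetic is just an example convention:

```bash
# Sketch: give each user a non-overlapping block of 10 ports starting at 39000.
# Assumes UIDs on the cluster are unique modulo 1000; otherwise two users
# could still collide and you would need an explicit assignment table.
uid=$(id -u)
base_port=$((39000 + 10 * (uid % 1000)))
echo "export CRYOSPARC_BASE_PORT=${base_port}"
# The web UI is then at http://<master-host>:${base_port}, and cryoSPARC's
# internal services use the next few ports above it.
```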
If this seems like a non-ideal setup, I agree! I would welcome suggestions for improvements.
"We also cannot have the master process running indefinitely"
Does this mean users are not able to run a process as their Unix user for more than a certain period of time? Does this apply only to processes run on the login node, or on a worker node as well?
"…and just gave tacit blessing to running the master process on the login nodes."
Is there a more ideal solution to this issue? Would you rather have a headless cryoSPARC mode?
We’re currently in the design phase for a cryoSPARC “Hub”, which is meant for facilities that support multiple semi-trusted users in a quota-based environment. Its intention is to allow sysadmins to create and manage cryoSPARC instances across their HPC environment (i.e. spawn a new instance on demand using a simple UI), provide a single point of entry to the cryoSPARC interface, and allow users to spawn processes on the cluster under their own user accounts.
You can see where the problem starts if users aren’t able to run the master process at all!
This would be ideal. One central installation maintained by the sysadmin, and then users launch HPC jobs from the web interface. It will require careful thinking about permissions, though. I’d be reluctant to run cryosparc as root, but I’m not sure how else to give it access to the results of all users. Maybe all data is owned by a central cryosparc group and group-readable?
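A minimal sketch of that group-ownership idea, assuming a dedicated `cryosparc` group and a shared project directory (all names here are illustrative, not part of cryoSPARC itself):

```bash
# Create a central group and add each cryoSPARC user to it.
sudo groupadd cryosparc
sudo usermod -aG cryosparc alice

# Shared project tree, readable by the group.
sudo mkdir -p /data/cryosparc_projects
sudo chgrp -R cryosparc /data/cryosparc_projects
sudo chmod -R g+rX /data/cryosparc_projects   # group can read files and traverse dirs
sudo chmod g+s /data/cryosparc_projects       # new files/dirs inherit the group

# Users would also need a umask like 0027 so new results stay group-readable.
```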
Hi @stephan, @sbliven.
I agree that a single installation would be ideal from a sysadmin perspective; only having to install the software once, regardless of how many users access it, would save time and effort in installation and maintenance. Checking that the user has a valid license would be done in the spawning process, I assume?
We have the additional complication that parts of our installation are reserved for sensitive data. The users of that part have their own login nodes, so from our perspective this would mean that those users (or user groups) would run their own instance of the server (and database) on those login nodes. The installed software would, however, be the same as mentioned above.
Running cryoSPARC as root would be less than ideal and not something we would like to do. We can have a separate user for the spawning process and installation, just not one with root privileges.
As I read your comment, @sarulthasan, the idea with the “Hub” solution is to have one instance per user (web server, database, and all) spawned from the Hub installation and left running for as long as the user needs it. Is this a correct interpretation? This could be a working solution for us, as we can have relatively lightweight processes running on nodes, off-loading the front end. But even these have to be shut down at some point. Obviously they could be left running until all submitted jobs have finished, so as long as these processes can be restarted this approach could work.
From an HPC perspective, the most important things are to have a single installation and to be able to guarantee that users’ data are separated, at least at the Unix-permission level (for non-sensitive data).
I look forward to hearing more from you on the development of the Hub solution. Let me know if I can be of help with HPC-related issues.
We are currently testing install options to provide cryoSPARC for our cluster users, and we would appreciate a ‘single installation, multiple instances’ solution.