Moving to a new master-worker setup is actually quite easy. The one critical point is that you must stop the cryoSPARC application and the database process before you move the database or connect it to another cryoSPARC instance.
The first step is to stop cryoSPARC and check that no zombie processes are left running in the background:
cryosparcm stop
ps -ax | grep "supervisord" #check for any orphaned cryoSPARC-related processes that are running and kill them
The second step is to set up the new server that will run the master node, following the prerequisites and installation instructions found here.
- Create a user account (e.g., cryosparcuser) that will run the cryoSPARC application on the master node and on each worker node used for computation.
- Set up SSH access between the master node and each standalone worker node. The cryosparcuser account should be able to SSH without a password (using a key-pair) into all non-cluster worker nodes.
- Ensure ports 39000 - 39010 (configurable) are open between the master node and every worker node.
- Ensure a shared file system is present across the master node and every worker node to store raw data and the project directories.
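The SSH and port prerequisites above can be sketched as follows; the hostnames, user name, and key path are placeholders for your own environment, and the remote steps are shown as comments since they depend on your nodes:

```shell
# Sketch: passwordless SSH from the master to each standalone worker.
keydir="$HOME/.ssh"
mkdir -p "$keydir"
chmod 700 "$keydir"
# Generate a key-pair for cryosparcuser if one does not exist yet:
#   ssh-keygen -t ed25519 -N "" -f "$keydir/id_ed25519"
# Install the public key on every non-cluster worker (repeat per worker):
#   ssh-copy-id -i "$keydir/id_ed25519.pub" cryosparcuser@worker1
# From a worker, confirm the master's port range is reachable:
#   nc -zv master-hostname 39000-39010
```

Once `ssh cryosparcuser@worker1` logs in without prompting for a password, the worker is ready to be connected.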
Once this is complete, mount the network disk that contains the database and the project directories onto this server.
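For a persistent mount, an entry in `/etc/fstab` on the new master is one common approach; the file server, export path, and mount point below are hypothetical and should be replaced with your own:

```
# /etc/fstab entry on the new master (server and paths are placeholders):
fileserver.example.edu:/export/cryosparc  /mnt/cryosparc  nfs  defaults,_netdev  0 0
```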
You can then install cryoSPARC as you normally would (using the Master and Worker installation guides), being sure to specify the path to your existing database directory. Follow the migration tutorial to update any locations stored inside the database as necessary.
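As a rough sketch, the master install might look like the following; the database path is hypothetical, and the exact flag names should be taken from your version's install guide rather than from this example:

```shell
# Hypothetical location of the migrated database on the shared disk:
DB_PATH=/mnt/cryosparc/cryosparc_database
# Install the master, pointing it at the existing database
# (flags per the cryoSPARC install guide; verify against your version):
#   cd cryosparc_master
#   ./install.sh --license $LICENSE_ID \
#                --hostname new-master.example.edu \
#                --dbpath "$DB_PATH" \
#                --port 39000
#   cryosparcm start
```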
You can delete your old cryoSPARC instance at this point.
In terms of separating data access, having two cryoSPARC master instances (with separate databases) is definitely the way to go. The master instance is a very lightweight web server and doesn't take many resources. You can set up your workers to serve both instances; at that point, it's up to the users to coordinate scheduling jobs, as the cryoSPARC instances don't know about each other and the internal job scheduler doesn't take into account other applications running on the workstations. If, however, you set up your compute resources to be managed by a cluster scheduler like SLURM, cryoSPARC will integrate with it nicely (this is our preferred system architecture for system administrators with data separation in mind).
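The two arrangements above might look like this in practice; the hostnames and ports are placeholders, and the commands are shown as a sketch to be checked against the guides for your version:

```shell
# Sketch: one workstation registered with two independent masters
# (run from the worker's cryosparc_worker directory; hostnames/ports are placeholders):
#   ./bin/cryosparcw connect --worker gpu-box-1 --master instance-a --port 39000
#   ./bin/cryosparcw connect --worker gpu-box-1 --master instance-b --port 40000
# For SLURM-managed resources, register a cluster lane on each master instead:
#   cryosparcm cluster example slurm   # writes template cluster_info.json / cluster_script.sh
#   cryosparcm cluster connect
```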
You can always add users and promote them to an "Administrator" role, although for migration purposes it's best to do this after you migrate, to minimize the number of things that can go wrong. Also, cryoSPARC v2.12 comes out soon; with it, you will get a new User Management system that will let you create and promote users easily via the UI. I'll link a tutorial here when it's out.