Migration to new server with configuration change

Hi,

Our workstation with a standalone cryosparc setup is broken and I would like to switch to a new server with a master-worker setup using the ‘old’ database. We cannot run ‘cryosparcm stop’ as suggested in the migration tutorial at this point; however, the complete installation and the database are on two network disks. Therefore, I wanted to ask:

  1. How can I migrate our database/installation to a new server AND switch to a master-worker setup?

  2. We have several labs using cryosparc and we want to separate data access. Is it possible to have 2 different masters (2 different servers) accessing the same cluster for work tasks (i.e. the same workers)? I have seen that the workers need LICENSE_ID="<license_id>". Do we need to use the same license key? Does every master need a different license key and, in turn, can a worker only communicate with one master? Should we use different submit nodes for the 2 masters to avoid trouble? Of course, nobody wants to have 2 masters.

  3. As a side question, can we change the cryosparc user with the new installation, or is it better to keep the ‘old’ user as the cryosparc user since we want to migrate the old database? We will probably keep the ‘old’ user, but it is good to know in case we need to change.

Cheers,
christian

Hi @cbiertue,

Moving to a new master-worker setup is actually quite easy. The only tricky part is that it is important to stop the cryoSPARC application, as well as the database process, before you move the database or connect it to another cryoSPARC instance.

The first step would be to stop cryoSPARC, and check that no zombie processes are running in the background:
cryosparcm stop
ps -ax | grep "supervisord" #check for any orphaned cryoSPARC-related processes that are running and kill them
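If ps does report a leftover process, it can be killed by PID; the PID below is just a placeholder for whatever ps prints:
kill 12345 #replace 12345 with the PID of the orphaned supervisord (or other cryoSPARC-related) process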

The second step would be to set up the new server you’d like to run the master node on, based on the prerequisites and installation instructions found here.
Essentially:

  1. Create a user account (e.g., cryosparcuser) that will run the cryoSPARC application on the master node and each worker node that will be used for computation.
  2. Set up SSH access between the master node and each standalone worker node. The cryosparcuser account should be able to SSH without a password (using a key pair) into all non-cluster worker nodes (see the sketch after this list).
  3. Ensure ports 39000 - 39010 (configurable) are accessible between every worker node and the master node.
  4. Ensure a shared file system is present across the master node and every worker node to store raw data and the project directories.
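As a rough sketch of steps 2 and 3 (the hostnames, user name and port below are placeholders; adapt them to your machines):
ssh-keygen -t rsa #run once as cryosparcuser on the master; accept the default key location
ssh-copy-id cryosparcuser@worker1 #repeat for every non-cluster worker node
ssh cryosparcuser@worker1 #should now log in without prompting for a password
nc -zv master-hostname 39000 #from a worker, confirm the master's base port is reachable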

Once this is complete, mount the network disk that contains the database and the project directories onto this server.
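For example, assuming the disks are exported over NFS (the server name and paths are purely illustrative; add matching /etc/fstab entries if you want the mounts to survive reboots):
sudo mount -t nfs fileserver:/export/cryosparc_database /mnt/cryosparc_database
sudo mount -t nfs fileserver:/export/cryosparc_projects /mnt/cryosparc_projects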

You can then install cryoSPARC as you normally would (using the Master and Worker installation guides), making sure to specify the path to your existing database directory. Follow the migration tutorial to modify any locations inside the database as necessary.
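For reference, the database path can be passed at install time and verified afterwards in the master’s config.sh. The flag and variable names below are from memory of the installation guide, and the paths and hostname are placeholders, so please verify against the documentation for your version:
./install.sh --license $LICENSE_ID --hostname cryosparc-master.example.com --dbpath /mnt/cryosparc_database/cryosparc2_database --port 39000
grep CRYOSPARC_DB_PATH cryosparc2_master/config.sh #should point at the existing database directory on the network disk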

You can delete your old cryoSPARC instance at this point.
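For example (the paths are only illustrative; do this only once the new instance is confirmed working and nothing is using the old database anymore):
rm -rf /old/path/cryosparc2_master /old/path/cryosparc2_worker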

In terms of separating data access, having two cryoSPARC master instances (with separate databases) is definitely the way to go. The master instance is a very lightweight web server and doesn’t require many resources. You can set up your workers to be workers for both instances (a rough sketch of this is below); at that point, it’s up to the users to coordinate job scheduling, as the cryoSPARC instances don’t know about each other, and the internal job scheduler doesn’t take into account other applications running on the workstations. If, however, you were to have your compute resources managed by a cluster scheduler like SLURM, cryoSPARC will integrate with it nicely (this is our preferred system architecture for system administrators with data separation in mind).
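One possible way to register the same physical machine as a worker for two instances is sketched below (the hostnames, port and cache path are placeholders, and whether you need one or two worker installations, as well as the exact options, should be checked against cryosparcw connect in the docs for your version):
./bin/cryosparcw connect --worker worker1.example.com --master master-lab-a.example.com --port 39000 --ssdpath /scratch/cryosparc_cache
./bin/cryosparcw connect --worker worker1.example.com --master master-lab-b.example.com --port 39000 --ssdpath /scratch/cryosparc_cache #run from the worker installation belonging to the second instance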

You can always add users and promote them to an “Administrator” role, although for migration purposes it’s best to do this after you migrate, to minimize the number of things that can go wrong. Also, cryoSPARC v2.12 comes out soon; with it, you will get a new User Management system that will let you create and promote users easily via the UI. I’ll link a tutorial here when it’s out.
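For completeness, users can also be created from the command line. The flags below are from memory of the cryosparcm reference and may differ between versions, so confirm against the documentation before relying on them:
cryosparcm createuser --email "user@example.com" --password "temporary-password" --name "Jane Doe" #creates a regular user; promotion to Administrator can be done once the migration is finished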

Thank you for your quick reply @stephan,

Our server totally crashed, so there is no job running (it is still down with a hardware problem). I can see the mongo lock file in the cryosparc directory. Can I just delete this and continue with the procedure you explained, or would it be better to run the ‘cryosparcm stop’ command to somehow close the database? Does the stop only remove active processes, or does it do anything to the db? At the time of the server crash, no cryosparc job was running.

christian

Hi @cbiertue,

Apologies for the late reply. I’m not sure if you’ve overcome this step, but yes, it is safer to run cryosparcm stop, as it’ll try to stop the mongo process gracefully. If that doesn’t work, you should first ensure there are no mongo processes running (ps -ax | grep "mongo") and kill them if there are (kill <PID>). If that still doesn’t work, you can delete the mongo....-XXX.sock file in the /tmp folder, and only delete the .lock file as a last resort.
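In shell terms, the order of escalation would look roughly like this (the PID, socket file name and database path are placeholders for whatever you find on your system):
cryosparcm stop #preferred: shuts the application and the database down gracefully
ps -ax | grep "mongo" #if that fails, look for leftover mongod processes
kill <PID> #terminate any leftover mongod process found above
ls /tmp/mongo*.sock #if the processes are gone but startup still complains, look for a stale socket file here and remove it
rm /path/to/cryosparc_database/mongod.lock #last resort only, and only once you are certain no mongod process is still running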