Migrating from single workstation to master-worker

Hi all,
I’m running into trouble while testing a migration of my single-workstation instance to a master-worker model. I have two single-workstation instances that are beginning to interfere with each other (understandably) because they share a duplicate license.

I’d like to add the worker of one of my instances to the master of another, but when I run ./cryosparcw connect --worker <worker fqdn> --master <master fqdn> --port <my port> --ssdpath <my ssd cache path> --lane default --newlane --update

I am met with

--------------------------------------------------------------
  CRYOSPARC CONNECT --------------------------------------------
 ---------------------------------------------------------------
  Attempting to register worker <worker fqdn> to command <master fqdn>:<master port + 2>
  Connecting as unix user <myuser>
  Will register using ssh string: <myuser>@<worker fqdn>
  If this is incorrect, you should re-run this command with the flag --sshstr <ssh string> 
 ---------------------------------------------------------------
Traceback (most recent call last):
  File "bin/connect.py", line 79, in <module>
    assert cli.test_connection(), "Could not connect to cryoSPARC master at %s:%d" % (master_hostname, command_core_port)
AttributeError: 'CommandClient' object has no attribute 'test_connection'

I can ssh from my worker node to the master node. Here is some more info about the master:

  • Linux 4.18.0-477.15.1.el8_8.x86_64 #1 SMP Wed Jun 28 15:04:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
               total        used        free      shared  buff/cache   available
Mem:          755Gi        19Gi       3.3Gi        56Mi       732Gi       731Gi
Swap:          99Gi       9.9Gi        90Gi

Both master and worker are version 4.4.1. Do I need to somehow detach the worker from its master (localhost) in order to connect it to a different master?

Any help would be appreciated!

Welcome to the forum @jpb.

Can you contact the master’s command_core port from the worker:

curl <master fqdn>:<master port + 2>

?
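For context, cryosparcw connect expects the master’s base port and contacts command_core at base port + 2. A minimal sketch of the port arithmetic, assuming the default base port of 39000 (your actual value is CRYOSPARC_BASE_PORT in cryosparc_master/config.sh):

```shell
# Sketch: work out which port command_core should answer on.
# Assumption: default CRYOSPARC_BASE_PORT of 39000 — read the real value
# from cryosparc_master/config.sh on your master before relying on this.
base_port=39000
command_core_port=$((base_port + 2))   # command_core listens at base port + 2
echo "command_core expected at port ${command_core_port}"

# From the worker, check reachability (replace <master fqdn> with your host):
#   curl <master fqdn>:${command_core_port}
```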

Not necessarily, but keep in mind that once you have successfully “shared” the worker between the instances, the built-in CryoSPARC scheduler of one “sharing” instance will not be aware of workloads scheduled by the built-in scheduler of the other. This limitation can be overcome by connecting both CryoSPARC instances to a common, external cluster workload manager.

In addition to the version, installed patches must also match between master and worker.
Possible solutions to the license id match requirement are discussed in Licence key mismatch - #4 by wtempel.

Thanks @wtempel

Well. I feel pretty dumb. I was using the database port in my connect command…

It works now when I issue the proper command. Thank you for your help.

SOLUTION: Pass the master’s correct base port to --port when running ./cryosparcw connect --worker <worker> --master <master> --port <port> ... — I had mistakenly supplied the database port instead.