Can my worker node be my GPU node?

I am attempting to install cryoSPARC on my local cluster. I have successfully downloaded and installed the master software, and am now attempting to connect my worker node.

I am unsure how to find my worker hostname. Is this my GPU node? If so, how do I find the IP address or hostname to designate it as my worker node? On the cluster, can my worker node be the same as my master node?

@renee If you are planning to queue cryoSPARC jobs to an existing cluster workload manager, such as SLURM, you may follow the cluster section of the guide. Worker node hostnames may not be directly relevant in that case.
If, on the other hand, the cryoSPARC master will control resource allocation, the managed worker section applies. In that case, the hostname of each worker can be obtained with the hostname -f command. Also ensure that the network is configured such that the master can contact the worker/GPU node at the resulting hostname; the connection may fail if the hostname on the worker node is misconfigured or inconsistent with your network’s address resolution (for example, DNS) configuration. A quick check is sketched below.
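As a quick sanity check (the hostname below is a placeholder for your own worker’s name), something like the following can confirm that the master can resolve and reach the worker:

```
# On the worker/GPU node: print the fully qualified hostname
hostname -f

# On the master node: check that the name resolves (via DNS or /etc/hosts)
# and that the worker is reachable; substitute the output of the command above
getent hosts gpu-node-01.example.org
ping -c 1 gpu-node-01.example.org
```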
Both the cluster and managed worker cases require a functional installation of cryoSPARC worker software, possibly shared, on all relevant GPU nodes.
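For the managed worker case specifically, once the worker package is installed on the GPU node, the connection step looks roughly like this; the hostnames, port, and cache path are placeholders for your own values:

```
# Run on the GPU/worker node, from inside the cryosparc_worker directory
bin/cryosparcw connect \
  --worker gpu-node-01.example.org \
  --master master-node.example.org \
  --port 39000 \
  --ssdpath /scratch/cryosparc_cache   # or --nossd if there is no cache SSD
```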
A cluster node can, in some circumstances, be configured more simply as a cryoSPARC “Single Workstation” instance, but that would not be optimal when the cryoSPARC instance and a multi-node GPU cluster are shared by many users.

Hi @wtempel, thank you for your reply. So, for a cluster configuration, I need to follow the “Master Node Only” instructions, then the “GPU Worker Node cryoSPARC installation” instructions, skip “Connect a Managed Worker to CryoSPARC”, and go directly to “Connect a Cluster to CryoSPARC”.

My cluster manager has advised me to install cryoSPARC into my personal directory on the cluster. Currently that folder contains both cryosparc_master and cryosparc_worker. I am getting hung up on master_hostnames and worker_hostnames.
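For reference, this is roughly what I have in cryosparc_master/config.sh at the moment, with the license and hostname replaced by placeholders, in case it helps pinpoint where I am going wrong:

```
# cryosparc_master/config.sh (excerpt; values are placeholders)
export CRYOSPARC_LICENSE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export CRYOSPARC_MASTER_HOSTNAME="master-node.example.org"
export CRYOSPARC_BASE_PORT=39000
```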

Thanks again for your help.

@renee Please share links to the relevant cryoSPARC guide sections with your cluster manager, if you haven’t already. You’ll want your cluster manager’s help with this part of the cryoSPARC configuration, specifically with:

  • identifying a host that should run the master process(es). There should be one master per cryoSPARC “instance”, which can control multiple workers directly and/or can be integrated with multiple clusters.
    • Master process(es) need to be allowed to run continuously during processing tasks, even when those processing tasks are executed on a different (GPU/worker) host. Master processes should not be disrupted; they should be shut down “cleanly” using cryosparcm stop, and only when no processing tasks are running.
    • Certain network ports on the master need to be accessible, but port access must be limited to trusted networks (a quick check is sketched after this list).
  • configuring cluster scripts and templates for your specific scheduler and cluster (a minimal example follows at the end of this post).
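As a rough check of the port requirement above, and assuming the default base port of 39000, the master’s web interface should respond from a trusted host:

```
# From a trusted host on the same network as the master
# (replace the hostname; 39000 is the default CRYOSPARC_BASE_PORT)
curl -I http://master-node.example.org:39000
```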

Your cluster manager will also be able to help with the relevant hostnames.
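For the cluster scripts and templates mentioned above, here is a minimal sketch of a cluster_info.json for a SLURM scheduler; the name, paths, and command templates are placeholders that you and your cluster manager will likely need to adapt:

```
{
  "name": "my-slurm-cluster",
  "worker_bin_path": "/path/to/cryosparc_worker/bin/cryosparcw",
  "cache_path": "/scratch/cryosparc_cache",
  "send_cmd_tpl": "{{ command }}",
  "qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
  "qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
  "qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
  "qinfo_cmd_tpl": "sinfo"
}
```

This file, together with a matching cluster_script.sh submission template, is registered with cryosparcm cluster connect, run from the directory that contains both files.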