Installation and configuration on Rocky cluster with compute nodes and GPU nodes

Welcome to the forum @anil.misal.

You may achieve this by running CryoSPARC instances under separate Linux accounts. Each CryoSPARC instance requires

  • unique (external to, and mounted into, the container) storage for its database
  • a non-overlapping master port range ($CRYOSPARC_BASE_PORT), separated from other instances' ranges by at least 10 ports (see the example after this list)
  • unique (external to, and mounted into, the container) storage for its cryosparc_master/run/ directory
  • a unique license ID

Other components may be shared.
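
For example, two instances under Linux accounts alice and bob might be configured like this (base ports, paths, and license placeholders are illustrative; adjust for your site):

```
# user alice's instance
CRYOSPARC_BASE_PORT=39000                     # first of the ~10 ports reserved for this instance
CRYOSPARC_DB_PATH=/home/alice/cryosparc_db   # alice-only database storage
CRYOSPARC_LICENSE_ID="<alice's license id>"  # placeholder

# user bob's instance: base port at least 10 higher, separate storage
CRYOSPARC_BASE_PORT=39010
CRYOSPARC_DB_PATH=/home/bob/cryosparc_db
CRYOSPARC_LICENSE_ID="<bob's license id>"    # placeholder
```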

For the cryosparc_worker/ installation, one may remove the CRYOSPARC_LICENSE_ID definition from cryosparc_worker/config.sh and instead define it inside the Slurm script template.
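
A minimal sketch of the relevant part of such a cluster_script.sh template, assuming the standard CryoSPARC cluster-script template variables ({{ run_cmd }}, {{ num_cpu }}, etc.); the partition name and license value are placeholders:

```
#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --partition=gpu                  # illustrative partition name
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ (ram_gb*1000)|int }}MB

# License defined here instead of in cryosparc_worker/config.sh,
# so each user's jobs carry that user's own license ID (placeholder below)
export CRYOSPARC_LICENSE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

{{ run_cmd }}
```

Each CryoSPARC instance would register its own cluster lane with its own copy of this template, so the exported license ID differs per user.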
A shared cryosparc_master/ installation may be maintained via a common container image (Docker, Podman, etc.), where

  • the definitions of CRYOSPARC_MASTER_HOSTNAME, CRYOSPARC_DB_PATH, CRYOSPARC_BASE_PORT and CRYOSPARC_LICENSE_ID are removed from the cryosparc_master/config.sh file inside the container, and these variables are instead defined uniquely for each Linux user-specific container instance via the container platform's environment management
  • two volumes specific to the Linux user and CryoSPARC instance are mounted at ${CRYOSPARC_DB_PATH} and cryosparc_master/run/ (mount points inside the container instance), respectively (see the sketch after this list).
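
A sketch of launching such a per-user container instance, not a definitive recipe: the image name, host paths, in-container install path, base port, and host networking are all assumptions to be adapted to your setup.

```
# Launch user alice's CryoSPARC master from the shared image.
# -e supplies the per-instance variables removed from config.sh;
# -v mounts alice's database and run/ storage into the container.
podman run -d --name cryosparc_master_alice \
  --network host \
  -e CRYOSPARC_MASTER_HOSTNAME="$(hostname -f)" \
  -e CRYOSPARC_DB_PATH=/cryosparc_db \
  -e CRYOSPARC_BASE_PORT=39000 \
  -e CRYOSPARC_LICENSE_ID="<alice's license id>" \
  -v /home/alice/cryosparc_db:/cryosparc_db \
  -v /home/alice/cryosparc_run:/opt/cryosparc_master/run \
  cryosparc_master:latest   # hypothetical shared image name
```

With host networking, each user's web interface would then be reachable at that instance's own base port.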