We were having trouble starting cryosparcm with Slurm on the cluster, so we used salloc instead to get an allocated node for GPU access. For that to work we had to override CRYOSPARC_MASTER_HOSTNAME with an environment variable in config.sh, please see below.
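For illustration, a minimal sketch of what such an override in `cryosparc_master/config.sh` might look like; the use of `$(hostname)` to pick up the salloc-assigned node is an assumption about how the value was obtained, not a confirmed copy of our config:

```shell
# cryosparc_master/config.sh (sketch, not verbatim)
# Assumption: bind the master to whichever node salloc lands us on,
# rather than a fixed hostname baked in at install time.
export CRYOSPARC_MASTER_HOSTNAME=$(hostname)
# Base port matching the port forwarded in the ssh tunnel below.
export CRYOSPARC_BASE_PORT=39000
```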
This starts cryosparcm fine, and we were able to reach the user interface with:
ssh -N -L localhost:39000:localhost:39000 -J email@example.com firstname.lastname@example.org
After pasting localhost:39000 into Chrome, we could access the user interface fine. But the Resource Manager shows our jump hostname instead of the target node, so errors appear when we try to run any jobs.
Welcome to the forum @parag.
Can you recall the precise install.sh command(s) used during cryoSPARC installation, and on which host(s) they were executed?
Are you attempting to simulate a “single workstation” (combined master/worker) instance?
I think we were attempting to simulate a single workstation.
The host registered during installation is our login node (written as email@example.com above). Should we try installing on the compute node instead?