Hi!
I am currently trying to run cryoSPARC on a node of a computing cluster, using a SLURM sbatch script. The software will probably be set up as a regular cluster install in the future, but for now I have to run it as a single workstation. I have installed the master and worker on the login node, and everything works fine when I run it there. Since I do not want to run calculations on the login node, I want to request a compute node each time I use the software. However, when I submit my script via sbatch, cryosparcm stops at:
Starting cryoSPARC System master process…
CryoSPARC is not already running.
database: started
command_core: started
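For what it's worth, this is how I have been inspecting the state of the master after the job stalls, using the standard `cryosparcm` subcommands (run from a shell with the cryoSPARC environment loaded; I am assuming the logs are the right place to look):

```shell
# Check which cryoSPARC services report as running
cryosparcm status

# Tail the logs of the services that did start, to see
# where the startup sequence stops
cryosparcm log database
cryosparcm log command_core
```

So far I have not spotted an obvious error there, but I may be looking at the wrong log.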
The script I am using looks like this:
#!/bin/bash
#SBATCH -A xxx
#SBATCH --gres=gpu:k80:1
#SBATCH -t 01:00:00
module load GCC/7.3.0-2.30
module load CUDA/9.2.88
source ~/.bashrc
eval $(cryosparcm env)
cryosparcm start
cd /path/to/cryosparc2_worker
bin/cryosparcw connect --worker $HOSTNAME --master $HOSTNAME --ssdpath /scratch
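One thing I am unsure about: since `cryosparcm start` launches background daemons, could the batch script be exiting after the last command, causing SLURM to clean up the cryoSPARC processes? A sketch of the workaround I have been considering (hypothetical and untested on my cluster) is to keep the job step alive after startup:

```shell
#!/bin/bash
#SBATCH -A xxx
#SBATCH --gres=gpu:k80:1
#SBATCH -t 01:00:00

module load GCC/7.3.0-2.30
module load CUDA/9.2.88
source ~/.bashrc

eval $(cryosparcm env)
cryosparcm start

cd /path/to/cryosparc2_worker
bin/cryosparcw connect --worker "$HOSTNAME" --master "$HOSTNAME" --ssdpath /scratch

# Hypothetical: keep the batch script (and therefore the allocation)
# alive so SLURM does not tear down the cryoSPARC daemons when the
# script would otherwise exit.
sleep infinity
```

I have no idea whether this is the actual cause, though, since the log seems to stop during startup rather than after it.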
Do you have any idea what might be wrong? I am open to any and all suggestions!