I’ve been working with my university’s computing center to get CryoSPARC set up and configured on our cluster. I believe I have my sample submit script and configuration.json files set up properly; however, upon running the T20S benchmark, the patch motion job is interrupted with the following error:
License is valid.
Launching job on lane reichow-cs target reichow-cs ...
Launching job on cluster reichow-cs
====================== Cluster submission script: ========================
==========================================================================
#!/bin/bash
#SBATCH --job-name=cryosparc_P1_J7
#SBATCH --partition=gpu
#SBATCH --account=reichowlab
#SBATCH --output=/home/exacloud/gscratch/reichowlab/P1/J7/job.log
#SBATCH --error=/home/exacloud/gscratch/reichowlab/P1/J7/job.log
#SBATCH --nodes=1
#SBATCH --qos=normal
#SBATCH --mem-per-cpu=8G
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --error=/home/exacloud/gscratch/reichowlab/P1/J7/error.txt
#SBATCH --gres=gpu:4
available_devs=""
for devidx in $(seq 1 16);
do
if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]] ; then
if [[ -z "$available_devs" ]] ; then
available_devs=$devidx
else
available_devs=$available_devs,$devidx
fi
fi
done
export CUDA_VISIBLE_DEVICES=$available_devs
srun /home/exacloud/gscratch/reichowlab/local/cryosparc_worker/bin/cryosparcw run --project P1 --job J7 --master_hostname reichow-cs.ohsu.edu --master_command_core_port 39002 > /home/exacloud/gscratch/reichowlab/P1/J7/job.log 2>&1
==========================================================================
==========================================================================
-------- Submission command:
sbatch /home/exacloud/gscratch/reichowlab/P1/J7/queue_sub_script.sh
Failed to launch! 1
I’ve gone back and forth a few times with the computing center and corrected an error I had initially, but this remaining problem is stumping me a little. There isn’t an obvious error message, and in the end CryoSPARC isn’t writing out the job.log or error.txt files that I have the sbatch command configured to produce (and that others have been asked to post for similar problems). Directories and some files (like job.json and queue_sub_script.sh) are written out, though.
To narrow this down, I think a look at your “cluster_script.sh” and “cluster_info.json” files would help.
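In the meantime, since “Failed to launch! 1” most likely just reflects a non-zero exit status from the submission command, it may be easier to see the real reason by submitting the generated script by hand, so sbatch’s own error message isn’t swallowed. A rough sketch, using the paths from your log above and only standard SLURM commands:

cd /home/exacloud/gscratch/reichowlab/P1/J7

# Submit the script CryoSPARC generated; any partition/account/QOS/GRES
# complaint from the scheduler will be printed right here.
sbatch queue_sub_script.sh

# If the submission is accepted, confirm the job is pending or running:
squeue -u $USER

# Once it leaves the queue, check its final state and exit code:
sacct -u $USER --format=JobID,JobName,Partition,State,ExitCode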
Also, does your SLURM cluster use cgroups? If it does, you should leave out all of the available_devs/CUDA_VISIBLE_DEVICES logic so that the job only runs on the GPUs SLURM actually allocated.
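One way to check is to look at the SLURM configuration directly. A rough sketch follows; note that the cgroup.conf path and the single-GPU test parameters are assumptions about your site:

# Scheduler settings that indicate whether the cgroup plugins are in play:
scontrol show config | grep -i -E 'TaskPlugin|ProctrackType|SelectType'

# If cgroups are used, ConstrainDevices=yes means each job only sees the
# GPUs it was allocated (config path may differ on your cluster):
grep -i ConstrainDevices /etc/slurm/cgroup.conf

# Practical test: request one GPU and count the devices the job can see.
srun --partition=gpu --account=reichowlab --gres=gpu:1 nvidia-smi -L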
The submission script above already includes the nCPU, nGPU, and RAM values set in CryoSPARC. And with cgroups activated in SLURM, CUDA_VISIBLE_DEVICES is set automatically at job submission, which makes the GPU nodes much more flexible: only the resources a job needs are allocated, so several jobs can run on a GPU node simultaneously.
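For comparison, here is a minimal sketch of what such a cluster_script.sh could look like without the GPU-probing loop, leaning on cgroups for device visibility. It assumes the standard CryoSPARC cluster template variables ({{ run_cmd }}, {{ num_cpu }}, {{ num_gpu }}, {{ ram_gb }}, {{ project_uid }}, {{ job_uid }}, {{ job_log_path_abs }}); the partition and account are just copied from your script above, so adapt them to your site:

#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --partition=gpu
#SBATCH --account=reichowlab
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ ram_gb }}G
#SBATCH --output={{ job_log_path_abs }}
#SBATCH --error={{ job_log_path_abs }}

# No CUDA_VISIBLE_DEVICES handling here: with cgroup device constraints,
# SLURM exposes only the allocated GPUs to the job.
srun {{ run_cmd }}

With --cpus-per-task, --gres, and --mem driven by the template variables, each job requests exactly what CryoSPARC asks for.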
Thank you - I think your changes to my submit script worked, but I’m not sure whether the jobs are actually queued. I haven’t gotten any output yet, but the job is listed in CryoSPARC as ‘Launched’ and appears in my list of active jobs, though I don’t see it in the queue from the command line yet.