The issue described here still exists depending on the SLURM configuration. I just wanted to post a different solution: remove -g 0 from $@ and use $SLURM_JOB_GPUS instead. (Recall that you already need a wrapper shim to run deepEMhancer on GPU.)

Modified deepemhancer_gpu wrapper shim:
#!/usr/bin/env bash
if command -v conda > /dev/null 2>&1; then
    # deactivate twice in case an environment is stacked on top of base; ignore any errors
    conda deactivate > /dev/null 2>&1 || true
    conda deactivate > /dev/null 2>&1 || true
fi
unset _CE_CONDA
unset CONDA_DEFAULT_ENV
unset CONDA_EXE
unset CONDA_PREFIX
unset CONDA_PROMPT_MODIFIER
unset CONDA_PYTHON_EXE
unset CONDA_SHLVL
unset PYTHONPATH
unset LD_PRELOAD
unset LD_LIBRARY_PATH
# Remove "-g 0" from $@ by rebuilding the positional parameters:
# each original argument is inspected as $1 and either re-appended
# to the list or skipped (together with its value) before shifting.
skip=0
for arg do
    case $1 in
        -g)
            if [ "$2" = 0 ]; then
                skip=2
            fi
            ;;
    esac
    if [ "$skip" -eq 0 ]; then
        set -- "$@" "$1"
    else
        skip=$(( skip - 1 ))
    fi
    shift
done
. /path/to/miniforge3/etc/profile.d/conda.sh
conda activate /path/to/miniforge3/envs/deepEMhancer_env
exec deepemhancer -g "$SLURM_JOB_GPUS" "$@"
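
For reference, here is a minimal sketch of a SLURM batch script that exercises the shim; the partition name, resource requests, and input/output file names are placeholders, and the exact --gres syntax depends on your site's configuration. When a GPU is allocated via --gres, SLURM populates SLURM_JOB_GPUS with the index (or comma-separated indices) of the assigned device(s), which the shim passes on via -g:

#!/usr/bin/env bash
#SBATCH --job-name=deepemhancer
#SBATCH --partition=gpu          # placeholder partition name
#SBATCH --gres=gpu:1             # request one GPU; SLURM sets SLURM_JOB_GPUS accordingly
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

# Any "-g 0" passed here (e.g. by a pipeline that hard-codes it) is stripped
# by the shim and replaced with the GPU index SLURM actually assigned.
deepemhancer_gpu -g 0 -i postprocess_masked.mrc -o postprocess_deepemhancer.mrc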