Motioncorr: Failed to check cluster job status! 1, Command '['qstat', '-as', 'submitted']' returned non-zero exit status 1

I installed v2 on our cluster. The installation looked successful, but when I run motioncorr I get the messages "Command '['qstat', '-as', 'submitted']' returned non-zero exit status 1" and "Failed to check cluster job status! 1". I am not sure what I am doing wrong here. Below is the output on startup, plus the cluster_info.json and cluster_script.
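For reference, the failing check can be reproduced outside cryoSPARC by running the templated qstat command by hand (the job id below is a placeholder, not one from my logs):

qstat -as 1234567
echo "exit status: $?"   # a non-zero status here matches the error above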

cryosparcm restart
CryoSPARC is running.
Stopping cryosparc.
command_proxy: stopped
command_vis: stopped
webapp: stopped
command_core: stopped
database: stopped
Shut down
Starting cryoSPARC System master process…
CryoSPARC is not already running.
database: started
command_core: started
cryosparc command core startup complete.
command_vis: started
command_proxy: started
webapp: started

CryoSPARC master started.
From this machine, access the webapp at
From other machines on the network, access at

Startup can take several minutes. Point your browser to the address
and refresh until you see the cryoSPARC web interface.

more cluster_info.json:
{
    "qdel_cmd_tpl": "qdel {{ cluster_job_id }}",
    "worker_bin_path": "/data1/cryosparc_user/software/cryosparc/cryosparc2_worker/bin/cryosparcw",
    "title": "emm",
    "cache_path": "/data1/cryosparc_user/cryosparc_cache",
    "qinfo_cmd_tpl": "qstat -q",
    "qsub_cmd_tpl": "qsub {{ script_path_abs }}",
    "qstat_cmd_tpl": "qstat -as {{ cluster_job_id }}",
    "cache_quota_mb": null,
    "send_cmd_tpl": "{{ command }}",
    "cache_reserve_mb": 10000,
    "name": "emm"
}
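For completeness, this configuration was registered with the standard cryoSPARC v2 cluster connect step, which reads cluster_info.json and the cluster script from the current directory (the path below is just whatever directory holds the two files):

cd <directory containing cluster_info.json and the cluster script>
cryosparcm cluster connect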

more cluster_script:
#!/usr/bin/env bash

## cryoSPARC cluster submission script template for PBS
##
## Available variables:
## {{ run_cmd }}            - the complete command string to run the job
## {{ num_cpu }}            - the number of CPUs needed
## {{ num_gpu }}            - the number of GPUs needed.
##                            Note: the code will use this many GPUs starting from dev id 0;
##                            the cluster scheduler or this script have the responsibility
##                            of setting CUDA_VISIBLE_DEVICES so that the job code ends up
##                            using the correct cluster-allocated GPUs.
## {{ ram_gb }}             - the amount of RAM needed in GB
## {{ job_dir_abs }}        - absolute path to the job directory
## {{ project_dir_abs }}    - absolute path to the project dir
## {{ job_log_path_abs }}   - absolute path to the log file for the job
## {{ worker_bin_path }}    - absolute path to the cryosparc worker command
## {{ run_args }}           - arguments to be passed to cryosparcw run
## {{ project_uid }}        - uid of the project
## {{ job_uid }}            - uid of the job
## {{ job_creator }}        - name of the user that created the job (may contain spaces)
## {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)
##
## What follows is a simple PBS script:

#PBS -N cryosparc_{{ project_uid }}_{{ job_uid }}
#PBS -l select=1:ncpus={{ num_cpu }}:ngpus={{ num_gpu }}:mem={{ (ram_gb*1000)|int }}mb:gputype=K40m
#PBS -o {{ job_dir_abs }}
#PBS -e {{ job_dir_abs }}

available_devs=""
for devidx in $(seq 1 16); do
    if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]]; then
        if [[ -z "$available_devs" ]]; then
            available_devs=$devidx
        else
            available_devs=$available_devs,$devidx
        fi
    fi
done
export CUDA_VISIBLE_DEVICES=$available_devs

{{ run_cmd }}
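The loop above collects the indices of idle GPUs (those reporting no compute processes to nvidia-smi) into a comma-separated list for CUDA_VISIBLE_DEVICES. To test the submission path itself, a rendered copy of this script can be submitted by hand (filename hypothetical); PBS should print the new job id, which is presumably what gets substituted for {{ cluster_job_id }} in qstat_cmd_tpl:

qsub rendered_cluster_script.sh   # hypothetical rendered copy of the template above
# qsub should print a job id such as 123456.pbsserver
qstat -as 123456                  # the same check cryoSPARC runs via qstat_cmd_tpl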