cryoSPARC 3.2 and LSF

Hello,

We recently installed cryoSPARC 3.2, and although the release notes suggest native support for the LSF scheduler, we are having trouble getting it to work.

cluster_info:
{
"name": "lilac",
"title": "lilac",
"worker_bin_path": "/opt/common/cryosparc3.1/cryosparc_worker/bin/cryosparcw",
"send_cmd_tpl": "{{ command }}",
"qsub_cmd_tpl": "/admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bsub < {{ script_path_abs }}",
"qstat_cmd_tpl": "/admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bjobs -l {{ cluster_job_id }}",
"qdel_cmd_tpl": "/admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bkill {{ cluster_job_id }}",
"qinfo_cmd_tpl": "/admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bqueues",
"cache_path": "/scratch",
"cache_quota_mb": null,
"cache_reserve_mb": 10000
}
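
For reference, cryoSPARC fills in the {{ }} template variables itself, so with the qsub_cmd_tpl above the submission command it ends up running is a single shell line of the form (path illustrative):

/admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bsub < /path/to/project/job/queue_sub_script.sh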

cluster_script.sh

[cryosparc_user3@pearl command_core]$ cat cluster_script.sh
#!/bin/bash
#BSUB -J cryosparc_{{ project_uid }}{{ job_uid }}{{ cryosparc_username }}
#BSUB -q gpuqueue
#BSUB -e {{ job_dir_abs }}/%J.err
#BSUB -o {{ job_dir_abs }}/%J.out
#BSUB -n 1
#BSUB -R "span[ptile=1]"
#BSUB -R "rusage[mem={{ ram_gb }}]"
#BSUB -gpu "num=1:j_exclusive=yes:mode=shared"
#BSUB -W 36:00

##Load modules

{{ run_cmd }}

Error:

subprocess.CalledProcessError: Command '['/admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bsub', '<', '/data/hite/IP3R/P3/J9/queue_sub_script.sh']' returned non-zero exit status 255.

Please advise

Thanks
Neeraj

Hi @neeraj,

Could you log onto the master node as the user account that runs cryoSPARC, run the command directly: /admin/lsflilac/lsf/10.1/linux3.10-glibc2.17-x86_64/bin/bsub < /data/hite/IP3R/P3/J9/queue_sub_script.sh, and see if you can get more information?

Stephan
Yes, that works without a problem. The job starts, and cryoSPARC is able to track the process after the job starts. That makes me wonder if the "<" redirection is again not understood by cryoSPARC. I remember this was fixed in v2.15.2-live_privatebeta.
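
As a standalone illustration of what I suspect is happening (this is not the cryoSPARC code itself, and the path is a placeholder): when the command is split into a list, no shell ever sees the "<", so it is handed to bsub as a literal argument and the script is never fed to bsub's stdin.

import shlex
import subprocess

# Placeholder path, not the real job directory; needs an LSF host with bsub on PATH.
cmd = "bsub < /tmp/queue_sub_script.sh"

# With the list form no shell is involved, so "<" is passed to bsub as an
# ordinary argument instead of being treated as stdin redirection, and the
# #BSUB directives in the script are never read.
print(shlex.split(cmd))  # ['bsub', '<', '/tmp/queue_sub_script.sh']

try:
    subprocess.check_output(shlex.split(cmd), stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as err:
    # In our setup this is where the exit status 255 in the traceback comes from.
    print("bsub failed with exit status", err.returncode)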

Thanks
Neeraj

Hi @neeraj,

My apologies, it looks like you’re right. For some reason, the patch we applied didn’t get merged into the master build. I’ll make sure it’s back in for the next release. For the time being, you’re going to have to manually edit the file cryosparc_command/command_core/__init__.py near line 2283:

try:
    res = subprocess.check_output(shlex.split(cmd), stderr=subprocess.STDOUT).decode()

to

try:
    res = subprocess.check_output(shlex.split(cmd), stderr=subprocess.STDOUT, shell=True).decode()

Stephan,

Unfortunately, that did not work for me. What worked for me is below:

res = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True).decode()
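
In case it helps others: with shell=True and a list argument, Python passes only the first list element to /bin/sh as the command, so the shlex.split variant still does not get the redirection performed. Passing the whole command as a single string lets the shell handle the "<" redirection and feed the script, including its #BSUB directives, to bsub on stdin. A minimal standalone sketch (placeholder path, not the cryoSPARC code itself):

import subprocess

# Placeholder path; needs an LSF host with bsub on PATH.
cmd = "bsub < /tmp/queue_sub_script.sh"

# The shell opens the script and connects it to bsub's stdin, exactly as if
# the line had been typed at a prompt, so bsub can read the #BSUB directives.
res = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True).decode()
print(res)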

Thanks
Neeraj

Hi @neeraj,

This has been patched in v3.2.0+210629. Thanks!