Environment
- cryoSPARC version
$ cryosparcm status
Current cryoSPARC version: v3.0.1
The status output also says “cryosparcm is not running” even though it IS running. Is that normal?
- CUDA version
From cat cryosparc2_worker/config.sh, the CUDA path is:
export CRYOSPARC_CUDA_PATH="/usr/local/cuda-10.2"
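As a quick sanity check that the toolkit at that path is intact, I can run its nvcc directly (assuming the usual bin/ layout under the toolkit root):
$ /usr/local/cuda-10.2/bin/nvcc --version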
- OS (Ubuntu, CentOS, etc)
Red Hat Enterprise Linux Server release 7.9 (Maipo)
- Any other relevant configurations (cluster, headless, etc)
cryoSPARC is installed in standalone mode.
$ ./cryosparcm icli
Python 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:25:08)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
connecting to ai-hpcgpuvrc2.niaid.nih.gov:61002 ...
cli, rtp, db, gfs and tools ready to use
In [1]: {t['hostname']:t['resource_slots']['GPU'] for t in cli.get_scheduler_targets()}
Out[1]: {'ai-hpcgpuvrc2.niaid.nih.gov': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
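For reference, the same check written out as a loop in that icli session; it only relies on the hostname and resource_slots fields already shown above:
In [2]: # count the GPU slots each scheduler target advertises
   ...: for t in cli.get_scheduler_targets():
   ...:     print(t['hostname'], 'GPU slots:', len(t['resource_slots']['GPU']))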
$ ../../cryosparc_worker/bin/cryosparcw gpulist
Detected 10 CUDA devices.
id pci-bus name
---------------------------------------------------------------
0 0000:3D:00.0 Quadro RTX 6000
1 0000:3E:00.0 Quadro RTX 6000
2 0000:3F:00.0 Quadro RTX 6000
3 0000:40:00.0 Quadro RTX 6000
4 0000:41:00.0 Quadro RTX 6000
5 0000:60:00.0 Quadro RTX 6000
6 0000:61:00.0 Quadro RTX 6000
7 0000:62:00.0 Quadro RTX 6000
8 0000:63:00.0 Quadro RTX 6000
9 0000:64:00.0 Quadro RTX 6000
---------------------------------------------------------------
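For completeness, the same devices can be cross-checked against the NVIDIA driver (nvidia-smi -L prints one line per GPU with its index and name):
$ nvidia-smi -L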
Issue
There is no “Run on specific GPU” option under Job Builder -> Queue in cryoSPARC v3.0.1. I also do not see this option on another node with standalone cryoSPARC v2.15.0 installed (with 8 x Tesla K80 GPUs).
In addition, I keep seeing “Failed to connect link: HTTP Error 502: Bad Gateway” when I run ./cryosparcm cli “get_gpu_info()” && cryosparcm log command_core.
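To narrow down which half of that compound command actually produces the 502, I can also run the two pieces separately (the same commands as above, just not chained with &&):
$ ./cryosparcm cli "get_gpu_info()"
$ cryosparcm log command_core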
Any help would be greatly appreciated!