Missing "Run on specific GPU" tab in v3.0.1

Environment

  • cryoSPARC version (cryosparcm status)

Current cryoSPARC version: v3.0.1

It says “cryosparcm is not running” but it IS running. Is that normal?

  • CUDA version (cat cryosparc2_worker/config.sh and check the path set for CRYOSPARC_CUDA_PATH)

export CRYOSPARC_CUDA_PATH="/usr/local/cuda-10.2"

  • OS (Ubuntu, CentOS, etc)

Red Hat Enterprise Linux Server release 7.9 (Maipo)

  • Any other relevant configurations (cluster, headless, etc)

cryoSPARC is installed in standalone mode.

$ ./cryosparcm icli
Python 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:25:08)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.

 connecting to ai-hpcgpuvrc2.niaid.nih.gov:61002 ...
 cli, rtp, db, gfs and tools ready to use

In [1]: {t['hostname']:t['resource_slots']['GPU'] for t in cli.get_scheduler_targets()}
Out[1]: {'ai-hpcgpuvrc2.niaid.nih.gov': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
$ ../../cryosparc_worker/bin/cryosparcw gpulist
  Detected 10 CUDA devices.

   id           pci-bus  name
   ---------------------------------------------------------------
       0      0000:3D:00.0  Quadro RTX 6000
       1      0000:3E:00.0  Quadro RTX 6000
       2      0000:3F:00.0  Quadro RTX 6000
       3      0000:40:00.0  Quadro RTX 6000
       4      0000:41:00.0  Quadro RTX 6000
       5      0000:60:00.0  Quadro RTX 6000
       6      0000:61:00.0  Quadro RTX 6000
       7      0000:62:00.0  Quadro RTX 6000
       8      0000:63:00.0  Quadro RTX 6000
       9      0000:64:00.0  Quadro RTX 6000
   ---------------------------------------------------------------

Issue

There is no “Run on specific GPU” tab option in cryoSPARC v3.0.1 under Job Builder -> Queue. I also do not see this option on another node with standalone cryoSPARC v2.15.0 installed (with 8 x Tesla K80 GPUs).

In addition, I keep seeing “Failed to connect link: HTTP Error 502: Bad Gateway” when running the command ./cryosparcm cli "get_gpu_info()" && cryosparcm log command_core.

Any help would be greatly appreciated!

@ponomarevsy what kind of job are you trying to run? Not every job requires GPUs, and jobs that don't will not include the option to queue to a specific GPU. Is the specific-GPU option also missing when queuing a Patch Motion or Ab-Initio job?

Something else to try is to add the following line to cryosparc_worker/config.sh (looks like it’s missing from yours)

export CRYOSPARC_USE_GPU=true
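
If you prefer to do it from the shell, a minimal sketch (assuming the worker config lives at cryosparc_worker/config.sh, as above) is:

    # append the flag to the worker config and confirm it took
    echo 'export CRYOSPARC_USE_GPU=true' >> cryosparc_worker/config.sh
    grep CRYOSPARC_USE_GPU cryosparc_worker/config.sh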

Then reconnect your worker with the cryosparcw connect command (use the --update flag)
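
For a single-node standalone install that usually looks something like the sketch below; the hostname is taken from your icli output above, <base_port> is a placeholder for your cryoSPARC base port, and it's worth confirming the exact flags with cryosparcw connect --help on your version:

    cd cryosparc_worker
    ./bin/cryosparcw connect --worker ai-hpcgpuvrc2.niaid.nih.gov \
                             --master ai-hpcgpuvrc2.niaid.nih.gov \
                             --port <base_port> \
                             --update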

The “cryosparcm is not running” message is concerning - this can happen if cryoSPARC’s installation location changed on disk or if cryoSPARC was started from a symbolically-linked directory. To fix this, try the following (a combined command sketch follows the steps):

  1. In a command line, look for running cryoSPARC processes by entering this command:
    ps ax | grep cryosparc
    
  2. If you see any lines that contain supervisord, mongod, python or meteor, note the process ID (the number at the beginning of each line)
  3. Kill each process with the following command, substituting <PID> with the ID from the previous step (this will stop cryoSPARC from running)
    kill <PID>
    
  4. Re-run the command from step 1 to confirm that no cryoSPARC-related processes are running
  5. Restart cryoSPARC with cryosparcm start
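
Put together, that sequence looks roughly like this (run it as the Linux user that owns the cryoSPARC install; <PID> stands for each process ID you noted in step 2):

    # 1. list running cryoSPARC-related processes
    ps ax | grep cryosparc
    # 2-3. for each supervisord / mongod / python / meteor line, take the PID
    #      from the first column and stop it
    kill <PID>
    # 4. check again that nothing cryoSPARC-related is left
    ps ax | grep cryosparc
    # 5. restart cryoSPARC
    cryosparcm start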

Let me know how that goes