How to configure/modify the GPUs used by each worker node - cryoSPARC v2+

Is there a way to set a list of GPUs to use in v2?

We share resources on the servers and can only use certain GPUs (0 and 1 are reserved for visualization, and 2 and 3 for crunching).

In v1 it used to be:

> cryosparc configure gpu list

Detected 4 CUDA devices.

enabled    id           pci-bus  name
---------------------------------------------------------------
            0      0000:04:00.0  Tesla K80
            1      0000:05:00.0  Tesla K80
      x     2      0000:84:00.0  Tesla K80
      x     3      0000:85:00.0  Tesla K80

Hi Martin,

The instructions to specify which GPUs to run cryoSPARC jobs on are located in the install docs.

You can reconnect using the --update flag and specify a comma-delimited list of GPUs corresponding to the IDs reported by bin/cryosparcw gpulist:

bin/cryosparcw connect \
  --worker <worker_hostname> \
  --master <master_hostname> \
  --port <port_num> \
  --update \
  --gpus 0,1,2,3
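
For your case, where only devices 2 and 3 are free for compute, that would look something like the following (the hostnames and port are placeholders to replace with your own values):

bin/cryosparcw connect \
  --worker <worker_hostname> \
  --master <master_hostname> \
  --port <port_num> \
  --update \
  --gpus 2,3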

Hope this helps!

- Suhail


Hi Suhail,

Thanks a lot for the help!

To save some time searching, here's the relevant part of the install docs for updating the worker configuration:

For advanced configuration:
cd <install_path>/cryosparc2_worker
bin/cryosparcw gpulist
This will list the available GPUs on the worker node, and their corresponding numbers. Use this list to decide which GPUs you wish to enable using the --gpus flag below, or leave this flag out to enable all GPUs.
Use advanced options with the connect command, or use the --update flag to update an existing configuration:
bin/cryosparcw connect
  --worker <worker_hostname>
  --master <master_hostname>
  --port <port_num>
  [--update]                       : update an existing worker configuration
  [--sshstr <custom_ssh_string>]   : custom ssh connection string
                                     like user@hostname
  [--nogpu]                        : connect worker with no GPUs
  [--gpus 0,1,2,3]                 : enable specific GPU devices only
  [--nossd]                        : connect worker with no SSD
  [--ssdpath <ssd_path> ]          : path to directory on local ssd
  [--ssdquota <ssd_quota_mb> ]     : quota of how much SSD space to use (MB)
  [--ssdreserve <ssd_reserve_mb> ] : minimum free space to leave on SSD (MB)
  [--lane <lane_name>]             : name of lane to add worker to
  [--newlane]                      : force creation of a new lane if
                                     specified lane does not exist
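
For illustration, here is how several of these options might be combined in one command for our setup; the cache path, quota, and lane name below are just placeholders, not recommendations:

bin/cryosparcw connect \
  --worker <worker_hostname> \
  --master <master_hostname> \
  --port <port_num> \
  --update \
  --gpus 2,3 \
  --ssdpath /scratch/cryosparc_cache \
  --ssdquota 500000 \
  --lane gpu23 \
  --newlane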

Hi,

If you have a standalone install, are --worker and --master the same? I wish to limit the GPUs accessible to cryoSPARC v2, and if I use bin/cryosparcw it requires both a master and a worker. Is this the hostname? Is there a different way I should be doing this with a standalone installation? I appreciate anyone's input.

Regards
Christine

Hi, I tried with the same hostname for both master and worker to restrict the GPUs to our Titans and not the Quadro that is in the machine, and it seemed to work, but I still see all five GPUs in gpulist. Is there a command to see which ones it will use?

Hi @geec,

The master and worker are two components of cryoSPARC, both of which are installed on a single machine if you use the --standalone flag. You will notice in the installation instructions for a single workstation that you need to download both the master and the worker and reference the worker path in the install with the --worker_path flag.

Once you have installed cryoSPARC, you can simply follow the posts above to identify available GPUs on the machine (cryosparcw gpulist) and update the list of GPUs for cryoSPARC to use.
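
For example, on a single workstation you run both steps on the same machine and pass the same hostname to --worker and --master; the GPU IDs below are illustrative only, so use the ones gpulist reports for your Titans:

cd <install_path>/cryosparc2_worker
bin/cryosparcw gpulist
bin/cryosparcw connect \
  --worker <hostname> \
  --master <hostname> \
  --port <port_num> \
  --update \
  --gpus 1,2,3,4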

- Suhail

Hi Suhail,

Thanks for your reply. I believe I had already updated the list as per the instructions you gave before I asked the question. I was wondering whether there is a way to output which GPUs it is actually using after I have configured it, since gpulist still lists them all.

Regards
Christine

Hi Christine,

Sorry, I misunderstood your question. To get a list of GPUs that have been configured for cryoSPARC to use, run the following command:

cryosparcm icli

Then, in the Python shell, enter:

{t['hostname']:t['resource_slots']['GPU'] for t in cli.get_scheduler_targets()}

You should get an output similar to this:

{
  u'<node1>': [0, 1, 2],
  u'<node2>': [0, 1, 2, 3],
  ...
}

These correspond to the GPU IDs output by the gpulist command.
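
If you prefer a one-off check instead of the interactive shell, the same information should also be retrievable with a direct cli call (the output is the raw list of scheduler target records, so it is more verbose):

cryosparcm cli "get_scheduler_targets()"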

Regards,
Suhail
