Gctf GPU resource

solved

#1

Dear CryoSparc developers,

It seems that the current Gctf module always uses the --gid 0 option when it runs. I've tried changing it via the additional command-line arguments to assign the job to other GPUs, but it does not work.

Could you advise on how to fix this issue? Running a Gctf job on an already-occupied GPU appears to cause some strange outputs.

Thanks,
Wei


#2

Hi @wxh180,

The cryoSPARC scheduler automatically assigns GPU resources to a job based on its own internal view of availability; it cannot see external applications. For example, if an external application is using a GPU, the scheduler will not be able to avoid scheduling another job on that same GPU. One way to work around this is to restrict which GPUs cryoSPARC is allowed to schedule jobs on. To do this, log on to the worker node you want to modify (if you are using a standalone instance, it's the same machine as your master node) and run the command:

bin/cryosparcw connect --worker <worker_hostname> --master <master_hostname> --port <port_number> --update --gpus <comma-separated list of GPU IDs to use>

See https://cryosparc.com/docs/reference/install/ for more information:

bin/cryosparcw connect
  --worker <worker_hostname>
  --master <master_hostname>
  --port <port_num>
  [--update]                       : update an existing worker configuration
  [--sshstr <custom_ssh_string>]   : custom ssh connection string like user@hostname
  [--nogpu]                        : connect worker with no GPUs
  [--gpus 0,1,2,3]                 : enable specific GPU devices only
  [--nossd]                        : connect worker with no SSD
  [--ssdpath <ssd_path> ]          : path to directory on local ssd
  [--ssdquota <ssd_quota_mb> ]     : quota of how much SSD space to use (MB)
  [--ssdreserve <ssd_reserve_mb> ] : minimum free space to leave on SSD (MB)
  [--lane <lane_name>]             : name of lane to add worker to
  [--newlane]                      : force creation of a new lane if specified lane does not exist
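As a concrete sketch, an update restricting a worker to GPUs 0 and 1 might look like the following. The hostnames and port below are placeholders (39000 is the common default base port); substitute the values for your own instance:

```shell
# Run from the cryosparc_worker installation directory on the worker node.
# Hostnames, port, and GPU IDs below are placeholders -- use your own values.
bin/cryosparcw connect \
  --worker worker1.example.com \
  --master master.example.com \
  --port 39000 \
  --update \
  --gpus 0,1
```

After the update, jobs scheduled to this worker will only be assigned to the listed GPU devices, leaving the others free for external applications.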

Hopefully this workaround is suitable until the next release.