We are trying to get CryoSPARC v3.2.0 working with HTCondor 8.9.x as the scheduler for submitting and running jobs.
We’ve set up cluster_info.json and can submit jobs via HTCondor, but CryoSPARC is not showing the CPUs, RAM, and GPUs available in the lane.
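For reference, our cluster_info.json looks roughly like the sketch below (the lane name and paths are placeholders, and the condor_* templates are our own guesses at the HTCondor equivalents of the SLURM examples in the guide):

```json
{
    "name": "htcondor",
    "worker_bin_path": "/path/to/cryosparc_worker/bin/cryosparcw",
    "cache_path": "/scratch/cryosparc_cache",
    "send_cmd_tpl": "{{ command }}",
    "qsub_cmd_tpl": "condor_submit {{ script_path_abs }}",
    "qstat_cmd_tpl": "condor_q {{ cluster_job_id }}",
    "qdel_cmd_tpl": "condor_rm {{ cluster_job_id }}",
    "qinfo_cmd_tpl": "condor_status"
}
```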
The qinfo_cmd_tpl field appears to be where we can define a command, such as condor_status, to pull that information and provide it as a string.
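For example, something along these lines (an untested sketch; TotalGpus is only defined when GPUs are advertised to the pool, e.g. via `use feature : GPUs` in the startd config):

```bash
# One line per slot: machine name, total CPUs, total memory (MB), GPU count (0 if undefined)
condor_status -af Machine TotalCpus TotalMemory 'TotalGpus ?: 0'
```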
The examples (https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/cryosparc-cluster-integration-script-examples) describe SLURM, UGE, and PBS commands for qinfo_cmd_tpl, and the SLURM example using sinfo includes these items:
"qinfo_cmd_tpl": "sinfo --format='%.8N %.6D %.10P %.6T %.14C %.5c %.6z %.7m %.7G %.9d %20E'",
- Ref: https://slurm.schedmd.com/sinfo.html
- node names (%N)
- number of nodes (%D)
- partition name (%P)
- state of nodes (%T), extended form
- number of CPUs by state, allocated/idle/other/total (%C)
- number of CPUs per node (%c)
- extended processor information (%z), number of sockets, cores, threads (S:C:T) per node
- size of memory per node in megabytes (%m)
- generic resources (%G)
- size of temporary disk space in megabytes (%d)
- reason a node is unavailable (%E)
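For illustration, we would expect that sinfo command to print a header row plus one line per node group, something like this (values invented):

```
NODELIST  NODES  PARTITION  STATE  CPUS(A/I/O/T)  CPUS  S:C:T   MEMORY  GRES   TMP_DISK  REASON
node[1-4]     4  gpu*       idle   0/256/0/256      64  2:16:2  257000  gpu:4    100000  none
```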
Can you also provide an example of the formatted output string that we could match with a wrapper script to supply each of these items? Is it a single-line string with each of the values, along the lines of the illustration above? A sketch of the wrapper we have in mind follows.
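For context, this is the kind of wrapper we are imagining pointing qinfo_cmd_tpl at (untested; the HTCondor attribute names are assumptions about how our pool advertises resources):

```bash
#!/bin/bash
# cryosparc_qinfo.sh - hypothetical qinfo_cmd_tpl wrapper for HTCondor.
# Emits one line per machine: name, total CPUs, total memory (MB), GPU count.
condor_status -af Machine TotalCpus TotalMemory 'TotalGpus ?: 0' \
  | sort -u \
  | awk '{ printf "%-24s cpus=%s mem_mb=%s gpus=%s\n", $1, $2, $3, $4 }'
```

We would then set "qinfo_cmd_tpl": "/path/to/cryosparc_qinfo.sh" in cluster_info.json.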
Thanks!