cryosparc-tools HPC submission with custom variables

Dear cryoSPARC team,

Our lab is exploring cryosparc-tools and we are enjoying it very much!
Here is some background on our CS setup: we are running CS (v4.4.1) on an HPC cluster, and jobs are submitted via SLURM.
I am trying to automate some of my workflows and ran into the following problem.
I am able to create jobs and submit them to the cluster with the default parameters without any problem. We have two custom variables that can be set at submission time (at least in the GUI): cbe_time, which reserves compute time (default 8 hours), and cbe_ram_multiplier, which overrides the RAM request calculated by the CS worker. We introduced the latter because many of our calculations run out of memory with the standard request.
For my project I need to override both of these parameters when submitting via Python to the “CBE” lane, but I cannot find a way to do it.
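
For reference, here is roughly what I am doing right now (connection details, project/workspace UIDs and the job type below are placeholders). This queues fine, but only with the lane defaults for the two variables:

```python
from cryosparc.tools import CryoSPARC

# All connection details and UIDs below are placeholders.
cs = CryoSPARC(
    license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    host="localhost",
    base_port=39000,
    email="user@example.com",
    password="password",
)

project = cs.find_project("P1")                       # placeholder project UID
job = project.create_job("W1", "nonuniform_refine")   # placeholder workspace UID and job type

# Queues fine, but only with the lane defaults
# (cbe_time = 8 h, cbe_ram_multiplier = 1.0):
job.queue(lane="CBE")
```
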
Available lanes:
cryosparc.get_lanes()
[{'desc': '', 'name': 'Workstations', 'title': 'Workstations', 'type': 'node'}, {'desc': '', 'name': 'CBE', 'title': 'Lane CBE (cluster)', 'type': 'cluster'}]
Targets of the CBE lane:
cs.get_targets('CBE') [{'cache_path': '/scratch-cbe/users/svc_cryosparc/lane_cache_CBE', 'cache_quota_mb': 15000000, 'cache_reserve_mb': 10000, 'custom_var_names': ['cbe_time', 'cbe_ram_multiplier'], 'custom_vars': {}, 'desc': None, 'hostname': 'CBE', 'lane': 'CBE', 'name': 'CBE', 'qdel_cmd_tpl': 'scancel {{ cluster_job_id }}', 'qinfo_cmd_tpl': 'sinfo', 'qstat_cmd_tpl': 'squeue -j {{ cluster_job_id }}', 'qstat_code_cmd_tpl': None, 'qsub_cmd_tpl': 'sbatch --wckey=cryosparc {{ script_path_abs }}', 'script_tpl': '#!/usr/bin/env bash\n##\n## provide this information together with a link to this website in Helpdesk tickets at: http://support.vbc.ac.at/"\n## ===========================================\n## Cryosparc project: {{ project_uid }} {{ job_uid }}\n## Cryosparc user: {{ cryosparc_username }}\n## Cryosparc job dir: {{ job_dir_abs }}\n## Cryosparc project dir: {{ project_dir_abs }}\n## Cryosparc job log: {{ job_log_path_abs }}\n## ===========================================\n##\n#### cryoSPARC cluster submission script template for SLURM\n## variables via: https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc#connect-a-managed-worker-to-cryosparc\n## Available variables:\n## run_cmd: {{ run_cmd }} - the complete command string to run the job\n## num_cpu: {{ num_cpu }} - the number of CPUs needed\n## num_gpu: {{ num_gpu }} - the number of GPUs needed.\n## Note: the code will use this many GPUs starting from dev id 0\n## the cluster scheduler or this script have the responsibility\n## of setting CUDA_VISIBLE_DEVICES so that the job code ends up\n## using the correct cluster-allocated GPUs.\n## ram_gb: {{ ram_gb }} - the amount of RAM needed in GB\n## job_dir_abs: {{ job_dir_abs }} - absolute path to the job directory\n## project_dir_abs: {{ project_dir_abs }} - absolute path to the project dir\n## job_log_path_abs: {{ job_log_path_abs }} - absolute path to the log file for the job\n## worker_bin_path: {{ worker_bin_path }} - absolute path to the cryosparc worker command\n## run_args: {{ run_args }} - arguments to be passed to cryosparcw run\n## project_uid: {{ project_uid }} - uid of the project\n## job_uid: {{ job_uid }} - uid of the job\n## job_creator: {{ job_creator }} - name of the user that created the job (may contain spaces)\n## cryosparc_username: {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)\n## job_type: {{ job_type }} - cryosparc job type\n## defined in /cryosparc_master/cryosparc_compute/jobs/jobregister.py\n##\n## ====== parameters ======\n## \'cbe_time\': \n## - \'auto\': select based on job_type(TBD), currently: 8h\n## - \'short\', \'medium\', \'long\' - to select max run duration for that QOS (8h, 48h, 168h)\n## - max runtime in MINUTES\n## \'cbe_ram_multiplier\': \n## - default = 1.0\n##\n## inputs:\n## cbe_time={{ cbe_time }}\n## cbe_ram_multiplier={{ cbe_ram_multiplier }}\n{# define qos_limit dict #}\n{%- set job = namespace(found=false) %}\n{%- set job.qos = None %}\n{%- set job.time = cbe_time %}\n{%- set qos_limit = ({"short": 480, "medium": 2880, "long": 20160}) %}\n## configured qos_limit: {{ qos_limit }}\n{% if cbe_time == \'auto\' %}\n## cbe_time auto select -> short, or based on job type: {{ job_type }}\n{% set job.qos = \'short\' %}\n{% set job.time = qos_limit[job.qos] %}\n## auto time select: qos={{ job.qos }} for job_type: {{ job_type }}\n{% elif qos_limit[cbe_time] %}\n## cbe_time is a preset: {{ cbe_time }} -> ({{ 
qos_limit.keys() }})\n{%- set job.qos = cbe_time %}\n{%- set job.time = qos_limit[job.qos] %}\n{% else %}\n## cbe_time is not a preset value, expecting number in minutes\n{%- for k,v in qos_limit | dictsort(false, \'value\') %}\n{#- ## {{ k }}: {{ v }} #}\n{%- if cbe_time | int < v and job.qos not in qos_limit.keys() %}\n{%- set job.qos = k %}\n## selecting qos: {{ job.qos }}\n{%- endif %}\n{%- endfor %}\n{%- endif %}\n## cbe_time: {{ job.time }}\n## qos: {{ job.qos }}\n##\n## What follows is a simple SLURM script:\n#SBATCH --job-name cs_{{ cryosparc_username.split("@")[0] }}_{{ project_uid }}_{{ job_uid }}_{{ job_type }}\n#SBATCH --ntasks=1\n#SBATCH --cpus-per-task={% if num_cpu > 38 %}38{% else %}{{ num_cpu }}{% endif%}\n#SBATCH --gres=gpu:{{ num_gpu }}\n#SBATCH --partition {% if num_gpu > 0 %}g{% else %}c{% endif %}\n#SBATCH --qos {{ job.qos }}\n#SBATCH --time {{ job.time }}\n#SBATCH --mem={{ (ram_gb * 1000 * cbe_ram_multiplier | float) | round | int }}MB\n#SBATCH -o {{ job_dir_abs }}/{{ project_uid }}_{{ job_uid }}.out\n#SBATCH -e {{ job_dir_abs }}/{{ project_uid }}_{{ job_uid }}.err\n#SBATCH -A {{ \'hpc\' if (cryosparc_username == \'cryosparc_username\') else job_creator.split("@")[1] }}\n#SBATCH --comment \'cbe_time={{ cbe_time }};cbe_ram_multiplier={{ cbe_ram_multiplier }};qos={{ job.qos }};job_type={{ job_type }};\'\n\n\necho "starting Cryosparc work ..."\n\n{{ run_cmd }}\n\n#\n# provide the output below for servicedesk requests: http://support.vbc.ac.at\n#\ncat << EOF | tee "{{ job_dir_abs }}/jobinfo.txt"\nCryosparc job resource consumption summary:\n\ncryosparc project/job: {{ project_uid }} {{ job_uid }}\nslurm jobid: $SLURM_JOBID\ncryosparc user: {{ cryosparc_username }}\njob_dir: {{ job_dir_abs }}\nproject_dir: {{ project_dir_abs }}\njob_log: {{ job_log_path_abs }}\n\njob_type: {{ job_type }}\ncbe_time: {{ cbe_time }}\ncbe_ram_multiplier: {{ cbe_ram_multiplier }}\ncryosparc ram_gb: {{ ram_gb }}\ntime_limit: {{ job.time }}\nqos: {{ job.qos }}\n\n# =========================================== \n\n$(jobinfo $SLURM_JOBID)\n\n# =========================================== \n\n$(scontrol show job $SLURM_JOBID)\n\n# =========================================== \n\nEOF\n\n\n', 'send_cmd_tpl': 'ssh cbe.vbc.ac.at {{ command }} ', 'title': 'CBE', 'tpl_vars': ['cbe_time', 'num_cpu', 'run_cmd', 'run_args', 'num_gpu', 'ram_gb', 'job_creator', 'project_uid', 'job_dir_abs', 'cluster_job_id', 'project_dir_abs', 'worker_bin_path', 'job_type', 'cryosparc_username', 'job_uid', 'job_log_path_abs', 'cbe_ram_multiplier', 'command'], 'type': 'cluster', 'worker_bin_path': '~/cryosparc_worker/bin/cryosparcw'}]
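
If I read the template correctly, the two variables behave as in this plain-Python paraphrase of the Jinja logic (illustration only, not code we actually run; the values in the example are made up):

```python
# Plain-Python paraphrase of the submission template's qos/time/memory selection.
QOS_LIMIT = {"short": 480, "medium": 2880, "long": 20160}  # max runtime in minutes

def resolve_cbe(cbe_time, cbe_ram_multiplier, ram_gb):
    if cbe_time == "auto":
        qos = "short"                    # the template currently always maps 'auto' to 'short'
        minutes = QOS_LIMIT[qos]
    elif cbe_time in QOS_LIMIT:
        qos = cbe_time                   # preset name: use that QOS and its maximum runtime
        minutes = QOS_LIMIT[qos]
    else:
        minutes = int(cbe_time)          # anything else is interpreted as minutes
        qos = next((k for k, v in sorted(QOS_LIMIT.items(), key=lambda kv: kv[1])
                    if minutes < v), None)  # smallest QOS whose limit fits the request
    mem_mb = round(ram_gb * 1000 * float(cbe_ram_multiplier))  # value passed to --mem=...MB
    return qos, minutes, mem_mb

# e.g. a job cryoSPARC sizes at 24 GB, requested with 240 min and a 2x RAM multiplier:
print(resolve_cbe(240, 2.0, 24))  # ('short', 240, 48000)
```
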

Any help would be greatly appreciated!

Best,
Lorenz

@DerLorenz You can modify job-specific variable values with the lower-level cli interface, as described in Set cluster submission script variables in cryosparc-tools - #2 by nfrasser.
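
For anyone finding this later, the two routes look roughly like the sketch below. The `cluster_vars` keyword on `job.queue()` and the `set_cluster_job_custom_vars` name are assumptions here: the keyword is only present in sufficiently recent cryosparc-tools releases (check `help(job.queue)` on your installation), and the exact cli method name and signature should be copied from the linked post. The variable values are examples only.

```python
# Sketch only, reusing cs / project / job from the earlier example.
custom_vars = {"cbe_time": 240, "cbe_ram_multiplier": 2.0}  # example values

# Route 1: pass cluster variables directly at queue time, if your
# cryosparc-tools version exposes the cluster_vars keyword.
job.queue(lane="CBE", cluster_vars=custom_vars)

# Route 2: the lower-level cli call described in the linked post; the method
# name below is indicative only, so verify it against that post before use.
# cs.cli.set_cluster_job_custom_vars(project.uid, job.uid, custom_vars)
# job.queue(lane="CBE")
```
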
