A group of researchers would like to use cryoSPARC on our university's HPC cluster.
I've successfully installed cryoSPARC on our CentOS 7-based HPC cluster with the SGE/GridEngine scheduler.
Following the documentation, I created a dedicated user and group "cryosparc" for it.
I've added the researchers' user accounts to the cryosparc group; this way they can chgrp/chmod their files so cryoSPARC can read them. This works fine.
However, the output files/folders that cryoSPARC produces are only group-readable, not group-writable (umask 022, i.e. mode 644 for files and 755 for directories).
So the researchers can't modify or delete any of the output files cryoSPARC produces.
The default umask on our system is 002, so it seems cryoSPARC intentionally changes it to 022.
Is it possible to change the default umask in cryoSPARC, or to stop cryoSPARC from changing it?
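For reference, here is a minimal shell demonstration of what the umask difference means for newly created files (run anywhere; the filenames are just placeholders):

```shell
# With umask 022 (what cryoSPARC apparently sets), new files
# come out as 644: group-readable but not group-writable.
umask 022
workdir=$(mktemp -d)
touch "$workdir/job_output.mrc"
stat -c '%a' "$workdir/job_output.mrc"   # prints 644

# With umask 002 (our system default), the group write bit survives.
umask 002
touch "$workdir/job_output2.mrc"
stat -c '%a' "$workdir/job_output2.mrc"  # prints 664

rm -rf "$workdir"
```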
Another (ugly) option would be to give the research group sudo access to the cryosparc user.
Other suggestions for properly implementing this kind of setup are welcome too.
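For completeness, the sudo variant I have in mind would look roughly like this (a hypothetical /etc/sudoers.d/cryosparc entry, not something we've deployed):

```
# Allow members of the cryosparc group to run commands as the
# cryosparc user without a password (hypothetical, untested):
%cryosparc ALL=(cryosparc) NOPASSWD: ALL
```

I'd rather avoid this, since it gives every group member full control over the cryoSPARC installation.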
And because I couldn’t find a proper SGE example, here are my working cluster_info.json and cluster_script.sh:
(setup is not in production yet so not 100% tested)
cluster_info.json:
{
  "name" : "sgecluster",
  "worker_bin_path" : "/software/cryosparc/cryosparc/cryosparc2_worker/bin/cryosparcw",
  "cache_path" : "/usertmp/cryosparc",
  "send_cmd_tpl" : "{{ command }}",
  "qsub_cmd_tpl" : "qsub -terse {{ script_path_abs }}",
  "qstat_cmd_tpl" : "qstat -j {{ cluster_job_id }}",
  "qdel_cmd_tpl" : "qdel {{ cluster_job_id }}",
  "qinfo_cmd_tpl" : "qstat"
}
cluster_script.sh:
#!/usr/bin/env bash
#$ -N cryosparc_{{ project_uid }}_{{ job_uid }}
#$ -q gpu
#$ -pe smp {{ num_cpu }}
#$ -l ngpus={{ num_gpu }}
#$ -l h_vmem={{ ram_gb }}G
#$ -V
#$ -cwd
#$ -o {{ job_dir_abs }}
#$ -e {{ job_dir_abs }}
{{ run_cmd }}