Cryosparcm cluster connect error

Hello.

I’m trying to install cryoSPARC on a GPU system: a single box with several GPUs, shared by a research group. It runs a SLURM scheduler.

I created my cluster_info.json and cluster_script.sh files, but upon trying to invoke them with:

cryosparcm cluster connect

I get an error:

/scratch/opt/cryoem/cryosparc/2.2/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
Traceback (most recent call last):
  File "<string>", line 5, in <module>
  File "/scratch/opt/cryoem/cryosparc/2.2/cryosparc2_master/deps/anaconda/lib/python2.7/json/__init__.py", line 291, in load
    **kw)
  File "/scratch/opt/cryoem/cryosparc/2.2/cryosparc2_master/deps/anaconda/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/scratch/opt/cryoem/cryosparc/2.2/cryosparc2_master/deps/anaconda/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/scratch/opt/cryoem/cryosparc/2.2/cryosparc2_master/deps/anaconda/lib/python2.7/json/decoder.py", line 380, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 6 column 5 (char 173)

Any help is appreciated!

I had a similar issue a couple of days ago. My issue was fixed by removing the trailing comma after the last item in my config. Not sure why the parser chokes on this, because that’s perfectly valid JSON and it is the way the example file is generated.

Edit: I guess the official spec doesn’t allow this, but the JSON parsers I’ve worked with are usually OK with it.
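
For what it’s worth, the strict parser in Python’s standard library (which, judging by the traceback above, is what cryosparcm uses to read cluster_info.json) does reject trailing commas. A minimal sketch, using made-up values just for illustration:

import json

# A trailing comma after the last item is rejected by Python's strict
# json parser, even though many lenient parsers tolerate it.
try:
    json.loads('{"lane": "archer", "title": "archer",}')
except ValueError as err:
    print("parse failed: %s" % err)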

Hi @yodamoppet ,

Did removing the trailing comma in the JSON file resolve the issue? I believe trailing commas are not part of the JSON specification, resulting in the parsing error.

- Suhail

Hi!

This was not exactly the same issue as brevans had, but it pointed me in the right direction. Thanks!

My file did not have a comma after the last config item, matching the generated example. There was, however, a missing comma earlier in the file (also present in the autogenerated example). Adding it gets me a bit further…
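
In case it helps anyone else, an easy way to find this kind of slip is to feed cluster_info.json through the same parser the traceback comes from; the line and column in the error point at the offending spot. A quick sketch, assuming the file is in the current directory:

import json

# Load cluster_info.json with the same strict parser cryosparcm uses;
# a missing comma between entries raises "Expecting , delimiter"
# along with the line/column of the problem.
with open('cluster_info.json') as fh:
    json.load(fh)
print('cluster_info.json parsed cleanly')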

Running "cryosparcm cluster connect", I get the following. Note the error in the first line; at the end, however, we get success. Is the error spurious, or is the success not real? How can I verify?

cryosparcm cluster connect
/scratch/opt/cryoem/cryosparc/2.2/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
{
"lane": "archer",
"qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
"name": "archer",
"title": "archer",
"hostname": "archer",
"qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
"worker_bin_path": "/opt/cryoem/cryosparc/2.2/cryosparc2_worker/bin/cryosparcw",
"qinfo_cmd_tpl": "sinfo",
"qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
"cache_path": "/opt/cryoem/cryosparc/2.2/sparktemp",
"cache_quota_mb": null,
"script_tpl": "#!/usr/bin/env bash\n#### cryoSPARC cluster submission script template for SLURM\n## Available variables:\n## {{ run_cmd }} - the complete command string to run the job\n## {{ num_cpu }} - the number of CPUs needed\n## {{ num_gpu }} - the number of GPUs needed. \n## Note: the code will use this many GPUs starting from dev id 0\n## the cluster scheduler or this script have the responsibility\n## of setting CUDA_VISIBLE_DEVICES so that the job code ends up\n## using the correct cluster-allocated GPUs.\n## {{ ram_gb }} - the amount of RAM needed in GB\n## {{ job_dir_abs }} - absolute path to the job directory\n## {{ project_dir_abs }} - absolute path to the project dir\n## {{ job_log_path_abs }} - absolute path to the log file for the job\n## {{ worker_bin_path }} - absolute path to the cryosparc worker command\n## {{ run_args }} - arguments to be passed to cryosparcw run\n## {{ project_uid }} - uid of the project\n## {{ job_uid }} - uid of the job\n## {{ job_creator }} - name of the user that created the job (may contain spaces)\n## {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)\n##\n## What follows is a simple SLURM script:\n\n#SBATCH --job-name cryosparc_{{ project_uid }}_{{ job_uid }}\n#SBATCH -n {{ num_cpu }}\n#SBATCH --gres=gpu:{{ num_gpu }}\n#SBATCH -p gpu\n#SBATCH --mem={{ (ram_gb*1000)|int }}MB \n#SBATCH -o {{ job_dir_abs }}\n#SBATCH -e {{ job_dir_abs }}\n\navailable_devs=\"\"\nfor devidx in $(seq 0 15);\ndo\n if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]] ; then\n if [[ -z \"$available_devs\" ]] ; then\n available_devs=$devidx\n else\n available_devs=$available_devs,$devidx\n fi\n fi\ndone\nexport CUDA_VISIBLE_DEVICES=$available_devs\n\n{{ run_cmd }}\n\n",
"cache_reserve_mb": 10000,
"type": "cluster",
"send_cmd_tpl": "{{ command }}",
"desc": null
}
Successfully added cluster lane archer