Installing CryoSPARC on HPC

We have successfully installed CryoSPARC (master and worker nodes) on our HPC cluster, but we are now facing an issue submitting jobs via PBS (our job scheduler). We have followed all the steps in the guide: CryoSPARC Installation Prerequisites - CryoSPARC Guide.

We are facing some challenges:

1. On our HPC system, internet access is disabled by default. We have observed that when we submit a job, CryoSPARC verifies the license against your license server, so it requires connectivity to the CryoSPARC server.

Can we manage license verification locally?

  2. We could run jobs successfully using a couple of nodes as worker nodes. Now we need to integrate CryoSPARC with PBS (the job scheduler), so we are providing our cluster_script.sh. Its content is as follows:

+++++++++++++++++++++++++++++++++++++++++++++++++++++++

#!/bin/sh
### Set the job name (for your reference)
#PBS -N cryosparc_{{ project_uid }}_{{ job_uid }}
### Set the project name, your department code by default
#PBS -P {{ project_uid }}
### Request email when job begins and ends
#PBS -m bea
### Specify email address to use for notification.
#PBS -M user_id
####
#PBS -l select=1:ncpus={{ num_cpu }}:ngpus={{ num_gpu }}
### Specify "wallclock time" required for this job, hhh:mm:ss
#PBS -l walltime=01:00:00
#PBS -o {{ job_log_path_abs }}
#PBS -e {{ job_log_path_abs }}
#PBS -l software=replace_with_Your_software_name
# After the job starts, change to the working directory.
# $PBS_O_WORKDIR is the directory from which the job was submitted.
echo "==============================="
echo $PBS_JOBID
cat $PBS_NODEFILE
echo "==============================="
cd $PBS_O_WORKDIR
#job 

{{ run_cmd }}     # Can you elaborate on {{ run_cmd }}? Which script or function does it call?

+++++++++++++++++++++++++++++++++++++++++++++++++

  3. We are happy to share any other information about our environment with the CryoSPARC development team.

Our research work is completely stuck, so we request your help in integrating CryoSPARC with the PBS job scheduler.

We look forward to your kind support and a prompt response from the CryoSPARC team.

Thanks,
Varun Jha
HPC Team, IIT Delhi

Welcome to the forum @varun_jha.

The computer that serves as CryoSPARC master must be able to contact our license server. Licenses cannot be verified locally at your site.
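If opening general internet access is not possible, one option worth checking (this is only a sketch; the proxy host and port below are placeholders for whatever your site provides, and whether it is sufficient for your CryoSPARC version would need to be confirmed) is to route the master's outbound requests through an HTTP/HTTPS proxy via standard proxy environment variables in cryosparc_master/config.sh:

# Sketch: add to cryosparc_master/config.sh (proxy address is a placeholder)
export HTTP_PROXY="http://proxy.example.edu:3128"
export HTTPS_PROXY="http://proxy.example.edu:3128"
# Keep local traffic off the proxy (hostname is a placeholder)
export NO_PROXY="localhost,127.0.0.1,master_hostname"

# Restart so the new environment is picked up
cryosparcm restart

Either way, the master still needs some network path to the license server; a proxy only provides that path, it does not remove the requirement.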

In an actual cluster submission script, {{ run_cmd }} might (automatically) be interpreted as, for example,

/path/to/cryosparc_worker/bin/cryosparcw run --project P5 --job J20 --master_hostname node042.cluster --master_command_core_port 39002 > /path/to/cs_projects/CS-important-project/J20/job.log 2>&1
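For context: {{ run_cmd }}, like the other {{ ... }} variables, is filled in by the CryoSPARC master when it renders your registered cluster_script.sh template into a concrete submission script for each job; it is not a script you write yourself. The template is registered together with a cluster_info.json. A minimal sketch, assuming standard PBS queue commands and placeholder paths and lane name:

# Sketch only: cluster_info.json paired with your cluster_script.sh
cat > cluster_info.json <<'EOF'
{
    "name"            : "pbs_lane",
    "worker_bin_path" : "/path/to/cryosparc_worker/bin/cryosparcw",
    "cache_path"      : "/path/to/fast/local/scratch",
    "send_cmd_tpl"    : "{{ command }}",
    "qsub_cmd_tpl"    : "qsub {{ script_path_abs }}",
    "qstat_cmd_tpl"   : "qstat -as {{ cluster_job_id }}",
    "qdel_cmd_tpl"    : "qdel {{ cluster_job_id }}",
    "qinfo_cmd_tpl"   : "qstat -q"
}
EOF

# Register (or update) the lane from the directory containing both files
cryosparcm cluster connect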

Thanks for the reply!
I have already installed CryoSPARC (master-node configuration) on a set of nodes on our HPC cluster (CentOS 7.9).

I would like to bring to your notice that since I upgraded CryoSPARC from v4.1.1 to v4.1.2, whenever I try to start the web application using the command:

./cryosparcm start

we are facing an issue with startup: it gets stuck and does not progress beyond command_core, and I have to start all the other services manually and forcefully.

Looking forward to a prompt reply.

Thanks,
Varun
HPC Team, IIT Delhi

Please can you post more details:

  • how many CryoSPARC instances (CryoSPARC masters) are you planning to run
  • how did you install them? Please post the complete commands for installation and worker connect
  • error messages observed when you ran
    cryosparcm start
  • errors in the command_core log (please see guide about log inspection)
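If it helps, the relevant text can be gathered on the master node with commands along these lines (the log path is the usual location under cryosparc_master and may differ on your install; the tail length is arbitrary):

# Overall service status
cryosparcm status

# Follow the command_core log interactively (Ctrl-C to stop)
cryosparcm log command_core

# Or copy the last lines of the log file directly
tail -n 200 /path/to/cryosparc_master/run/command_core.log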

Thanks for the prompt reply!

I am attaching a screenshot of all the commands I am running to start the CryoSPARC web application:

Due to the restriction of posting one image at a time, I have merged the images and am posting them together; the commands were executed one by one as mentioned in the post.

On the master node:

./cryosparcm start

On the same master node:

./cryosparcm status

    • how many CryoSPARC instances (CryoSPARC masters) are you planning to run?

Initially, for testing CryoSPARC, we are planning to run one instance. Once the test is successful, we will open it to the whole department, so you can expect 3-4 instances running at the same time, with the web engine (master) on login nodes and the workers on all the GPU compute nodes in our HPC cluster.

  • how did you install them? Please post the complete commands for installation and worker connect

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For master node:

export LICENSE_ID=XXXXXXXXXX

./install.sh --license $LICENSE_ID \
    --hostname \
    --dbpath /my_database/path/database \
    --port 49002 \
    --insecure \
    --allowroot \
    --yes

For cryosparc worker nodes:

./install.sh --license $LICENSE_ID \
    --cudapath /home/soft/cuda-10.0.130 \
    --yes

./cryosparcw connect --worker <my_master_node> \
    --master <my_worker_node> \
    --port 49002 \
    --nossd \
    --lane fast_lanes

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
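For completeness, a quick way to confirm what the master has actually registered after these connect steps (a sketch; it assumes you run it on the master node with the cryosparcm wrapper on your PATH):

# List the worker nodes / cluster lanes the master knows about
cryosparcm cli "get_scheduler_targets()"

This should show the workers you connected under the fast_lanes lane; a PBS cluster lane registered later via cryosparcm cluster connect would appear here as well.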

  • errors in the command_core log

We require your immediate help in making CryoSPARC available to our researchers.

Thanks for helping; I am grateful to the entire CryoSPARC development team for making such an amazing tool for researchers.

Thanks,
Varun Jha
HPC Team, IIT Delhi

@varun_jha Please paste error messages and other textual information as text, not images.