We would like each researcher to have their own master instance while still sharing the cluster, because researchers cannot share the directory for their research data on our HPC system.
Currently, we have one cluster and one CryoSPARC installation, with a Non-Commercial (academic) license.
1. Is it possible to have multiple master instances sharing a single cluster?
2. Is it permissible to use a single license across multiple master instances, or do we need a separate license for each instance?
3. If the license on the cluster is different from the licenses on the instances, would this cause any issues?
We had this exact setup for a while. To be honest, I would not recommend it; it turned into a management nightmare. That said, if it's your only option, it is doable.
The answer here is yes, but you will need to increment the port each time CryoSPARC is installed. I wound up writing an installation script to do this automatically, keeping track of which port numbers were in use, and by whom, in a text file in a shared directory. The script checked that file before installing CryoSPARC, then fed the chosen values to the installer and recorded the new entry.
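For anyone setting up something similar, here is a minimal sketch of that port-allocation idea. The registry path, the tab-separated "port owner" format, and the 10-port spacing (CryoSPARC reserves a block of ports above the base port) are my assumptions for illustration, not the exact script we used:

```shell
#!/usr/bin/env bash
# Shared registry of allocated CryoSPARC base ports, one "PORT<TAB>OWNER" per line.
# The path is an assumption; put it somewhere every admin can read and write.
REGISTRY="${REGISTRY:-/shared/cryosparc/ports.txt}"

# Print the next free base port: default 39000, else highest allocated + 10.
next_free_port() {
    local base=39000
    if [ -s "$REGISTRY" ]; then
        local max
        max=$(awk '{print $1}' "$REGISTRY" | sort -n | tail -1)
        base=$((max + 10))
    fi
    echo "$base"
}

# Record an allocation: register_port PORT OWNER
register_port() {
    printf '%s\t%s\n' "$1" "$2" >> "$REGISTRY"
}

# Example use when installing a new master instance (install flags hedged;
# check your CryoSPARC version's install.sh for the exact option names):
# PORT=$(next_free_port)
# register_port "$PORT" alice_lab
# ./cryosparc_master/install.sh --license "$LICENSE_ID" --port "$PORT" ...
```

Nothing fancy, but keeping the registry in one shared file means any admin can see at a glance which ports belong to which group.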
Someone from CryoSPARC will have to answer this, but we had every group get their own license.
Each group will have to add their own cluster lanes (you can fold this into your install script), and you can set the license there.
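One way to set a per-group license on a lane, if I understand the v4 setup correctly, is to export that group's `CRYOSPARC_LICENSE_ID` in the lane's `cluster_script.sh` so the shared worker install picks it up at job time. Treat this as an assumption to verify with the CryoSPARC developers, not official guidance. A sketch of such a SLURM template (the partition name and resource lines are placeholders; the `{{ ... }}` variables are standard CryoSPARC cluster template variables):

```shell
#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --partition=gpu
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --mem={{ ram_gb }}G
#SBATCH --output={{ job_log_path_abs }}

# Override the license the shared worker installation uses for jobs
# submitted through this group's lane. Each group's cluster_script.sh
# would carry its own license ID here (assumption, not official guidance).
export CRYOSPARC_LICENSE_ID=<this-group's-license-id>

{{ run_cmd }}
```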
We wound up hosting a single CryoSPARC install instead, in a VM, that could submit jobs to SLURM, and created a shared account (our university uses Microsoft Active Directory) that only the EM facility manager and IT have access to. Then, we gave that shared user access to the SMB shares and cache directories of each group. We also added lanes for each group, and used CryoSPARC’s lane management to restrict user access so they can only use their own lanes.
There are still some downsides: technically, users can browse other users' data through the CryoSPARC file selector, but for all our groups the convenience was worth that "risk".
Thank you for your reply!
I have a follow-up question regarding the license. I understand that the worker instance requires a license key during installation. With the update to v4, how can multiple master instances with unique license keys access one shared worker installation on an HPC cluster using Slurm? Is this achieved by setting the license key in the sbatch script?