After upgrading to version 4.0.3, we have encountered some problems with launching jobs on our cluster.
We ran a setup with a number of masters (different users plus a master facility account, each with a unique Licence_ID) communicating with a worker on the cluster whose licence key matched the master facility account. This worked well: it gave us flexible access to the cluster via Slurm for fair usage and processing, while still allowing users to administer their own projects and accounts.
After the upgrade we can no longer launch jobs due to a licence ID mismatch. Is there a workaround for this, or a way to disable the check?
A member of our team shared these ideas:
Remove the line export CRYOSPARC_LICENSE_ID=... from /path/to/cryosparc_worker/config.sh if that line exists, then apply one of the alternatives below.
Alternative A (applies to instances where all lanes are cluster lanes), for each master instance:
In the cluster_script.sh template, insert just above the line that includes {{ run_cmd }} a line with: export CRYOSPARC_LICENSE_ID=...
matching the corresponding /path/to/cryosparc_master/config.sh value.
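For example, the relevant part of the cluster_script.sh template might end up looking like this (a sketch only: the surrounding #SBATCH lines are illustrative, and the licence value is a placeholder you must replace with the value from that master's config.sh):

```shell
#!/usr/bin/env bash
#SBATCH --job-name cryosparc_{{ project_uid }}_{{ job_uid }}

# Set the launching master's licence ID just above the run command,
# so the worker process inherits it and passes the licence check.
export CRYOSPARC_LICENSE_ID=...   # value from that master's cryosparc_master/config.sh

{{ run_cmd }}
```

Because each master instance has its own template, each cluster job exports the licence ID of the master that submitted it.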
Alternative B (also applies to non-cluster worker installations), for each master instance:
Create a script /path/to/custom/cryosparcw (accessible on workers):
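A minimal sketch of such a wrapper, assuming the real worker installation lives at /path/to/cryosparc_worker (both paths and the licence value are placeholders for your site):

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the real cryosparcw command.
# It sets the licence ID expected by the launching master, then
# forwards all arguments unchanged to the actual executable.

export CRYOSPARC_LICENSE_ID=...   # value from the matching cryosparc_master/config.sh

exec /path/to/cryosparc_worker/bin/cryosparcw "$@"
```

The script would need to be executable (chmod +x) and reachable from the workers, with the relevant lanes or worker connections pointed at the wrapper instead of the stock cryosparcw.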