I am experiencing a problem since updating to version 4. I am running the master in the cloud, which submits jobs to the Slurm scheduler in our HPC cluster. Everything was running fine under v3.3.
Now, when creating a new project in the web interface, the project directory is created with the name CS-{{ project_name }} instead of {{ project_uid }} (as it was before). However, the worker instance cannot find the directory, since it is still looking for {{ project_uid }}. Is there a setting or a new cluster integration setup that I am missing to overcome this issue?
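To illustrate the failure mode, here is a sketch of the kind of submission template that runs into this, assuming cluster_script.sh reconstructs the working directory from {{ project_uid }} (our actual template differs in the details):

```
#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}

# Hypothetical fragment: a path assembled from {{ project_uid }} expects a
# directory named P<number>, which no longer matches the CS-<project name>
# directories that v4 creates on disk.
cd /scratch/tmp/$USER/Projects/{{ project_uid }}/{{ job_uid }}
{{ run_cmd }}
```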
Let's say I started J1 in project 30 with the project name 'Test'.
The master then creates the project directory 'CS-test' at the given path /scratch/tmp/$USER/Projects/.
However, when scheduling the job to the worker, I get a CryoSPARC error (from the worker instance) saying that it could not find the directory /scratch/tmp/$USER/Projects/P30/.
This suggests to me that 1. it is not looking in the correct directory, and 2. it is looking for the project_uid (in my case P30).
3. Both worker and master are on the same version, v4.0.3.
4. Yes, the directories are shared between the master and all worker instances under the same path.
5. What exactly do you mean by components? The installation or the hardware?
Also, I should note that I do not get these errors in projects that were created under version 3.3, before the update.
Best,
Max
Thank you for sending the job report. Could you please also post the output of cryosparcm cli "get_scheduler_targets()"?
If {{ project_uid }} is currently used, directly or indirectly, in the definition of paths in your cluster script, please modify the underlying configuration templates using alternatives like {{ job_dir_abs }} or {{ project_dir_abs }}, then update your cluster connection.
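For example, a path line in cluster_script.sh could be changed along these lines (a sketch; the exact line depends on your template):

```
# Before: reconstructs the job path from the project UID, which breaks for
# projects whose directory on disk is named CS-<project name>:
cd /scratch/tmp/$USER/Projects/{{ project_uid }}/{{ job_uid }}

# After: let CryoSPARC substitute the absolute job directory directly:
cd {{ job_dir_abs }}
```

Then re-register the lane from the directory containing your cluster_info.json and cluster_script.sh:

```
cryosparcm cluster connect
```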
I have sent you the requested report. Thank you!
Thank you for the suggestion! This works for the new projects we created after the update. However, it does not work for the old projects we created prior to the update.
Is there any way to force the CryoSPARC master to name project folders after {{ project_uid }}, as it did in the previous version, to harmonize our installation?
The only solution I see is to create separate cluster configurations for new CryoSPARC projects (with {{ project_dir_abs }}) and old ones (with {{ project_uid }}).
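Roughly like this, i.e. the same cluster registered twice under different lane names, with scripts that differ only in how they locate the job directory (lane names are made up):

```
# Lane "slurm-v4" - cluster_script.sh for projects created under v4:
cd {{ project_dir_abs }}/{{ job_uid }}

# Lane "slurm-legacy" - cluster_script.sh for pre-v4 projects:
cd /scratch/tmp/$USER/Projects/{{ project_uid }}/{{ job_uid }}
```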
Thank you very much for your help!
Hi wtempel, I may have misunderstood something, but now it is working again! Thanks a lot!
The problem was fixed by using {{ job_dir_abs }}; {{ project_dir_abs }} did not fix it.
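In case it helps others, the working change in our cluster_script.sh was along these lines (simplified):

```
# {{ job_dir_abs }} expands to the job directory itself, e.g.
# /scratch/tmp/<user>/Projects/CS-test/J1, so no path has to be
# assembled from UIDs:
cd {{ job_dir_abs }}
{{ run_cmd }}
```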