Please can you post additional information:
- the output of the command
cryosparcm cli "get_scheduler_targets()"
and indicate the names of the lanes that function and malfunction, respectively.
- Are the project directories available (under matching paths) on the new GPU nodes?
- any messages related to the jobs’ submission failures, from the UI or from logs, such as:
- jobs’ Event Logs
- command_core log
cryosparcm log command_core
- slurm logs
Possibly unrelated to your immediate concern of jobs not being submitted:
Are you sure these modules are needed for (and do not interfere with) CryoSPARC jobs? CryoSPARC v4.4 includes its own CUDA dependencies.
The code block you posted may fail to properly assign or restrict GPU resources to jobs. For a more robust alternative, consider the approach described in CUDA_ERROR_NO_DEVICE - but only when AF2 is running! - #9 by wtempel.
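For illustration only, a minimal sketch of the idea in a SLURM cluster_script.sh fragment: delegate GPU selection to SLURM via a gres request instead of exporting CUDA_VISIBLE_DEVICES manually. The partition name is a placeholder for your site; {{ num_gpu }} and {{ run_cmd }} are standard CryoSPARC cluster-script template variables. Whether this isolates devices correctly depends on your SLURM gres and cgroup configuration (e.g. ConstrainDevices in cgroup.conf):

```shell
#!/usr/bin/env bash
#SBATCH --gres=gpu:{{ num_gpu }}   # let SLURM allocate the GPUs for this job
#SBATCH --partition=gpu            # placeholder partition name; adjust for your cluster
# Do NOT export CUDA_VISIBLE_DEVICES here: with a gres request (and cgroup
# device constraints enabled), SLURM sets it to the allocated devices itself,
# so a manual assignment can only conflict with the scheduler's choice.
{{ run_cmd }}
```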