Job cannot run on a worker on another workstation

Hi,
My CryoSPARC instance (single-workstation installation) runs on the Platanus workstation. Since we recently built another workstation, Salix, I was hoping to also run my jobs on Salix to reduce the load on Platanus.
I therefore installed the worker package on Salix and connected it to the master on Platanus (roughly as sketched at the end of this post). After that, when I launched a job from Platanus, I could see the GPUs of Salix, so I tried to run a job on a GPU of Salix. It didn’t run.
There were no errors; the job only showed:

License is valid.
Launching job on lane default target salix..
Running job on remote worker node hostname salix..
(It stopped here. The status stayed at “Launched” and never turned to “Running”.)
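
For reference, I connected the worker roughly as follows (the hostnames, port, and SSD path here are placeholders rather than my actual values):

/path/to/cryosparc_worker/bin/cryosparcw connect --worker salix.full.domain --master platanus.full.domain --port 39000 --ssdpath /ssd/cryosparc2_cache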

Best,
Ming

One possible explanation is that Salix cannot access the project directory at the expected path.
What are the project and job uids of the launched, but not running, job?
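
In the meantime, the job's log (run on Platanus, substituting the actual project and job UIDs) may also show where the launch stalls:

cryosparcm joblog PXX JYY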

It’s a local refinement job, P11 J98.

@Ming99
Please can you post the outputs of these commands, run on Platanus:

cryosparcm cli "get_scheduler_targets()"
cs_project_dir=$(cryosparcm cli "get_project_dir_abs('P11')")
ssh Salix "hostname && stat -f $cs_project_dir"
[{'cache_path': '/ssd/cryosparc2_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 1, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 2, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 3, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 4, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 5, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 6, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 7, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}], 'hostname': 'platanus', 'lane': 'default', 'monitor_port': None, 'name': 'platanus', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], 'GPU': [0, 1, 2, 3, 4, 5, 6, 7], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]}, 'ssh_str': '*@platanus', 'title': 'Worker node platanus', 'type': 'node', 'worker_bin_path': '/data/*/*/software/cryosparc/cryosparc2_worker/bin/cryosparcw'}, {'cache_path': '/ssd/cryosparc2_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 1, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 2, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 3, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 4, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 5, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 6, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 7, 'mem': 11546394624, 'name': 'NVIDIA GeForce RTX 2080 Ti'}], 'hostname': 'platanus.*.*.*', 'lane': 'default', 'monitor_port': None, 'name': 'platanus.*.*.*', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], 'GPU': [0, 1, 2, 3, 4, 5, 6, 7], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]}, 'ssh_str': '*@platanus.*.*.*', 'title': 'Worker node platanus.*.*.*', 'type': 'node', 'worker_bin_path': '/data/*/*/software/cryosparc/cryosparc2_worker/bin/cryosparcw'}, {'cache_path': '/ssd/*', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 1, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 2, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 3, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 4, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 5, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 6, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 7, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090'}], 'hostname': 'salix.*.*.*', 'lane': 'default', 'monitor_port': None, 'name': 'salix.*.*.*', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 
48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95], 'GPU': [0, 1, 2, 3, 4, 5, 6, 7], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}, 'ssh_str': '*@salix.*.*.*', 'title': 'Worker node salix.*.*.*', 'type': 'node', 'worker_bin_path': '/data/*/*/cryosparc_worker/bin/cryosparcw'}]

ssh: Could not resolve hostname salix: Name or service not known

Please can you re-run the command on platanus

ssh Salix "hostname && stat -f $cs_project_dir"

replacing Salix with the "hostname": value for the salix entry in the get_scheduler_targets() output.

[Edited for correction and clarity]

ssh: Could not resolve hostname [{cache_path:: Name or service not known

I meant: replace Salix in the second command of this sequence (the ssh command) with the actual hostname that is concealed in your earlier post.

The purpose of the command is to confirm that the project directory is correctly shared between the platanus and salix hosts.
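
In other words, please run on Platanus (with salix.full.domain standing in for the masked "hostname": value of the salix target):

cs_project_dir=$(cryosparcm cli "get_project_dir_abs('P11')")
ssh salix.full.domain "hostname && stat -f $cs_project_dir"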

salix
stat: cannot read file system information for '/home/*/CS-*': No such file or directory

… indicates that your project directory is not shared with host salix at the expected path. Correct sharing of project directories across all worker nodes is a CryoSPARC prerequisite; one possible way to set this up is sketched below.
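
One possible approach (among others) is to export the directory that holds the project from platanus over NFS and mount it on salix at the same absolute path. The hostnames and the /home/projectuser path below are placeholders for your masked values; adapt them to wherever your CS-* project directories actually live:

# on platanus (NFS server): line for /etc/exports, then apply with: sudo exportfs -ra
/home/projectuser    salix.full.domain(rw,sync,no_subtree_check)

# on salix (NFS client); add a matching /etc/fstab entry to make the mount persistent
sudo mount -t nfs platanus.full.domain:/home/projectuser /home/projectuser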
I also noticed duplicate entries for platanus in the get_scheduler_targets() output: one under "hostname": "platanus" and another under "platanus.*.*.*". If this apparent duplication causes problems, you can remove the unwanted target with the remove_scheduler_target_node() cli function (details).
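
For example, if the entry registered under the short hostname "platanus" is the one you want to drop, something like this, run on Platanus, should remove it (please double-check against the linked documentation before removing a target):

cryosparcm cli "remove_scheduler_target_node('platanus')"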