Change CryoSPARC hostname

I know this has been covered in the past, but none of the solutions in the forum seem to be working. We got a system from a vendor with CryoSPARC v4.4.1 already installed. It is a standalone system with master and worker combined. While configuring the system, I changed the system hostname, and now CryoSPARC is not happy. When I try to run a job, the event log complains:

Launching job on lane default target sn4622xxxxxx
Running job on remote worker node hostname sn4622xxxxxx
Failed to launch! 255
ssh: Could not resolve hostname sn4622xxxxxx: Name or service not known

I changed the hostname in the CryoSPARC master config.sh to match the hostname from the command ‘hostname -f’:
export CRYOSPARC_MASTER_HOSTNAME="mynewhostname.domain.emory.edu"
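Note: config.sh changes only take effect after restarting the master:

cryosparcm restart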

‘host $(hostname -f)’ resolves properly.
NOTE: host $(hostname) does NOT resolve, but instead yields:
Host mynewhostname not found: 3(NXDOMAIN)
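If the short name ever needs to resolve, I assume I could add a local mapping in /etc/hosts – just a sketch, with a placeholder address:

<ip_address>   mynewhostname.domain.emory.edu   mynewhostname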

The output of cryosparcm cli "get_scheduler_targets()" results in this:

[{'cache_path': '/scr/cryosparc_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 1, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 2, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 3, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 4, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 5, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 6, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}, {'id': 7, 'mem': 25425608704, 'name': 'NVIDIA RTX A5000'}], 'hostname': 'sn4622xxxxxx', 'lane': 'default', 'monitor_port': None, 'name': 'sn4622xxxxxx', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127], 'GPU': [0, 1, 2, 3, 4, 5, 6, 7], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192]}, 'ssh_str': 'cryosparc_user@sn4622xxxxxx', 'title': 'Worker node sn4622xxxxxx', 'type': 'node', 'worker_bin_path': '/home/cryosparc_user/software/cryosparc/cryosparc_worker/bin/cryosparcw'}]

Where “sn4622xxxxxx” is the original hostname that the vendor assigned to the system.

Based on prior forum topics about changing the hostname, I found this command, but when I try to run it:
cryosparcm "remove_scheduler_target_node('sn4622xxxxxx')"
It complains:
Unknown cryoSPARC command remove_scheduler_target_node('sn4622xxxxxx')

WHY is it saying that is an unknown command? HELP! :)

Try inserting cli like so:

cryosparcm cli "remove_scheduler_target_node('sn4622xxxxxx')"

This command removes the existing, outdated worker record. You will also need to add a new record based on the new hostname using a command like

/home/cryosparc_user/software/cryosparc/cryosparc_worker/bin/cryosparcw connect \
  --master mynewhostname.domain.emory.edu \
  --worker mynewhostname.domain.emory.edu \
  --ssdpath /scr/cryosparc_cache \
  --port <base_port_number>

The command assumes a combined master/worker setup. Please review the parameters carefully. Aside from adjusting the hostnames, find the relevant base port number by running the command

cryosparcm status | grep "^export CRYOSPARC_BASE_PORT"
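On a default installation this prints something like

export CRYOSPARC_BASE_PORT=39000

where 39000 is the default base port; your installation may differ. Once the worker has been reconnected, you can confirm that the target record now shows the new hostname by re-running

cryosparcm cli "get_scheduler_targets()"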

DOH! Forgot the ‘cli’. I was just blindly following your advice from this post: New hostname and failed worker nodes – which didn’t have the ‘cli’ in there! But I now see in THIS one, Adding new hostname in user interface, that you did have it. I blame not enough coffee – I wasn’t thinking straight yet this morning! I think we’re good to go. I’ll check with the person who was trying to run the job. Mucho thanks! :)

Yes, I think that solved THAT problem. But now there is another issue. I tried to run ‘cryosparcm test workers P1’, but that failed with “Error: [CUresult.CUDA_ERROR_LAUNCH_FAILED] Call to cuDevicePrimaryCtxRetain results in CUDA_ERROR_LAUNCH_FAILED”. I’ll open a new issue if I can’t figure that out.
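My plan for debugging, assuming it is a driver/GPU-level problem rather than a CryoSPARC one: first check that the driver and all eight GPUs look healthy with

nvidia-smi

and then re-run the installation test with

cryosparcm test install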