Just updated to v4.1.1 (a complete nightmare). In the end I had to completely wipe and reinstall CryoSPARC, then manually create the user account because, despite all the information being in install.sh, no users were generated. Then I had to deal with command_core not launching, which was eventually fixed. I don't know if these issues are related to the Volume Tools issue described below, but it wouldn't surprise me.
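For the record, this is roughly how I recreated the missing account after the reinstall; a rough sketch only, with the email, password, and names replaced by placeholders rather than my real values (flag names may differ slightly between versions):

# Recreate the admin account the installer failed to generate.
# All values below are placeholders.
cryosparcm createuser \
  --email "admin@example.com" \
  --password "PLACEHOLDER" \
  --username "spuser" \
  --firstname "First" \
  --lastname "Last"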
When I try to run the Volume Tools utility, the job simply hangs upon queuing. The event log shows only the following:
License is valid.
Launching job on lane default target spgpu …
Running job on master node hostname spgpu
And then it just sits there doing nothing. The job metadata shows a bit more activity before it also stops. These are the last several lines:
"type": "volume_tools",
"version": "v4.1.1",
"ui_layouts": {
  "P24": {
    "show": true,
    "floater": false,
    "top": 232,
    "left": 1788,
    "width": 152,
    "height": 192,
    "groups": []
  },
  "P24W3": {
    "show": true,
    "floater": false,
    "top": 232,
    "left": 1344,
    "width": 152,
    "height": 192,
    "groups": []
  }
},
"no_check_inputs_ready": false,
"queued_to_gpu": false,
"queued_to_hostname": null,
"num_tokens": 0
}
Earlier in the metadata I found this error message:
"queue_message": "[Errno 2] No such file or directory: '/home/spuser/software/cryosparc/cryosparc2_worker/bin/cryosparcw'",
But the job gets a bit further than that, so I assume it somehow works around the error; what stands out is that it is looking for the cryosparcw executable in the wrong (legacy v2) directory. Compared to running a job such as "Import Volume", there is no "Job J### Started" line in the event log. In the metadata for "Import Volume" there is no call to cryosparcw at all.
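My guess is that the scheduler target still has the old v2-style worker path registered from before the wipe. This is what I was planning to check next; a sketch only, with the hostname, port, and worker path being guesses based on my install:

# Show what the master has registered for the worker (including its bin path):
cryosparcm cli "get_scheduler_targets()"

# If that still points at cryosparc2_worker, reconnect the worker with --update
# (run from the current worker install; 39000 is the default base port here):
/home/spuser/software/cryosparc/cryosparc_worker/bin/cryosparcw connect \
  --worker spgpu --master spgpu --port 39000 --update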
Install is a single workstation
Version 4.1.1 (confirmed by cryosparcm status)
All appropriate processes are running