Jobs stuck after updating to v4.6.2

When I run 2D classification jobs, they get stuck at the beginning; the event log shows only:

License is valid.
Launching job on lane default target sn4622120284 …
Running job on master node hostname sn4622120284

I have tried stopping CryoSPARC and killing all supervisord processes, then re-doing the update with “cryosparcm update --override” (see the sketch below).

But the issue remains.
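
For reference, roughly the sequence I ran (a sketch; the ps filter is just one way to find leftover processes, not an official recipe):

    cryosparcm stop
    ps -eo pid,ppid,cmd | grep -e cryosparc -e mongo    # look for leftover supervisord/mongod processes
    kill <pid>                                          # stop any stragglers by PID
    cryosparcm update --override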

Please can you post:

  • the outputs of these commands:
    cryosparcm status | grep -v LICENSE
    cryosparcm cli "get_scheduler_targets()"
    cryosparcm log update | tail -n 40

  • the project UID (of the format P123) of a 2D classification job that is stuck

$ cryosparcm status | grep -v LICENSE; cryosparcm cli "get_scheduler_targets()"; cryosparcm log update | tail -n 40
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/data/sw/cryosparc/cryosparc_master
Current cryoSPARC version: v4.6.2
----------------------------------------------------------------------------

CryoSPARC process status:

app                              RUNNING   pid 1539114, uptime 0:00:24
app_api                          RUNNING   pid 1539135, uptime 0:00:22
app_api_dev                      STOPPED   Not started
command_core                     RUNNING   pid 1539049, uptime 0:00:34
command_rtp                      RUNNING   pid 1539096, uptime 0:00:26
command_vis                      RUNNING   pid 1539082, uptime 0:00:28
database                         RUNNING   pid 1538938, uptime 0:00:37

----------------------------------------------------------------------------
License is valid
----------------------------------------------------------------------------

global config variables:
export CRYOSPARC_MASTER_HOSTNAME="sn4622120284"
export CRYOSPARC_DB_PATH="/data/sw/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true

[{'cache_path': '/scr/',
  'cache_quota_mb': None,
  'cache_reserve_mb': 10000,
  'desc': None,
  'gpus': [{'id': 0, 'mem': 25383469056, 'name': 'NVIDIA GeForce RTX 4090'},
           {'id': 1, 'mem': 25386352640, 'name': 'NVIDIA GeForce RTX 4090'}],
  'hostname': 'sn4622120284',
  'lane': 'default',
  'monitor_port': None,
  'name': 'sn4622120284',
  'resource_fixed': {'SSD': True},
  'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                             16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
                             30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
                             44, 45, 46, 47],
                     'GPU': [0, 1],
                     'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                             16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
                             30, 31]},
  'ssh_str': 'exx@sn4622120284',
  'title': 'Worker node sn4622120284',
  'type': 'node',
  'worker_bin_path': '/data/sw/cryosparc/cryosparc_worker/bin/cryosparcw'}]
command_vis: started
command_rtp: started
    command_rtp connection succeeded
    command_rtp startup successful
app: started
app_api: started
-----------------------------------------------------

CryoSPARC master started. 
 From this machine, access CryoSPARC and CryoSPARC Live at
    http://localhost:39000

 From other machines on the network, access CryoSPARC and CryoSPARC Live at
    http://sn4622120284:39000


Startup can take several minutes. Point your browser to the address
and refresh until you see the CryoSPARC web interface.
 
 ===================================================
 Now updating worker nodes.
 ===================================================
 
All workers: 
sn4622120284 exx@sn4622120284
 -------------------------------------------------
Updating worker sn4622120284
Direct update
\cp -f ./cryosparc_worker.tar.gz /data/sw/cryosparc/cryosparc_worker
This is a forced update. To finish updating this worker node, log onto the node and run cryosparcw update --override 
 ---------------------------------------------------
 Done updating all worker nodes.
 If any nodes failed to update, you can manually update them.
 Cluster worker installations must be manually updated.
 
 To update manually, copy the cryosparc_worker.tar.gz file into the
 cryosparc worker installation directory, and then run 
    $ bin/cryosparcw update 
 from inside the worker installation directory.

The project UID is P3; the stuck 2D classification job is P3-W16-J525.
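
In case it is useful, the stuck job's logs can also be tailed with the UIDs split out (P3 and J525 here); a sketch, assuming these subcommands are available in this version:

    cryosparcm joblog P3 J525 | tail -n 40      # the job's stdout/stderr log
    cryosparcm eventlog P3 J525 | tail -n 40    # the event log shown in the UI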

Was the worker update completed as instructed in the update log above (“To finish updating this worker node, log onto the node and run cryosparcw update --override”)?

(Note the distinction between cryosparcm and cryosparcw.)
What are the outputs of these commands?

cat /data/sw/cryosparc/cryosparc_master/version
cat /data/sw/cryosparc/cryosparc_worker/version

$ cat cryosparc_master/version
v4.6.2
$ cat cryosparc_worker/version
v4.6.2

$ \cp -f ./cryosparc_worker.tar.gz /data/sw/cryosparc/cryosparc_worker
$ cryosparc_worker/bin/cryosparcw update --override
cryosparc_worker/bin/cryosparcw: line 67: config.sh: No such file or directory
ERROR: Is cryoSPARC installed? Run install.sh to complete installation. See https://guide.cryosparc.com for full instructions.
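
That error says the worker's config.sh could not be found, i.e. the worker installation itself was incomplete; a quick check (path as in the install above):

    ls -l /data/sw/cryosparc/cryosparc_worker/config.sh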

cryosparc_worker$ ./install.sh --license $LICENSE_ID

$ cat cryosparc_master/version; cat cryosparc_worker/version
v4.6.2
v4.6.2

$ cryosparcm restart

I think I just missed the step of running “cryosparcw update --override” on the worker node.
After finishing all the steps above, the issue is fixed and the jobs are running well.
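
For anyone hitting the same issue, a recap of the sequence that worked above (a sketch; paths are specific to this install, adjust to yours):

    cryosparcm stop
    cryosparcm update --override                       # re-update the master; stages cryosparc_worker.tar.gz
    \cp -f ./cryosparc_worker.tar.gz /data/sw/cryosparc/cryosparc_worker   # run from inside cryosparc_master
    cd /data/sw/cryosparc/cryosparc_worker
    ./bin/cryosparcw update --override                 # the worker-update step I had missed
    ./install.sh --license $LICENSE_ID                 # only needed because config.sh was missing
    cryosparcm restart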