Our cryoSPARC instance is definitely running and I can execute jobs correctly, but the cryosparcm status command says otherwise.
> cryosparcm status
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/data2/cryosparcuser/cryosparc_master
Current cryoSPARC version: v3.2.0+210831
----------------------------------------------------------------------------
CryoSPARC is not running.
----------------------------------------------------------------------------
This usually happens when the .sock file that supervisord uses to keep track of cryosparc processes is deleted. This file lives in the /tmp folder, so if that folder was wiped out for any reason (quite common), you’ll see this happen.
There's no way to reconnect to the live cryoSPARC instance, so the next thing to do is to kill off the running, zombied cryoSPARC processes and start a fresh instance.
ps -ax | grep "supervisord" (kill only the process that is running from your cryoSPARC install)
ps -ax | grep "cryosparc" (kill all the matching processes related to your cryoSPARC instance)
ps -ax | grep "mongod" (kill only the process running your cryoSPARC database)
e.g. kill 82681
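The steps above can be sketched as a short shell session. The install path below is the one from the status output in this thread; substitute your own CRYOSPARC_ROOT_DIR. Listing matches first, rather than killing blindly, avoids taking down another user's instance on a shared node:

```shell
# Assumed install path (from this thread); substitute your own.
CRYOSPARC_ROOT=/data2/cryosparcuser/cryosparc_master

# List candidate processes first -- verify each PID belongs to YOUR instance.
# If your database lives outside the install dir, the mongod filter may need adjusting.
ps -ax | grep supervisord | grep "$CRYOSPARC_ROOT"
ps -ax | grep cryosparc   | grep "$CRYOSPARC_ROOT"
ps -ax | grep mongod      | grep "$CRYOSPARC_ROOT"

# Then kill each verified PID, e.g.:
# kill 82681
```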
Yes, there is: the CRYOSPARC_SUPERVISOR_SOCK_FILE environment variable lets you move the sock file out of /tmp.
The default value is /tmp/cryosparc-supervisor-$root_dir_hash.sock
where root_dir_hash is $(echo -n $CRYOSPARC_ROOT_DIR | md5sum | awk '{print $1}').
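Put together, the default sock path can be computed for the install in this thread (the path comes from the status output above; the hash value will differ for every install):

```shell
# Compute the default supervisor sock path for a given install.
CRYOSPARC_ROOT_DIR=/data2/cryosparcuser/cryosparc_master
root_dir_hash=$(echo -n "$CRYOSPARC_ROOT_DIR" | md5sum | awk '{print $1}')
echo "/tmp/cryosparc-supervisor-${root_dir_hash}.sock"

# Check whether the file still exists:
ls -l "/tmp/cryosparc-supervisor-${root_dir_hash}.sock" 2>/dev/null \
  || echo "sock file missing"
```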
You can append this variable to cryosparc_master/config.sh. Once you've saved the file, restart cryoSPARC: cryosparcm stop, then cryosparcm start.
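For example, appended to cryosparc_master/config.sh (the path below is illustrative; choose any location that survives /tmp cleanup and is writable by the cryosparc user):

```shell
# Relocate the supervisor sock file out of /tmp so periodic /tmp cleanup
# cannot orphan the instance. The path below is an assumption -- pick any
# directory the cryosparc user can write to.
export CRYOSPARC_SUPERVISOR_SOCK_FILE=/data2/cryosparcuser/cryosparc-supervisor.sock
```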
@stephan
Is there any risk in continuing to use the zombied (by deletion of the sock file) cryoSPARC instance?
This situation is hard to recognize; I hadn't noticed it until I manually ran the cryosparcm status command.
You won't run into any problems unless someone tries to start cryoSPARC again; that would create two instances trying to make changes to the same database, which could become a bigger problem.
Also, you won’t be able to update the instance.