cryoSPARC Live: legacy version reachable on port 39007, but new version on port 39006 is not

Hello everyone,

Recently we set up a new standalone workstation for cryoSPARC v3.1.0. After the installation, everything works fine (with the default port range 39000-39010) except the NEW cryoSPARC Live web application. I can only access the LEGACY cryoSPARC Live application on port 39007 (from my own laptop via `ssh -N -f -L localhost:39007:localhost:39007`), but I can't open the new cryoSPARC Live application on port 39006.
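For completeness, the tunnels I open look like this (a sketch only: `user@workstation` is a placeholder, not the real login):

```shell
# Forward both cryoSPARC Live UI ports from the workstation to my laptop.
# "user@workstation" is a placeholder for the actual SSH login.
ssh -N -f -L localhost:39006:localhost:39006 user@workstation   # new Live UI
ssh -N -f -L localhost:39007:localhost:39007 user@workstation   # legacy Live UI
```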

`cryosparcm status` shows every process except the XXXX_dev ones as RUNNING. But I found that `netstat -tuplen | grep :3900` shows only ports 39000, 39001, 39002, 39003, 39005 and 39007 in the LISTEN state.

I'm wondering why cryoSPARC can't open port 39006 in the LISTEN state. I've checked that no other application was using port 39006 before the installation.
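The way I check a single port is just a grep over the listener table; a minimal sketch, assuming `ss` from iproute2 is installed (any `ss -tln`/`netstat -tln`-style listing works as input):

```shell
# port_listening PORT — succeed if the given port appears as a local
# listening address in an `ss -tln`/`netstat -tln`-style listing on stdin.
port_listening() {
    grep -q "[:.]$1 "
}

# Check port 39006 on this machine (prints one line either way).
if ss -tln 2>/dev/null | port_listening 39006; then
    echo "port 39006 is in LISTEN state"
else
    echo "nothing is listening on port 39006"
fi
```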

Any suggestions would be helpful.

Best regards,
An Mu

Hi @peteranmu,

Apologies for the delayed response. Are you still encountering this error? The output of `cryosparcm status` indicates whether each process is running. Please ensure that app is RUNNING.

Hi @sdawood ,

Thanks for replying. Yes, I'm still encountering this error, even after updating cryoSPARC to the latest version, v3.2.0.

The output of `cryosparcm status` showed that every process, including app (but not the dev versions), is RUNNING:

Current cryoSPARC version: v3.2.0

CryoSPARC process status:

app RUNNING pid 15760, uptime 0:00:11
app_dev STOPPED Not started
command_core RUNNING pid 11150, uptime 0:10:08
command_rtp RUNNING pid 16196, uptime 0:09:57
command_vis RUNNING pid 13847, uptime 0:09:59
database RUNNING pid 11068, uptime 0:10:13
liveapp RUNNING pid 1472, uptime 0:09:41
liveapp_dev STOPPED Not started
webapp RUNNING pid 1426, uptime 0:09:45
webapp_dev STOPPED Not started


And `netstat -nlptu | grep 3900` showed that the other 3900* ports are open and in the LISTEN state:

tcp 0 0 0.0.0.0:39005 0.0.0.0:* LISTEN 16196/python
tcp 0 0 0.0.0.0:39007 0.0.0.0:* LISTEN 1472/node
tcp 0 0 0.0.0.0:39000 0.0.0.0:* LISTEN 1426/node
tcp 0 0 0.0.0.0:39001 0.0.0.0:* LISTEN 11068/mongod
tcp 0 0 0.0.0.0:39002 0.0.0.0:* LISTEN 11150/python
tcp 0 0 0.0.0.0:39003 0.0.0.0:* LISTEN 13847/python

Best regards,
An Mu

Hi @sdawood,

I found something odd just now. Every time I run `cryosparcm status`, app shows a new pid, and its uptime is only a few seconds. Could this be the cause of the error?


CryoSPARC process status:

app RUNNING pid 11146, uptime 0:00:06
app_dev STOPPED Not started
command_core RUNNING pid 11150, uptime 2:13:11
command_rtp RUNNING pid 16196, uptime 2:13:00
command_vis RUNNING pid 13847, uptime 2:13:02
database RUNNING pid 11068, uptime 2:13:16
liveapp RUNNING pid 1472, uptime 2:12:44
liveapp_dev STOPPED Not started
webapp RUNNING pid 1426, uptime 2:12:48
webapp_dev STOPPED Not started


Best regards,
An Mu

Hi @peteranmu,

Generally, all processes should have roughly the same uptime, so this suggests the Live application process is crashing and restarting. Could you please reply with the output of the following command:

cryosparcm log app
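As a quick check for such a restart loop, the uptime field of a status line can be inspected mechanically. A sketch (the line layout is assumed from the output you posted; "loop" here just means an uptime under a minute):

```shell
# restart_loop "LINE" — parse the uptime out of a `cryosparcm status` line
# and flag the process if it was (re)started less than a minute ago.
restart_loop() {
    up=$(printf '%s\n' "$1" | sed -n 's/.*uptime \([0-9:]*\).*/\1/p')
    case "$up" in
        0:00:*) echo "possible restart loop (uptime $up)" ;;
        *)      echo "looks stable (uptime $up)" ;;
    esac
}

restart_loop "app RUNNING pid 11146, uptime 0:00:06"
# -> possible restart loop (uptime 0:00:06)
```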

Hi @sdawood,

This is what `cryosparcm log app` shows every few seconds:

MongoTimeoutError: Server selection timed out after 10000 ms
at Timeout._onTimeout (/home/cryosparc_user/cryosparc_instance/cryosparc_master/cryosparc_app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/sdam/topology.js:773:16)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
[Symbol(mongoErrorContextSymbol)]: {}
}
(node:28055) Warning: Accessing non-existent property 'count' of module exports inside circular dependency
(Use node --trace-warnings ... to show where the warning was created)
(node:28055) Warning: Accessing non-existent property 'findOne' of module exports inside circular dependency
(node:28055) Warning: Accessing non-existent property 'remove' of module exports inside circular dependency
(node:28055) Warning: Accessing non-existent property 'updateOne' of module exports inside circular dependency
/home/cryosparc_user/cryosparc_instance/cryosparc_master/cryosparc_app/bundle/programs/server/node_modules/fibers/future.js:313
throw(ex);
^

Hi @peteranmu,

This may be due to a zombie process that is preventing the cryoSPARC Live app from running correctly.

  1. First, ensure cryoSPARC is turned off:

cryosparcm stop

  2. Then, ensure no zombie processes are still running. Kill them if they are:

ps -ax | grep "supervisord" (kill only the process that is running from your cryoSPARC install)
ps -ax | grep "cryosparc_command" (kill all the matching processes related to your cryoSPARC instance)
ps -ax | grep "mongod" (kill only the process running your cryoSPARC database)

e.g. kill 82681

  3. Remove the socket file inside the /tmp folder:

/tmp/cryosparc-supervisor...XXX.sock

  4. Start cryoSPARC:

cryosparcm start

cryosparcm status
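The steps above can be sketched as a single script. This is a hedged outline, not an official tool: the socket filename under /tmp is per-instance, and you should review the listed pids before killing anything.

```shell
#!/usr/bin/env bash
# Sketch of the cleanup steps above. Review every pid it prints before
# killing anything; the socket filename varies per instance.
set -u

# 1. Stop cryoSPARC (skipped if the cryosparcm CLI is not on PATH here).
if command -v cryosparcm >/dev/null 2>&1; then
    cryosparcm stop
fi

# 2. List candidate leftover processes; kill only the ones belonging to
#    YOUR cryoSPARC install (e.g. kill 82681).
list_leftovers() {
    for name in supervisord cryosparc_command mongod; do
        pids=$(pgrep -f "$name" || true)
        if [ -n "$pids" ]; then
            echo "candidate '$name' pids: $pids"
        else
            echo "no '$name' processes found"
        fi
    done
}
list_leftovers

# 3. Remove the stale supervisor socket, if any (filename is per-instance).
rm -f /tmp/cryosparc-supervisor-*.sock

# 4. Start cryoSPARC again and check the process table.
if command -v cryosparcm >/dev/null 2>&1; then
    cryosparcm start
    cryosparcm status
fi
```

`pgrep -f` is used here instead of `ps -ax | grep` only to avoid matching the grep itself; the effect is the same.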

Please let me know if that helps.

- Suhail

Hi @sdawood,

I ran the commands step by step, but there were no processes matching "supervisord", "cryosparc_command" or "mongod", and the socket file /tmp/cryosparc-supervisor…XXX.sock didn't exist. When I started cryoSPARC again, the problem with app still existed.

By the way, this workstation was rebooted by someone else a few days ago while cryoSPARC was running, so cryoSPARC reported a socket-related error afterwards. Following the instructions, I deleted the socket file inside the /tmp folder and restarted cryoSPARC.

Hi @peteranmu,

Apologies for the delay. We haven't seen this behaviour before; your best bet will be to try reinstalling with a forced update: https://guide.cryosparc.com/setup-configuration-and-management/software-updates#forced-update

  1. cryosparcm update --override on the master node, then
  2. bin/cryosparcw update --override on each worker node (inside the cryosparc_worker directory)

If you have information in the database you would like to keep, please run a backup beforehand: https://guide.cryosparc.com/setup-configuration-and-management/management-and-monitoring/cryosparcm#cryosparcm-backup

Hi @sdawood,

I've tried the forced update and a complete reinstallation, but the problem still exists. Now I'm wondering whether it's an operating-system-level problem. We are using CentOS 7 with the 3.10.0-862.el7.x86_64 kernel.