cryoSPARC Live: only legacy version available on port 39007, new app on port 39006 not reachable

Hello everyone,

Recently we set up a new standalone workstation for cryoSPARC v3.1.0. After installation, everything works fine (with the default port range 39000-39010) without any errors except the NEW cryoSPARC Live web application. I can access the LEGACY cryoSPARC Live application on port 39007 (from my own laptop via 'ssh -N -f -L localhost:39007:localhost:39007'), but I cannot open the new cryoSPARC Live application on port 39006.
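For anyone reproducing the setup, a sketch of the two tunnels involved (the user and hostname are placeholders for the actual workstation):

# legacy Live app - this one works:
ssh -N -f -L localhost:39007:localhost:39007 cryosparc_user@workstation

# new Live app - the connection fails, since nothing on the
# workstation is listening on 39006:
ssh -N -f -L localhost:39006:localhost:39006 cryosparc_user@workstation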

'cryosparcm status' shows every process except the *_dev ones as RUNNING. But 'netstat -tuplen | grep :3900' shows that only ports 39000, 39001, 39002, 39003, 39005 and 39007 are in the LISTEN state.

I'm wondering why cryoSPARC can't open port 39006 in the LISTEN state. I checked that no other application was using port 39006 before the installation.
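(A quick way to double-check that the port is genuinely free, assuming lsof and ss are installed:)

sudo lsof -i :39006          # prints nothing if no process holds the port
sudo ss -tlnp | grep 39006   # alternative check with ss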

Any suggestions would be helpful.

Best regards,
An Mu

Hi @peteranmu,

Apologies for the delayed response. Are you still encountering this error? The output of cryosparcm status should indicate whether each process is running. Please ensure that app is RUNNING.

Hi @sdawood ,

Thanks for replying. Yes, I'm still encountering this error, even after updating cryoSPARC to the latest version, v3.2.0.

The output of cryosparcm status showed that every process, including app, is RUNNING except the dev versions:

Current cryoSPARC version: v3.2.0

CryoSPARC process status:

app RUNNING pid 15760, uptime 0:00:11
app_dev STOPPED Not started
command_core RUNNING pid 11150, uptime 0:10:08
command_rtp RUNNING pid 16196, uptime 0:09:57
command_vis RUNNING pid 13847, uptime 0:09:59
database RUNNING pid 11068, uptime 0:10:13
liveapp RUNNING pid 1472, uptime 0:09:41
liveapp_dev STOPPED Not started
webapp RUNNING pid 1426, uptime 0:09:45
webapp_dev STOPPED Not started


And netstat -nlptu | grep 3900 showed that the other 3900* ports are open and in the LISTEN state:

tcp 0 0 0.0.0.0:39005 0.0.0.0:* LISTEN 16196/python
tcp 0 0 0.0.0.0:39007 0.0.0.0:* LISTEN 1472/node
tcp 0 0 0.0.0.0:39000 0.0.0.0:* LISTEN 1426/node
tcp 0 0 0.0.0.0:39001 0.0.0.0:* LISTEN 11068/mongod
tcp 0 0 0.0.0.0:39002 0.0.0.0:* LISTEN 11150/python
tcp 0 0 0.0.0.0:39003 0.0.0.0:* LISTEN 13847/python

Best regards,
An Mu

Hi @sdawood,

I just found something weird. Every time I run cryosparcm status, app shows a new pid, and its uptime is only a few seconds. Could this be the cause of the error?


CryoSPARC process status:

app RUNNING pid 11146, uptime 0:00:06
app_dev STOPPED Not started
command_core RUNNING pid 11150, uptime 2:13:11
command_rtp RUNNING pid 16196, uptime 2:13:00
command_vis RUNNING pid 13847, uptime 2:13:02
database RUNNING pid 11068, uptime 2:13:16
liveapp RUNNING pid 1472, uptime 2:12:44
liveapp_dev STOPPED Not started
webapp RUNNING pid 1426, uptime 2:12:48
webapp_dev STOPPED Not started
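
A simple way to watch this restart loop, assuming the status format shown above:

watch -n 2 'cryosparcm status | grep "^app "'
# the pid changes on every refresh while the process keeps
# crashing and being restarted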


Best regards,
An Mu

Hi @peteranmu,

Generally all processes should have around the same uptime, so this suggests an error in the Live application process. Could you please reply with the output of the following command:

cryosparcm log app

Hi @sdawood,

This is what cryosparcm log app shows every few seconds:

MongoTimeoutError: Server selection timed out after 10000 ms
at Timeout._onTimeout (/home/cryosparc_user/cryosparc_instance/cryosparc_master/cryosparc_app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/sdam/topology.js:773:16)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
[Symbol(mongoErrorContextSymbol)]: {}
}
(node:28055) Warning: Accessing non-existent property 'count' of module exports inside circular dependency
(Use node --trace-warnings ... to show where the warning was created)
(node:28055) Warning: Accessing non-existent property 'findOne' of module exports inside circular dependency
(node:28055) Warning: Accessing non-existent property 'remove' of module exports inside circular dependency
(node:28055) Warning: Accessing non-existent property 'updateOne' of module exports inside circular dependency
/home/cryosparc_user/cryosparc_instance/cryosparc_master/cryosparc_app/bundle/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
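
A quick way to test whether the app can reach the database over TCP at all (assuming the default base port 39000, so MongoDB on 39001):

nc -zv localhost 39001
# "succeeded" means the port is reachable; a refusal or hang points
# at the database side rather than the app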

Hi @peteranmu,

This may be due to a zombie process that is preventing the cryoSPARC Live app from running correctly.

  1. First, ensure cryoSPARC is turned off:

cryosparcm stop

  2. Then, ensure no zombie processes are still running. Kill them if they are:

ps -ax | grep "supervisord" (kill only the process that is running from your cryoSPARC install)
ps -ax | grep "cryosparc_command" (kill all the matching processes related to your cryoSPARC instance)
ps -ax | grep "mongod" (kill only the process running your cryoSPARC database)

e.g. kill 82681

  3. Remove the socket file inside the /tmp folder:

/tmp/cryosparc-supervisor...XXX.sock

  4. Start cryoSPARC again (a consolidated sketch of all four steps follows below):

cryosparcm start

cryosparcm status
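
For convenience, a consolidated sketch of the steps above (the pid and socket name are placeholders; match them against your own ps output before killing anything):

cryosparcm stop

# look for leftovers from THIS cryoSPARC instance only
ps -ax | grep "supervisord"
ps -ax | grep "cryosparc_command"
ps -ax | grep "mongod"
kill <pid>    # repeat for each matching pid

# remove the stale supervisor socket (exact name varies per install)
rm /tmp/cryosparc-supervisor-*.sock

cryosparcm start
cryosparcm status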

Please let me know if that helps.

- Suhail

Hi @sdawood,

I ran the commands step by step, but there were no processes matching "supervisord", "cryosparc_command" or "mongod", and the socket file /tmp/cryosparc-supervisor...XXX.sock didn't exist. When I started cryoSPARC again, the app problem was still there.

By the way, this workstation was rebooted by someone else a few days ago while cryoSPARC was running, so cryoSPARC reported a socket-related error. Following the instructions, I deleted the socket file inside the /tmp folder and restarted cryoSPARC.

Hi @peteranmu,

Apologies for the delay. We haven't seen this behaviour before; your best bet will be to try a reinstall with a forced update: https://guide.cryosparc.com/setup-configuration-and-management/software-updates#forced-update

  1. cryosparcm update --override on the master node, then
  2. bin/cryosparcw update --override on each worker node (inside the cryosparc_worker dir)

If you have information in the database you would like to keep, please run a backup beforehand: https://guide.cryosparc.com/setup-configuration-and-management/management-and-monitoring/cryosparcm#cryosparcm-backup
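
A sketch of that sequence (the backup directory is an example path):

# back up the database first
cryosparcm backup --dir=/data/cryosparc_backups

# forced update on the master
cryosparcm update --override

# then on each worker, inside the cryosparc_worker directory
bin/cryosparcw update --override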

Hi @sdawood,

I've tried a forced update and a complete reinstallation, but the problem still exists. Now I'm wondering whether it's an operating-system-level problem. We are using CentOS 7 with the 3.10.0-862.el7.x86_64 kernel.

Hi @peteranmu,

Apologies for the delay. This could be a firewall issue - could you try the options suggested in this topic and let me know if that resolves the issue? Webapp started but page will not launch
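
For reference, a minimal firewalld check on CentOS 7 (the range assumes the default ports 39000-39010):

sudo firewall-cmd --state          # "running" or "not running"
sudo firewall-cmd --list-ports     # ports currently opened
# if the firewall is active and the cryoSPARC ports are missing:
sudo firewall-cmd --add-port=39000-39010/tcp --permanent
sudo firewall-cmd --reload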

- Suhail

Hi @sdawood,
I've checked that the firewall was actually not active, using firewall-cmd --state. And there is still an error in the Live application process when I run cryosparcm log app :frowning:
An Mu

Hi @peteranmu,

Sorry to hear you’re still having trouble running the application! Do you still see the same error?

Hi @sdawood,
Yes, it's still the same error as before. I'm wondering what causes this "MongoTimeoutError", considering that there weren't any zombie processes when running:
ps -ax | grep "supervisord"
ps -ax | grep "cryosparc_command"
ps -ax | grep "mongod"

Hi @peteranmu,

Thanks for the additional info. Are you able to connect to the MongoDB shell?

cryosparcm mongo
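
Once connected, a couple of quick checks at the meteor:PRIMARY> prompt (standard MongoDB shell helpers, nothing cryoSPARC-specific):

rs.status()                     # replica set health; this node should be PRIMARY
db.serverStatus().connections   # current vs. available connections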

Hi @sdawood,
Running cryosparcm mongo shows the following:

MongoDB shell version v3.4.10
connecting to: mongodb://localhost:39001/meteor
MongoDB server version: 3.4.10
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2021-08-12T18:10:39.133+0800 I CONTROL [initandlisten]
2021-08-12T18:10:39.133+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2021-08-12T18:10:39.133+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2021-08-12T18:10:39.133+0800 I CONTROL [initandlisten]
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten]
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten]
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten]
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2021-08-12T18:10:39.134+0800 I CONTROL [initandlisten]
meteor:PRIMARY>
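
(Side note: the transparent huge pages warnings above are generic mongod advice; on CentOS 7 they can be addressed as root with something like the lines below. This is tuning, not necessarily related to the timeout:)

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# not persistent across reboots; a systemd unit or tuned profile is
# needed to make it stick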

Hi @sdawood,

I am experiencing a similar problem. In my case cryoSPARC Live was running well, but at some point my workstation crashed and restarted. Since then I cannot access the Live app through localhost:39006.

I also see that about every 30 seconds the app stops running and restarts with a new pid, and when I run cryosparcm log app I observe the same error:
MongoTimeoutError: Server selection timed out after 10000 ms at Timeout._onTimeout

I would appreciate any feedback.

Thanks in advance,
Igor