Web interface unresponsive


My CryoSPARC web interface recently became unresponsive unless I click the refresh button in my web browser.

If I run `cryosparcm restart webapp`, it is responsive for a few minutes, but then becomes unresponsive again.

This seems to have started right after I ran the database backup command for the first time.

Any suggestion is greatly appreciated!


A running database backup may reduce the responsiveness of the database to other workloads, like queries from the CryoSPARC app.
Another possible source of UI delays can be the mode of access to the UI. What is the full URL you enter into your browser’s address bar to access the CryoSPARC app?

Hi, thanks for your reply! The full URL I use is

The UI delay persists after the database backup job finished (even rebooting the workstation didn’t help).

Using ssh forwarding didn’t seem to make much difference either.

Please can you post the ssh command that you used for port forwarding?

I added this line in my ~/.profile (I use a MacBook Pro)

alias cs3='lsof -ti:39003 | xargs kill -9; ssh -N -f -L localhost:39003:localhost:39000 -p22 ruiz@

I forgot to mention that I am still using cryosparc v3.4.0
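As a side note on the alias above: `kill -9` on whatever happens to own port 39003 can hit unrelated processes. A more targeted variant, sketched as a shell function (the hostname `mip.example.edu` below is a placeholder for the redacted one, not the actual host):

```shell
# Sketch of a tunnel helper for ~/.profile; the hostname is a placeholder
# for the actual CryoSPARC master host.
cs3() {
  # terminate only a previous instance of this exact tunnel, if one exists
  pkill -f 'ssh -N -f -L localhost:39003:localhost:39000' 2>/dev/null
  # forward local port 39003 to port 39000 (the CryoSPARC web app) on the master
  ssh -N -f -L localhost:39003:localhost:39000 -p 22 ruiz@mip.example.edu
}
```

After re-sourcing ~/.profile, running `cs3` replaces any existing instance of this tunnel instead of killing every process bound to port 39003.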

Would you not then connect to http://localhost:39003 instead of


Sorry, in my previous post I meant to say:
Using ssh forwarding didn’t seem to make much difference either.

What are the outputs of these commands, run in a fresh shell on the CryoSPARC master host under the Linux account that owns the CryoSPARC processes:

top -b -n 1 | head -n 6
ps -eo pid,ppid,start,pmem,pcpu,cmd | grep -e cryosparc -e mongo
eval $(cryosparcm env) # load CryoSPARC environment
df -h $CRYOSPARC_DATABASE_PATH # check storage holding the database
exit # exit the shell

[Edited: corrected variable name]

CryoSPARC is currently installed under my own account, “ruiz”.
The actual /data/cryosparc_dir is on /ssd

Here are the outputs

ruiz@mip:/home/ruiz$ top -b -n 1 | head -n 6
top - 17:12:54 up 1 day, 23:57,  2 users,  load average: 2.05, 1.62, 1.63
Tasks: 2489 total,   1 running, 2486 sleeping,   0 stopped,   2 zombie
%Cpu(s):  0.9 us,  0.2 sy,  0.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem : 515677.8 total, 177107.8 free, 331826.5 used,   6743.5 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used. 176549.4 avail Mem

ruiz@mip:/home/ruiz$ ps -eo pid,ppid,start,pmem,pcpu,cmd | grep -e cryosparc -e mongo
   7985       1   Mar 04  0.0  0.0 python /data/bin/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /data/bin/cryosparc/cryosparc_master/supervisord.conf
   7987    7985   Mar 04  0.6  3.7 mongod --dbpath /data/bin/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --nojournal --wiredTigerCacheSizeGB 4 --bind_ip_all
   8129    7985   Mar 04  0.0  0.8 python -c import cryosparc_command.command_core as serv; serv.start(port=39002)
   8176    7985   Mar 04  0.1  0.6 python -c import cryosparc_command.command_rtp as serv; serv.start(port=39005)
   8326    7985   Mar 04  0.0  0.8 /data/bin/cryosparc/cryosparc_master/cryosparc_app/api/nodejs/bin/node ./bundle/main.js
  81729    8129 16:20:13  0.0  0.0 bash /data/bin/cryosparc/cryosparc_worker/bin/cryosparcw run --project P3 --job J3010 --master_hostname mip.wustl.edu --master_command_core_port 39002
  81792   81729 16:20:15  0.0  0.0 python -c import cryosparc_compute.run as run; run.run() --project P3 --job J3010 --master_hostname mip.wustl.edu --master_command_core_port 39002
  81793   81792 16:20:16  8.6  115 python -c import cryosparc_compute.run as run; run.run() --project P3 --job J3010 --master_hostname mip.wustl.edu --master_command_core_port 39002
 121546    7985 16:50:48  0.0  0.6 /data/bin/cryosparc/cryosparc_master/cryosparc_webapp/nodejs/bin/node ./bundle/main.js
 136182  136083 17:13:01  0.0  0.0 grep -e cryosparc -e mongo
2152161    7985   Mar 04  0.0  0.3 python -c import cryosparc_command.command_vis as serv; serv.start(port=39003)
ruiz@mip:/home/ruiz$ eval $(cryosparcm env)
ruiz@mip:/home/ruiz$ df -h $CRYOSPARC_DATABASE_PATH
Filesystem                   Size  Used Avail Use% Mounted on
tmpfs                         51G   60M   51G   1% /run
/dev/md0p3                   1.8T   67G  1.6T   5% /
tmpfs                        252G     0  252G   0% /dev/shm
tmpfs                        5.0M  4.0K  5.0M   1% /run/lock
/dev/sdc1                     19T  480G   17T   3% /data
/dev/md0p1                   974M  263M  645M  29% /boot
/dev/sda1                    1.1G  6.1M  1.1G   1% /boot/efi
ssd                           56T   47T  9.5T  84% /ssd
zhangnas11gb:/mnt/tank/data  262T  198T   64T  76% /zhangnas1/data
tmpfs                         51G   64K   51G   1% /run/user/1906098
ruiz@mip:/home/ruiz$ exit
Connection to closed.

@zhangrui_wustl Thanks for posting this information.
Please can you email us

  • browser debugging Console and Network logs, recorded while you are experiencing the unresponsiveness. Please reload the UI page after opening the browser Developer Tools Network tab and before downloading the HAR file, and compress the console log and HAR file before attaching them to the email.
  • the tgz file created by the command
    cryosparcm snaplogs
    or, since that command is not available in v3.4.0, by
    tar -zcvf snaplogs20240306.tar.gz /data/bin/cryosparc/cryosparc_master/run/*log

I will send you a direct message about the e-mail address.

@zhangrui_wustl According to the /data/bin/cryosparc/cryosparc_master/run/supervisord.log file you shared with us, the command_vis process is frequently terminated by a SIGABRT signal. Unfortunately, we do not know why. We can only suggest to

  1. allow all running CryoSPARC jobs to complete, or terminate those jobs,
  2. stop CryoSPARC and confirm all related processes have been terminated (guide),
  3. reboot the server,

but we are unfortunately not sure this will resolve the problem.

Hi @wtempel Thank you so much for looking into it! We’ll do steps 1-2-3 and let you know.