After power outage, CryoSPARC "could not get database status"

Tropical storm Debby caused a power loss in our building while our workstation was running CryoSPARC (standalone, v4.3.1), and now we cannot get it to start. A neighboring lab running CryoSPARC on a similar standalone setup fixed the same issue by deleting a .sock file from /tmp/, but that did not work for us. The traceback is as follows:


(base) neher-lab@Diana:~$ cryosparcm start
Starting cryoSPARC System master process..
CryoSPARC is not already running.
configuring database
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/neher-lab/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 48, in configure_mongo
    initialize_replica_set()
  File "/home/neher-lab/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 87, in initialize_replica_set
    admin_db = try_get_pymongo_db(mongo_client)
  File "/home/neher-lab/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 250, in try_get_pymongo_db
    admin_db.command(({'serverStatus': 1}))
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/database.py", line 827, in command
    with self.__client._socket_for_reads(read_preference, session) as (sock_info, secondary_ok):
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1478, in _socket_for_reads
    server = self._select_server(read_preference, session)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1436, in _select_server
    server = topology.select_server(server_selector)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 250, in select_server
    return random.choice(self.select_servers(selector, server_selection_timeout, address))
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 211, in select_servers
    server_descriptions = self._select_servers_loop(selector, server_timeout, address)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 226, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused, Timeout: 20.0s, Topology Description: <TopologyDescription id: 66b5071cad207245265e4d74, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused')>]>
[2024-08-08T13:58:59-04:00] Error configuring database. Most recent database log lines:
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 16:37:51:8
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 19:48:01:12
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 21:26:22:1
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 23:04:33:11
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  8 02:14:58:1
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  8 03:53:03:3
2024-08-08T13:57:46.238-0400 I STORAGE  [initandlisten] WiredTiger record store oplog processing took 11ms
 Version: Unable to find metadata for table:collection-0-7580726846594704023
2024-08-08T13:57:46.239-0400 F -        [initandlisten] Fatal Assertion 34433 at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp 644
2024-08-08T13:57:46.239-0400 F -        [initandlisten] \n\n***aborting after fassert() failure\n\n
(base) neher-lab@Diana:~$ cryosparcm configuredb
configuring database
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/neher-lab/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 48, in configure_mongo
    initialize_replica_set()
  File "/home/neher-lab/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 87, in initialize_replica_set
    admin_db = try_get_pymongo_db(mongo_client)
  File "/home/neher-lab/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 250, in try_get_pymongo_db
    admin_db.command(({'serverStatus': 1}))
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/database.py", line 827, in command
    with self.__client._socket_for_reads(read_preference, session) as (sock_info, secondary_ok):
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/contextlib.py", line 113, in __enter__
   return next(self.gen)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1478, in _socket_for_reads
    server = self._select_server(read_preference, session)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1436, in _select_server
    server = topology.select_server(server_selector)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 250, in select_server
    return random.choice(self.select_servers(selector, server_selection_timeout, address))
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 211, in select_servers
    server_descriptions = self._select_servers_loop(selector, server_timeout, address)
  File "/home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 226, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused, Timeout: 20.0s, Topology Description: <TopologyDescription id: 66b5078d5f38774def660696, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused')>]>
[2024-08-08T14:00:52-04:00] Error configuring database. Most recent database log lines:
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 16:37:51:8
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 19:48:01:12
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 21:26:22:1
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  7 23:04:33:11
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  8 02:14:58:1
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] Placing a marker at optime Aug  8 03:53:03:3
2024-08-08T13:59:38.891-0400 I STORAGE  [initandlisten] WiredTiger record store oplog processing took 2ms
 Version: Unable to find metadata for table:collection-0-7580726846594704023
2024-08-08T13:59:38.891-0400 F -        [initandlisten] Fatal Assertion 34433 at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp 644
2024-08-08T13:59:38.891-0400 F -        [initandlisten] \n\n***aborting after fassert() failure\n\n

Following advice from others who had similar issues, I ran cryosparcm configuredb (output above), but that did not seem to change anything. I also have the following info in case it’s relevant:


(base) neher-lab@Diana:~$ cryosparcm status
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/home/neher-lab/cryosparc/cryosparc_master
Current cryoSPARC version: v4.3.1
----------------------------------------------------------------------------

CryoSPARC process status:

app                              STOPPED   Not started
app_api                          STOPPED   Not started
app_api_dev                      STOPPED   Not started
app_legacy                       STOPPED   Not started
app_legacy_dev                   STOPPED   Not started
command_core                     STOPPED   Not started
command_rtp                      STOPPED   Not started
command_vis                      STOPPED   Not started
database                         STOPPED   Not started

----------------------------------------------------------------------------
*** CommandClient: (http://Diana:39002/api) URL Error [Errno 111] Connection refused
An error ocurred while checking license status
Could not get license verification status. Are all CryoSPARC processes RUNNING?
(base) neher-lab@Diana:~$ grep HOSTNAME /home/neher-lab/cryosparc/cryosparc_master/config.sh
export CRYOSPARC_MASTER_HOSTNAME="Diana"
(base) neher-lab@Diana:~$ hostname -f
Diana
(base) neher-lab@Diana:~$ host $(hostname -f)
Diana has address 127.0.1.1
(base) neher-lab@Diana:~$ grep ^127 /etc/hosts
127.0.0.1       localhost
127.0.1.1       Diana
(base) neher-lab@Diana:~$ curl pop-os.sickkids.ca:39001
curl: (6) Could not resolve host: pop-os.sickkids.ca
(base) neher-lab@Diana:~$ curl 127.0.0.1:39001
curl: (7) Failed to connect to 127.0.0.1 port 39001 after 0 ms: Couldn't connect to server
(base) neher-lab@Diana:~$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
neher-l+    4981    1651 13:57:44 python /home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/neher-lab/cryosparc/cryosparc_master/supervisord.conf
neher-l+    6221    3770 14:06:21 grep --color=auto -e cryosparc_ -e mongo
(base) neher-lab@Diana:~$ ls -l /tmp/cryosparc*sock /tmp/mongo*sock
srwx------ 1 neher-lab neher-lab 0 Aug  8 13:57 /tmp/cryosparc-supervisor-8cb819b607e863420adac1a8efe81e55.sock
srwx------ 1 neher-lab neher-lab 0 Aug  8 13:59 /tmp/mongodb-39001.sock

This might sound odd, and you have probably tried it already, but we have had several similar cases where repeated cycles of cryosparcm stop and cryosparcm start fixed database issues (making sure to kill any zombie cryosparc processes after each stop). We saw database errors on the first and second starts, and then somehow the database worked on the third. Good luck!
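For reference, a minimal sketch of that cycle, combined with the stale-socket cleanup mentioned at the top of this thread (the socket names below are the ones shown in your ls output; only remove them once nothing CryoSPARC-related is still running):

cryosparcm stop
# check for leftover cryosparc/mongod processes and kill any that remain
ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
# kill <pid>    # for each leftover supervisord or mongod process
# remove stale socket files left behind in /tmp
rm /tmp/cryosparc-supervisor-*.sock /tmp/mongodb-*.sock
# try again; repeat the cycle if the database still fails to come up
cryosparcm start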

Unfortunately that didn’t work; we’ll keep looking for solutions. Thank you so much!

@akw Please can you post the outputs of these commands

ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryosparcm log database | grep error | tail -n 60
grep -v LICENSE /home/neher-lab/cryosparc/cryosparc_master/config.sh

By the way, the curl pop-os.sickkids.ca:39001 command does not apply to this case, because $CRYOSPARC_MASTER_HOSTNAME is Diana, not pop-os.sickkids.ca.
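For this host, the equivalent connectivity check would target the actual master hostname and the database port shown in the logs above, for example:

curl Diana:39001
curl $(hostname -f):39001    # equivalent here, since hostname -f returns Diana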

(base) neher-lab@Diana:~$ ps eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
neher-l+   13085    8755 16:07:26 ps eo user,pid,ppid,start,command SHELL=/bin/bash SESSION_MANAGER=local/Diana:@/tmp/.ICE-unix/7484,unix/Diana:/tmp/.ICE-unix/7484 WINDOWID=44040228 QT_ACCESSIBILITY=1 XDG_CONFIG_DIRS=/etc/xdg/xdg-ubuntu:/etc/xdg XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session2 XDG_MENU_PREFIX=gnome- GNOME_DESKTOP_SESSION_ID=this-is-deprecated CONDA_EXE=/home/neher-lab/anaconda3/bin/conda _CE_M= XTERM_VERSION=XTerm(353) LANGUAGE=en_US:en MANDATORY_PATH=/usr/share/gconf/ubuntu.mandatory.path LC_ADDRESS=en_GB.UTF-8 GNOME_SHELL_SESSION_MODE=ubuntu LC_NAME=en_GB.UTF-8 SSH_AUTH_SOCK=/run/user/1000/keyring/ssh XMODIFIERS=@im=ibus DESKTOP_SESSION=ubuntu LC_MONETARY=en_GB.UTF-8 SSH_AGENT_PID=7449 XTERM_SHELL=/bin/bash GTK_MODULES=gail:atk-bridge PWD=/home/neher-lab GSETTINGS_SCHEMA_DIR=/home/neher-lab/anaconda3/share/glib-2.0/schemas XDG_SESSION_DESKTOP=ubuntu LOGNAME=neher-lab XDG_SESSION_TYPE=x11 CONDA_PREFIX=/home/neher-lab/anaconda3 GPG_AGENT_INFO=/run/user/1000/gnupg/S.gpg-agent:0:1 GSETTINGS_SCHEMA_DIR_CONDA_BACKUP= XAUTHORITY=/home/neher-lab/.Xauthority XDG_GREETER_DATA_DIR=/var/lib/lightdm-data/neher-lab IMOD_CALIB_DIR=/usr/local/ImodCalib GJS_DEBUG_TOPICS=JS ERROR;JS LOG GDM_LANG=en_US HOME=/home/neher-lab IM_CONFIG_PHASE=1 LC_PAPER=en_GB.UTF-8 LANG=en_US.UTF-8 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36: XDG_CURRENT_DESKTOP=ubuntu:GNOME IMOD_DIR=/usr/local/IMOD CONDA_PROMPT_MODIFIER=(base)  XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0 IMOD_JAVADIR=/usr/local/java INVOCATION_ID=da92b72ace8c412599e7bc6dd5c9c9d3 IMOD_PLUGIN_DIR=/usr/local/IMOD/lib/imodplug XTERM_LOCALE=en_US.UTF-8 MANAGERPID=1699 FOR_DISABLE_STACK_TRACE=1 GJS_DEBUG_OUTPUT=stderr LESSCLOSE=/usr/bin/lesspipe %s %s XDG_SESSION_CLASS=user TERM=xterm LC_IDENTIFICATION=en_GB.UTF-8 _CE_CONDA= DEFAULTS_PATH=/usr/share/gconf/ubuntu.default.path IMOD_QTLIBDIR=/usr/local/IMOD/qtlib LESSOPEN=| /usr/bin/lesspipe %s USER=neher-lab CONDA_SHLVL=1 DISPLAY=:0 SHLVL=0 LC_TELEPHONE=en_GB.UTF-8 QT_IM_MODULE=ibus LC_MEASUREMENT=en_GB.UTF-8 PAPERSIZE=a4 
CONDA_PYTHON_EXE=/home/neher-lab/anaconda3/bin/python LD_LIBRARY_PATH=/usr/local/lib: XDG_RUNTIME_DIR=/run/user/1000 CONDA_DEFAULT_ENV=base LC_TIME=en_GB.UTF-8 JOURNAL_STREAM=8:77525 XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share:/usr/share:/var/lib/snapd/desktop PATH=/home/neher-lab/cryosparc/cryosparc_master/bin:/home/neher-lab/cryosparc/cryosparc_master/bin:/home/neher-lab/cryosparc_master/bin:/home/neher-lab/cryosparc_master/bin:/home/neher-lab/cryosparc/cryosparc_master/bin:/home/neher-lab/anaconda3/bin:/home/neher-lab/anaconda3/condabin:/home/neher-lab/.local/bin:/home/neher-lab/bin:/usr/local/IMOD/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/IMOD/pythonLink GDMSESSION=ubuntu DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus GIO_LAUNCHED_DESKTOP_FILE_PID=8753 GIO_LAUNCHED_DESKTOP_FILE=/usr/share/applications/debian-xterm.desktop LC_NUMERIC=en_GB.UTF-8 _=/usr/bin/ps
neher-l+   13086    8755 16:07:26 grep --color=auto -e cryosparc_ -e mongo SHELL=/bin/bash SESSION_MANAGER=local/Diana:@/tmp/.ICE-unix/7484,unix/Diana:/tmp/.ICE-unix/7484 WINDOWID=44040228 QT_ACCESSIBILITY=1 XDG_CONFIG_DIRS=/etc/xdg/xdg-ubuntu:/etc/xdg XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session2 XDG_MENU_PREFIX=gnome- GNOME_DESKTOP_SESSION_ID=this-is-deprecated CONDA_EXE=/home/neher-lab/anaconda3/bin/conda _CE_M= XTERM_VERSION=XTerm(353) LANGUAGE=en_US:en MANDATORY_PATH=/usr/share/gconf/ubuntu.mandatory.path LC_ADDRESS=en_GB.UTF-8 GNOME_SHELL_SESSION_MODE=ubuntu LC_NAME=en_GB.UTF-8 SSH_AUTH_SOCK=/run/user/1000/keyring/ssh XMODIFIERS=@im=ibus DESKTOP_SESSION=ubuntu LC_MONETARY=en_GB.UTF-8 SSH_AGENT_PID=7449 XTERM_SHELL=/bin/bash GTK_MODULES=gail:atk-bridge PWD=/home/neher-lab GSETTINGS_SCHEMA_DIR=/home/neher-lab/anaconda3/share/glib-2.0/schemas XDG_SESSION_DESKTOP=ubuntu LOGNAME=neher-lab XDG_SESSION_TYPE=x11 CONDA_PREFIX=/home/neher-lab/anaconda3 GPG_AGENT_INFO=/run/user/1000/gnupg/S.gpg-agent:0:1 GSETTINGS_SCHEMA_DIR_CONDA_BACKUP= XAUTHORITY=/home/neher-lab/.Xauthority XDG_GREETER_DATA_DIR=/var/lib/lightdm-data/neher-lab IMOD_CALIB_DIR=/usr/local/ImodCalib GJS_DEBUG_TOPICS=JS ERROR;JS LOG GDM_LANG=en_US HOME=/home/neher-lab IM_CONFIG_PHASE=1 LC_PAPER=en_GB.UTF-8 LANG=en_US.UTF-8 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36: XDG_CURRENT_DESKTOP=ubuntu:GNOME IMOD_DIR=/usr/local/IMOD CONDA_PROMPT_MODIFIER=(base)  XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0 IMOD_JAVADIR=/usr/local/java INVOCATION_ID=da92b72ace8c412599e7bc6dd5c9c9d3 IMOD_PLUGIN_DIR=/usr/local/IMOD/lib/imodplug XTERM_LOCALE=en_US.UTF-8 MANAGERPID=1699 FOR_DISABLE_STACK_TRACE=1 GJS_DEBUG_OUTPUT=stderr LESSCLOSE=/usr/bin/lesspipe %s %s XDG_SESSION_CLASS=user TERM=xterm LC_IDENTIFICATION=en_GB.UTF-8 _CE_CONDA= DEFAULTS_PATH=/usr/share/gconf/ubuntu.default.path IMOD_QTLIBDIR=/usr/local/IMOD/qtlib LESSOPEN=| /usr/bin/lesspipe %s USER=neher-lab CONDA_SHLVL=1 DISPLAY=:0 SHLVL=0 LC_TELEPHONE=en_GB.UTF-8 QT_IM_MODULE=ibus LC_MEASUREMENT=en_GB.UTF-8 PAPERSIZE=a4 
CONDA_PYTHON_EXE=/home/neher-lab/anaconda3/bin/python LD_LIBRARY_PATH=/usr/local/lib: XDG_RUNTIME_DIR=/run/user/1000 CONDA_DEFAULT_ENV=base LC_TIME=en_GB.UTF-8 JOURNAL_STREAM=8:77525 XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share:/usr/share:/var/lib/snapd/desktop PATH=/home/neher-lab/cryosparc/cryosparc_master/bin:/home/neher-lab/cryosparc/cryosparc_master/bin:/home/neher-lab/cryosparc_master/bin:/home/neher-lab/cryosparc_master/bin:/home/neher-lab/cryosparc/cryosparc_master/bin:/home/neher-lab/anaconda3/bin:/home/neher-lab/anaconda3/condabin:/home/neher-lab/.local/bin:/home/neher-lab/bin:/usr/local/IMOD/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/IMOD/pythonLink GDMSESSION=ubuntu DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus GIO_LAUNCHED_DESKTOP_FILE_PID=8753 GIO_LAUNCHED_DESKTOP_FILE=/usr/share/applications/debian-xterm.desktop LC_NUMERIC=en_GB.UTF-8 _=/usr/bin/grep
(base) neher-lab@Diana:~$ cryosparcm log database | grep error | tail -n 60
2023-06-03T02:44:00.352-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1685774640:352963][13406:0x7f69afba6700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 56638, went negative with decrement of 56734 Raw: [1685774640:352963][13406:0x7f69afba6700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 56638, went negative with decrement of 56734
2023-06-22T05:17:12.691-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1687425432:691557][6196:0x7f1de7888700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 67972, went negative with decrement of 68068 Raw: [1687425432:691557][6196:0x7f1de7888700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 67972, went negative with decrement of 68068
2023-07-07T15:56:46.533-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1688759806:519241][6234:0x7f65dfbe4700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 56836, went negative with decrement of 56932 Raw: [1688759806:519241][6234:0x7f65dfbe4700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 56836, went negative with decrement of 56932
2023-07-17T14:19:54.668-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1689617994:668189][626229:0x7f2b823bb700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 57482, went negative with decrement of 68266 Raw: [1689617994:668189][626229:0x7f2b823bb700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 57482, went negative with decrement of 68266
2023-10-11T06:10:14.228-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1697019014:228776][19597:0x7f3f0fe73700], file:collection-4--1032332849834592155.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 12229, went negative with decrement of 12453 Raw: [1697019014:228776][19597:0x7f3f0fe73700], file:collection-4--1032332849834592155.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 12229, went negative with decrement of 12453
2023-11-13T04:02:38.661-0500 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1699866158:647756][19597:0x7f3f0fe73700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 57962, went negative with decrement of 68842 Raw: [1699866158:647756][19597:0x7f3f0fe73700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 57962, went negative with decrement of 68842
2024-03-14T18:39:00.320-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1710455940:320317][84979:0x7f6db1aad700], file:collection-4--1032332849834592155.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 7801, went negative with decrement of 8025 Raw: [1710455940:320317][84979:0x7f6db1aad700], file:collection-4--1032332849834592155.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 7801, went negative with decrement of 8025
2024-06-17T23:47:51.034-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1718682471:34082][4405:0x7f31b432f700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 68938, went negative with decrement of 69034 Raw: [1718682471:34082][4405:0x7f31b432f700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 68938, went negative with decrement of 69034
2024-07-16T08:50:19.566-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1721134219:566802][1499442:0x7f38dde40700], file:collection-4--1032332849834592155.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 7685, went negative with decrement of 7909 Raw: [1721134219:566802][1499442:0x7f38dde40700], file:collection-4--1032332849834592155.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 7685, went negative with decrement of 7909
2024-07-24T13:07:02.824-0400 E STORAGE  [WTOplogJournalThread] WiredTiger error (0) [1721840822:824056][6703:0x7f5b548ac700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 57962, went negative with decrement of 68842 Raw: [1721840822:824056][6703:0x7f5b548ac700], file:collection-26--118266353504953689.wt, WT_SESSION.checkpoint: __wt_cache_decr_check_uint64, 219: WT_BTREE.bytes_dirty_leaf was 57962, went negative with decrement of 68842
Binary file (standard input) matches

(base) neher-lab@Diana:~$ grep -v LICENSE /home/neher-lab/cryosparc/cryosparc_master/config.sh

export CRYOSPARC_MASTER_HOSTNAME="Diana"
export CRYOSPARC_DB_PATH="/home/neher-lab/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true

@akw Please can you repeat the command, but with a dash (-) inserted before the eo options of ps:
ps -eo instead of ps eo
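For context, the dash changes how ps interprets its options:

# standard (UNIX) syntax: -e selects every process, -o sets the output columns
ps -eo user,pid,ppid,start,command
# BSD-style syntax (no dash): "e" instead means "append each process's environment",
# which is why the earlier output included the long dump of environment variables
ps eo user,pid,ppid,start,command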

(base) neher-lab@Diana:~$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
neher-l+    9446    1699 14:49:07 python /home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/neher-lab/cryosparc/cryosparc_master/supervisord.conf
neher-l+    9550    9446 14:49:14 mongod --auth --dbpath /home/neher-lab/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --nojournal --wiredTigerCacheSizeGB 4 --bind_ip_all
neher-l+    9651    9446 14:49:17 python -c import cryosparc_command.command_core as serv; serv.start(port=39002)
neher-l+    9674    9446 14:49:25 python -c import cryosparc_command.command_vis as serv; serv.start(port=39003)
neher-l+    9679    9446 14:49:26 python -c import cryosparc_command.command_rtp as serv; serv.start(port=39005)
neher-l+    9744    9446 14:49:31 /home/neher-lab/cryosparc/cryosparc_master/cryosparc_app/api/nodejs/bin/node ./bundle/main.js
neher-l+   14407    8755 16:31:24 grep --color=auto -e cryosparc_ -e mongo

Apologies, and thanks for your quick response.

This looks normal to me. Please can you run this sequence of commands and post its output:

date
last reboot
ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
ls -l /tmp/mongo*sock /tmp/cryosparc*sock
env | grep CRYOSPARC
cryosparcm log database | tail -n 20
cryosparcm status
date

In case none of the commands throws an error, please let us know whether the CryoSPARC app currently performs normally.

Hi, I don’t know exactly what was done to this workstation in my absence (multiple people have been working on this problem), but CryoSPARC now appears to start normally from the command line, and navigating to the application in the browser takes me to a login page, which now says “user not found” in response to my saved login information.

If I create a new account, what is going to happen with all the saved data/projects/workspaces? We still have all of the folders locally on this machine.

Here are the outputs of the commands you requested:

(base) neher-lab@Diana:~$ date
Fri  9 Aug 17:22:18 EDT 2024
(base) neher-lab@Diana:~$ last reboot
reboot   system boot  5.15.0-73-generi Fri Aug  9 14:24   still running
reboot   system boot  5.15.0-73-generi Fri Aug  9 12:02 - 14:23  (02:21)
reboot   system boot  5.15.0-73-generi Fri Aug  9 11:38 - 12:01  (00:23)
reboot   system boot  5.15.0-73-generi Fri Aug  9 11:31 - 11:37  (00:06)
reboot   system boot  5.15.0-73-generi Thu Aug  8 13:33 - 11:30  (21:57)
reboot   system boot  5.15.0-73-generi Thu Aug  8 13:12 - 13:31  (00:19)
reboot   system boot  5.15.0-73-generi Thu Aug  8 13:02 - 13:11  (00:08)
reboot   system boot  5.15.0-73-generi Thu Aug  8 12:56 - 13:11  (00:14)
reboot   system boot  5.15.0-73-generi Tue Jul 23 09:51 - 13:11 (16+03:19)

(plus many more entries that I don’t think are needed; please let me know if I should include them)

(base) neher-lab@Diana:~$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
neher-l+    9446    1699 14:49:07 python /home/neher-lab/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/neher-lab/cryosparc/cryosparc_master/supervisord.conf
neher-l+    9550    9446 14:49:14 mongod --auth --dbpath /home/neher-lab/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --nojournal --wiredTigerCacheSizeGB 4 --bind_ip_all
neher-l+    9651    9446 14:49:17 python -c import cryosparc_command.command_core as serv; serv.start(port=39002)
neher-l+    9674    9446 14:49:25 python -c import cryosparc_command.command_vis as serv; serv.start(port=39003)
neher-l+    9679    9446 14:49:26 python -c import cryosparc_command.command_rtp as serv; serv.start(port=39005)
neher-l+    9744    9446 14:49:31 /home/neher-lab/cryosparc/cryosparc_master/cryosparc_app/api/nodejs/bin/node ./bundle/main.js
neher-l+   15780    8755 17:23:35 grep --color=auto -e cryosparc_ -e mongo
(base) neher-lab@Diana:~$ ls -l /tmp/mongo*sock /tmp/cryosparc*sock
srwx------ 1 neher-lab neher-lab 0 Aug  9 14:49 /tmp/cryosparc-supervisor-8cb819b607e863420adac1a8efe81e55.sock
srwx------ 1 neher-lab neher-lab 0 Aug  9 14:49 /tmp/mongodb-39001.sock

(base) neher-lab@Diana:~$ env | grep CRYOSPARC
(no output)

(base) neher-lab@Diana:~$ cryosparcm log database | tail -n 20
2024-08-09T14:49:34.052-0400 I INDEX    [conn32] build index on: meteor.users properties: { v: 2, key: { services.password.reset.when: 1 }, name: "services.password.reset.when_1", ns: "meteor.users", sparse: true }
2024-08-09T14:49:34.052-0400 I INDEX    [conn32]         building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2024-08-09T14:49:34.052-0400 W STORAGE  [conn32] failed to create WiredTiger bulk cursor: Device or resource busy
2024-08-09T14:49:34.052-0400 W STORAGE  [conn32] falling back to non-bulk cursor for index table:index-99--6453079726582276988
2024-08-09T14:49:34.052-0400 I INDEX    [conn32] build index done.  scanned 0 total records. 0 secs
2024-08-09T14:49:34.052-0400 I COMMAND  [conn32] command meteor.$cmd command: createIndexes { createIndexes: "users", indexes: [ { sparse: true, name: "services.password.reset.when_1", key: { services.password.reset.when: 1 } } ], lsid: { id: UUID("be7c0365-b1a1-4ca5-a426-8a8d2353f49e") }, $clusterTime: { clusterTime: Timestamp(1723229373, 2), signature: { hash: BinData(0, 60A36CE463E946C5EB011794C724B784E5932F32), keyId: 7401213718937206786 } }, $db: "meteor" } numYields:0 reslen:223 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 3, W: 1 }, acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 48808, W: 159968 } }, Collection: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_msg 227ms
2024-08-09T14:49:34.077-0400 I INDEX    [conn33] build index on: meteor.users properties: { v: 2, key: { services.password.enroll.when: 1 }, name: "services.password.enroll.when_1", ns: "meteor.users", sparse: true }
2024-08-09T14:49:34.077-0400 I INDEX    [conn33]         building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2024-08-09T14:49:34.077-0400 W STORAGE  [conn33] failed to create WiredTiger bulk cursor: Device or resource busy
2024-08-09T14:49:34.077-0400 W STORAGE  [conn33] falling back to non-bulk cursor for index table:index-100--6453079726582276988
2024-08-09T14:49:34.077-0400 I INDEX    [conn33] build index done.  scanned 0 total records. 0 secs
2024-08-09T14:49:34.078-0400 I COMMAND  [conn33] command meteor.$cmd command: createIndexes { createIndexes: "users", indexes: [ { sparse: true, name: "services.password.enroll.when_1", key: { services.password.enroll.when: 1 } } ], lsid: { id: UUID("a0e3914c-4448-49c2-9a7c-f4c739e9dfd6") }, $clusterTime: { clusterTime: Timestamp(1723229373, 2), signature: { hash: BinData(0, 60A36CE463E946C5EB011794C724B784E5932F32), keyId: 7401213718937206786 } }, $db: "meteor" } numYields:0 reslen:223 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 3, W: 1 }, acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 46768, W: 179073 } }, Collection: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_msg 250ms
2024-08-09T14:49:44.440-0400 I NETWORK  [listener] connection accepted from 127.0.0.1:54888 #37 (29 connections now open)
2024-08-09T14:49:44.443-0400 I NETWORK  [conn37] received client metadata from 127.0.0.1:54888 conn37: { driver: { name: "nodejs", version: "4.9.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.15.0-73-generic" }, platform: "Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)" }
2024-08-09T14:49:44.462-0400 I ACCESS   [conn37] Successfully authenticated as principal cryosparc_user on admin from client 127.0.0.1:54888
2024-08-09T14:54:15.860-0400 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2024-08-09T14:54:15.860-0400 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: a8da050c-93a9-4deb-9fc9-a73428cd86e2
2024-08-09T14:54:15.914-0400 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2024-08-09T14:54:15.914-0400 I INDEX    [LogicalSessionCacheRefresh]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2024-08-09T14:54:15.916-0400 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
(base) neher-lab@Diana:~$ cryosparcm status
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/home/neher-lab/cryosparc/cryosparc_master
Current cryoSPARC version: v4.3.1
----------------------------------------------------------------------------

CryoSPARC process status:

app                              RUNNING   pid 9725, uptime 2:35:23
app_api                          RUNNING   pid 9744, uptime 2:35:21
app_api_dev                      STOPPED   Not started
app_legacy                       STOPPED   Not started
app_legacy_dev                   STOPPED   Not started
command_core                     RUNNING   pid 9651, uptime 2:35:35
command_rtp                      RUNNING   pid 9679, uptime 2:35:26
command_vis                      RUNNING   pid 9674, uptime 2:35:27
database                         RUNNING   pid 9550, uptime 2:35:38

----------------------------------------------------------------------------
License is valid
----------------------------------------------------------------------------

global config variables:
export CRYOSPARC_LICENSE_ID="26edbde0-b8de-11eb-b6fd-cb179b9d4921"
export CRYOSPARC_MASTER_HOSTNAME="Diana"
export CRYOSPARC_DB_PATH="/home/neher-lab/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true

If you can find out what was done to the installation, a clearer idea of its current state would accelerate troubleshooting and recovery.

How many users are shown when you run the command

cryosparcm listusers

and how many users would you expect to be shown?
What is the output of the commands

cryosparcm icli # enter CryoSPARC python cli
[p['uid'] for p in cli.list_projects()]
exit() # exit cli

?

Yes, I agree. I am trying to contact the school IT department to figure out what was tried.

On this workstation I would have expected there to be one user listed, but there are zero as far as I can see:

(base) neher-lab@Diana:~$ cryosparcm listusers
| Name                    | Email                             | Admin    | ID                            |
----------------------------------------------------------------------------------------------------------

(base) neher-lab@Diana:~$ cryosparcm icli # enter CryoSPARC python cli
Python 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:49:35) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.33.0 -- An enhanced Interactive Python. Type '?' for help.

 connecting to Diana:39002 ...
 cli, rtp, db, gfs and tools ready to use

In [1]: [p['uid'] for p in cli.list_projects()]
Out[1]: []

In [2]: exit() # exit cli

Is there a way to recover our user information?

Hi again,

I’ve discovered that at some point a new cryosparc_database directory was created, which is why the database status error no longer appears and the browser interface is accessible again (albeit only to the login screen, which is where we’re now stuck). It still says “user not found” in response to our usual credentials. Would you recommend we create a new account with the same email address and information as before? One of the fields on the account-creation page asks for a token; is that the same as the license ID? I have the license ID from the initial install, but I don’t see any written record of a ‘token’.

Thank you for your time.

This description suggests that CryoSPARC has been started with a blank database: no error, but also no contents :frowning_face: . In this case, you may rebuild the database from scratch, as described in Cryosparc won't start after migrating the database to a new location - #9 by wtempel
Before doing this, you may want to upgrade CryoSPARC to the latest version and apply the latest patch for that version.
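For orientation only, a rough outline of what such a from-scratch rebuild tends to involve on a standalone setup like this one (the linked post is authoritative; the paths below are taken from this thread, and the createuser details are illustrative placeholders):

cryosparcm stop
# set the old database aside rather than deleting it, in case it is needed later
mv /home/neher-lab/cryosparc/cryosparc_database /home/neher-lab/cryosparc/cryosparc_database.bak
mkdir /home/neher-lab/cryosparc/cryosparc_database    # path from CRYOSPARC_DB_PATH in config.sh
cryosparcm start
# recreate an admin user, then re-attach existing project directories from the web interface
cryosparcm createuser --email "<email>" --password "<password>" --username "<username>" --firstname "<first>" --lastname "<last>"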
If the CryoSPARC installation remains at version 4.3.1, I strongly recommend ensuring that the file

/home/neher-lab/cryosparc/cryosparc_master/config.sh

includes the line

export CRYOSPARC_MONGO_EXTRA_FLAGS=" "

The space between the quotes is important. CryoSPARC needs to be restarted whenever cryosparc_master/config.sh is changed.
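Concretely, for the install path shown earlier in this thread, that could look like:

# append the flag to the master config (adjust the path if yours differs)
echo 'export CRYOSPARC_MONGO_EXTRA_FLAGS=" "' >> /home/neher-lab/cryosparc/cryosparc_master/config.sh
# restart so the change takes effect
cryosparcm restart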

Hi wtempel,

I went through this process but initially skipped step 4 in your linked guide; I realize now I should not have skipped that step (when I tried to run a job, it said I needed to connect a worker).

I was able to make a new user and reattach our existing projects in CryoSPARC v4.3.1. But now when I try the cryosparcw commands, the shell says the command isn’t recognized, and ‘which cryosparcw’ returns nothing. Is there another step, when starting over with a blank database, where the worker package has to be reinstalled?

cryosparcw is probably not on your environment’s path, and does not need to be. You can simply call the cryosparcw command by its absolute path. Based on the path to your cryosparc_master/ directory, the absolute path may, but does not have to, be
/home/neher-lab/cryosparc/cryosparc_worker/bin/cryosparcw
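A quick way to confirm the worker package is actually installed at that (assumed) location:

ls -l /home/neher-lab/cryosparc/cryosparc_worker/bin/cryosparcw
# if the file is missing, the worker package was never installed and would need to be
# installed before any cryosparcw connect command can be run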

Thanks wtempel, that did work to create the connection. However, jobs are still failing with the following message:

License is valid.

Launching job on lane default target localhost …

Running job on remote worker node hostname localhost

Failed to launch! 255
ssh: connect to host localhost port 22: Connection refused

The worker connection that I think was successful gave me the following output:

---------------------------------------------------------------
  CRYOSPARC CONNECT --------------------------------------------
 ---------------------------------------------------------------
  Attempting to register worker localhost to command localhost:39002
  Connecting as unix user neher-lab
  Will register using ssh string: neher-lab@localhost
  If this is incorrect, you should re-run this command with the flag --sshstr <ssh string> 
 ---------------------------------------------------------------
  Connected to master.
 ---------------------------------------------------------------
  Current connected workers:
 ---------------------------------------------------------------
  Autodetecting available GPUs...
  Detected 2 CUDA devices.

   id           pci-bus  name
   ---------------------------------------------------------------
       0      0000:23:00.0  NVIDIA GeForce RTX 2080 Ti
       1      0000:2D:00.0  NVIDIA GeForce RTX 2080 Ti
   ---------------------------------------------------------------
   All devices will be enabled now. 
   This can be changed later using --update
 ---------------------------------------------------------------
  Worker will be registered with SSD cache location /mnt/ssd 
 ---------------------------------------------------------------
  Autodetecting the amount of RAM available...
  This machine has 64.25GB RAM .
 ---------------------------------------------------------------
 ---------------------------------------------------------------
  Registering worker...
  Done.

  You can now launch jobs on the master node and they will be scheduled
  on to this worker node if resource requirements are met.
 ---------------------------------------------------------------
  Final configuration for localhost
               cache_path :  /mnt/ssd
           cache_quota_mb :  None
         cache_reserve_mb :  10000
                     desc :  None
                     gpus :  [{'id': 0, 'mem': 11554848768, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 1, 'mem': 11551571968, 'name': 'NVIDIA GeForce RTX 2080 Ti'}]
                 hostname :  localhost
                     lane :  default
             monitor_port :  None
                     name :  localhost
           resource_fixed :  {'SSD': True}
           resource_slots :  {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], 'GPU': [0, 1], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7]}
                  ssh_str :  neher-lab@localhost
                    title :  Worker node localhost
                     type :  node
          worker_bin_path :  /home/neher-lab/cryosparc/cryosparc_worker/bin/cryosparcw
 ---------------------------------------------------------------

Does something about this raise any red flags?

Thanks again for your time.

@akw Assuming:

  • this is a CryoSPARC instance with master/worker combined on a single host
  • the hostname is Diana
  • the base port number is 39000

If so, you could try:
/home/neher-lab/cryosparc/cryosparc_worker/bin/cryosparcw connect --master Diana --worker Diana --port 39000 --ssdpath /mnt/ssd
cryosparcm cli "remove_scheduler_target_node('localhost')"

Under the assumptions above, the first command would add the same worker again, this time using the hostname that matches $CRYOSPARC_MASTER_HOSTNAME. With matching master and worker hostnames, the need for a password-less ssh connection is circumvented.
The second command would remove the older worker configuration, which might have worked if a password-less (key-based) ssh connection had been configured.
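After running both commands, the registered workers can be verified; under the same assumptions, something like the following should list a Diana target and no localhost target:

cryosparcm cli "get_scheduler_targets()"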

1 Like