Error updating CryoSPARC v5.0.1 to v5.0.4

Last night I tried to update CryoSPARC from v5.0.1 to v5.0.4, and the update failed with an error. I tried cleanly stopping all CryoSPARC jobs and checking for stale /tmp/*.sock files, but it still does not work. Could anyone help with this?

cryosparc@jyh3c:/home/jy_lab$ cryosparcm update --version=v5.0.4
----------------------
CryoSPARC update to v5.0.4
Starting on Mon 06 Apr 2026 22:57:14 CST
----------------------
Removing previous downloads...
Downloading master update...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    99    0     0  100    99      0     69  0:00:01  0:00:01 --:--:--    69
100  609M  100  609M    0     0  50186      0  3:32:07  3:32:07 --:--:--  439k
Downloading worker update...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    99    0     0  100    99      0     20  0:00:04  0:00:04 --:--:--    20
100 4080M  100 4080M    0     0  2650k      0  0:26:16  0:26:16 --:--:-- 4236k
Done.

Update will now be applied to the master installation, 
followed by any connected workers.

CryoSPARC is not already running.
Deleting old files...
Extracting...
Done.
----------------------------
Installing dependencies... 
----------------------------
Checking python dependency...
Installing python dependency...
Extracting all packages...
tar: 102389760 (98MiB, 1.8MiB/s)
tar: 204789760 (196MiB, 2.7MiB/s)
tar: 307189760 (293MiB, 3.7MiB/s)
tar: 409589760 (391MiB, 4.8MiB/s)
tar: 511989760 (489MiB, 2.4MiB/s)
tar: 614389760 (586MiB, 2.1MiB/s)
tar: 716789760 (684MiB, 2.5MiB/s)
tar: 819189760 (782MiB, 2.8MiB/s)
tar: 921589760 (879MiB, 3.1MiB/s)
tar: 1023989760 (977MiB, 2.9MiB/s)
tar: 1126389760 (1.1GiB, 2.7MiB/s)

Python packages installation successful.
python installation completed.
-------------------------------------
Checking mongodb dependency...
mongodb dependency has not changed.
-------------------------------------
Dependencies installed. 
Successfully updated master installation files to version v5.0.4.
-------------------------------------
Attempting initial startup of database and core services...
Starting CryoSPARC master services...
CryoSPARC is not already running.
Configuring database... Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
OK
database: started
Checking database... 
Traceback (most recent call last):
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/cli/database.py", line 46, in check
    database_management.check_mongo()
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/core/database_management.py", line 288, in check_mongo
    admin_db = try_get_pymongo_db(mongo_client)
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/core/database_management.py", line 262, in try_get_pymongo_db
    admin_db.command(({"serverStatus": 1}))
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/_csot.py", line 119, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/database.py", line 926, in command
    with self._client._conn_for_reads(read_preference, session, op...
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/mongo_client.py", line 1701, in _conn_for_reads
    server = self._select_server(read_preference, session, operati...
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/mongo_client.py", line 1649, in _select_server
    server = topology.select_server(
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/topology.py", line 398, in select_server
    server = self._select_server(
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/topology.py", line 376, in _select_server
    servers = self.select_servers(
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/topology.py", line 283, in select_servers
    server_descriptions = self._select_servers_loop(
  File "/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymongo/synchronous/topology.py", line 333, in _select_servers_loop
    raise ServerSelectionTimeoutError(
--------------------------------
ServerSelectionTimeoutError: jyh3c:39001: [Errno 111] Connection refused 
(configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), 
Timeout: 30.0s, Topology Description: <TopologyDescription id: 
69d405313268109c026468f2, topology_type: Unknown, servers: [<ServerDescription 
('jyh3c', 39001) server_type: Unknown, rtt: None, 
error=AutoReconnect('jyh3c:39001: [Errno 111] Connection refused (configured 
timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
2026-04-07T03:12:22.096977 Error checking database; could not get replica set 
status.  Please reconfigure the database with `cryosparcm database configure`.  
Most recent database log lines:
2026-04-07T03:10:40.637+0800 I STORAGE  [initandlisten] 
2026-04-07T03:10:40.637+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2026-04-07T03:10:40.637+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-04-07T03:10:40.637+0800 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4096M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2026-04-07T03:10:42.848+0800 I STORAGE  [initandlisten] WiredTiger message [1775502642:848307][1936425:0x79cd952f7a40], txn-recover: Main recovery loop: starting at 17116/896 to 17117/256
2026-04-07T03:10:42.902+0800 I STORAGE  [initandlisten] WiredTiger message [1775502642:902978][1936425:0x79cd952f7a40], txn-recover: Recovering log 17116 through 17117
2026-04-07T03:10:43.864+0800 I STORAGE  [initandlisten] WiredTiger message [1775502643:864970][1936425:0x79cd952f7a40], txn-recover: Recovering log 17117 through 17117
2026-04-07T03:10:43.893+0800 I STORAGE  [initandlisten] WiredTiger message [1775502643:893268][1936425:0x79cd952f7a40], txn-recover: Set global recovery timestamp: 0
2026-04-07T03:10:49.657+0800 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2026-04-07T03:10:49.658+0800 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
2026-04-07T03:10:53.512+0800 I STORAGE  [initandlisten] Starting OplogTruncaterThread local.oplog.rs
2026-04-07T03:10:53.512+0800 I STORAGE  [initandlisten] The size storer reports that the oplog contains 15712 records totaling to 62219025 bytes
2026-04-07T03:10:53.512+0800 I STORAGE  [initandlisten] Sampling the oplog to determine where to place markers for truncation
2026-04-07T03:10:53.514+0800 I STORAGE  [initandlisten] Sampling from the oplog between Apr  5 18:08:48:2 and Apr  7 03:08:44:4 to determine where to place markers for truncation
2026-04-07T03:10:53.514+0800 I STORAGE  [initandlisten] Taking 92 samples and assuming that each section of oplog contains approximately 1695 records totaling to 6712146 bytes
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 12:10:53:1
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 12:59:26:1
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 13:36:28:1
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 14:25:02:1
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 15:01:34:2
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 15:38:36:1
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 16:27:10:2
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 17:04:12:2
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] Placing a marker at optime Apr  6 17:52:19:13
2026-04-07T03:10:53.518+0800 I STORAGE  [initandlisten] WiredTiger record store oplog processing took 5ms
2026-04-07T03:11:08.886+0800 I CONTROL  [initandlisten] 
2026-04-07T03:11:08.886+0800 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
2026-04-07T03:11:08.886+0800 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
2026-04-07T03:11:08.886+0800 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
2026-04-07T03:11:08.886+0800 I CONTROL  [initandlisten] 
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
cryosparcm ERROR: Startup failed! Please address the above error and retry installation with:
    cryosparcm stop
    cryosparcm install 
cryosparc@jyh3c:/home/jy_lab$ cryosparcm stop
Stopping CryoSPARC ...
database: stopped
Shut down
cryosparc@jyh3c:/home/jy_lab$ cryosparcm status
-----------------------------------------
CryoSPARC System master node installed at
/data3/cryosp_projs/cryosparc/cryosparc_master
Current CryoSPARC version: v5.0.4
-----------------------------------------

CryoSPARC is not running.

-----------------------------------------
✓ License is valid
-----------------------------------------

global config variables:
export CRYOSPARC_LICENSE_ID="36038a70-48ec-11ef-ace8-d79aea7fa0e5"
export CRYOSPARC_MASTER_HOSTNAME="jyh3c"
export CRYOSPARC_DB_PATH="/data3/cryosp_projs/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=30000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_SSD_CACHE_LIFETIME_DAYS=5
export CRYOSPARC_FORCE_USER=true

cryosparc@jyh3c:/home/jy_lab$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryospa+ 1936887 1934104 09:53:50 grep --color=auto -e cryosparc_ -e mongo
cryosparc@jyh3c:/home/jy_lab$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
ls: cannot access '/tmp/mongo*.sock': No such file or directory
ls: cannot access '/tmp/cryosparc*.sock': No such file or directory
cryosparc@jyh3c:/home/jy_lab$ cryosparcm log database | tail -n 60
2026-04-07T03:12:33.536+0800 I REPL     [rsSync-0] Starting oplog application
2026-04-07T03:12:33.537+0800 I REPL     [rsSync-0] transition to SECONDARY from RECOVERING
2026-04-07T03:12:33.537+0800 I REPL     [rsSync-0] conducting a dry run election to see if we could be elected. current term: 93
2026-04-07T03:12:33.537+0800 I NETWORK  [listener] connection accepted from 127.0.0.1:49724 #2 (1 connection now open)
2026-04-07T03:12:33.537+0800 I REPL     [replexec-0] dry election run succeeded, running for election in term 94
2026-04-07T03:12:33.538+0800 I REPL     [replexec-0] election succeeded, assuming primary role in term 94
2026-04-07T03:12:33.538+0800 I REPL     [replexec-0] transition to PRIMARY from SECONDARY
2026-04-07T03:12:33.538+0800 I REPL     [replexec-0] Resetting sync source to empty, which was :27017
2026-04-07T03:12:33.538+0800 I REPL     [replexec-0] Entering primary catch-up mode.
2026-04-07T03:12:33.538+0800 I REPL     [replexec-0] Exited primary catch-up mode.
2026-04-07T03:12:33.538+0800 I REPL     [replexec-0] Stopping replication producer
2026-04-07T03:12:33.538+0800 I REPL     [ReplBatcher] Oplog buffer has been drained in term 94
2026-04-07T03:12:33.538+0800 I NETWORK  [conn2] received client metadata from 127.0.0.1:49724 conn2: { driver: { name: "NetworkInterfaceTL", version: "4.0.28" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "22.04" } }
2026-04-07T03:12:33.538+0800 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set meteor
2026-04-07T03:12:33.539+0800 I REPL     [rsSync-0] transition to primary complete; database writes are now permitted
2026-04-07T03:12:34.039+0800 I NETWORK  [LogicalSessionCacheRefresh] localhost:39001 detected as new replica set primary for meteor; Old primary was :27017
2026-04-07T03:12:34.040+0800 I NETWORK  [listener] connection accepted from 127.0.0.1:49730 #5 (2 connections now open)
2026-04-07T03:12:34.040+0800 I NETWORK  [listener] connection accepted from 127.0.0.1:49738 #6 (3 connections now open)
2026-04-07T03:12:34.041+0800 I NETWORK  [conn5] received client metadata from 127.0.0.1:49730 conn5: { driver: { name: "MongoDB Internal Client", version: "4.0.28" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "22.04" } }
2026-04-07T03:12:34.041+0800 I NETWORK  [conn6] received client metadata from 127.0.0.1:49738 conn6: { driver: { name: "MongoDB Internal Client", version: "4.0.28" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "22.04" } }
2026-04-07T03:12:34.041+0800 I NETWORK  [LogicalSessionCacheRefresh] Successfully connected to localhost:39001 (1 connections now open to localhost:39001 with a 0 second timeout)
2026-04-07T03:12:34.041+0800 I NETWORK  [LogicalSessionCacheReap] Successfully connected to localhost:39001 (2 connections now open to localhost:39001 with a 0 second timeout)
2026-04-07T03:12:34.172+0800 I COMMAND  [LogicalSessionCacheReap] command config.transactions command: find { find: "transactions", filter: { lastWriteDate: { $lt: new Date(1775500953535) } }, projection: { _id: 1 }, sort: { _id: 1 }, $db: "config" } planSummary: IXSCAN { _id: 1 } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:233 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 130ms
2026-04-07T03:12:34.654+0800 I STORAGE  [WTJournalFlusher] Triggering the first stable checkpoint. Initial Data: Timestamp(1775502524, 4) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1775502753, 2)
2026-04-07T03:17:34.039+0800 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:39001 due to NetworkInterfaceExceededTimeLimit: Connection pool has been idle for longer than the host timeout
2026-04-07T03:17:34.042+0800 I NETWORK  [conn2] end connection 127.0.0.1:49724 (2 connections now open)
2026-04-07T09:49:51.674+0800 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2026-04-07T09:49:51.674+0800 I CONTROL  [signalProcessingThread] Shutdown started
2026-04-07T09:49:51.674+0800 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
2026-04-07T09:49:51.675+0800 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2026-04-07T09:49:51.675+0800 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2026-04-07T09:49:51.675+0800 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
2026-04-07T09:49:51.675+0800 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
2026-04-07T09:49:51.675+0800 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
2026-04-07T09:49:51.675+0800 I REPL     [signalProcessingThread] shutting down replication subsystems
2026-04-07T09:49:51.675+0800 I REPL     [signalProcessingThread] Stopping replication reporter thread
2026-04-07T09:49:51.675+0800 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2026-04-07T09:49:51.676+0800 I REPL     [signalProcessingThread] Stopping replication applier thread
2026-04-07T09:49:51.676+0800 I REPL     [rsBackgroundSync] Stopping replication producer
2026-04-07T09:49:51.915+0800 I REPL     [rsSync-0] Finished oplog application
2026-04-07T09:49:51.916+0800 I REPL     [signalProcessingThread] Stopping replication storage threads
2026-04-07T09:49:51.916+0800 I ASIO     [RS] Killing all outstanding egress activity.
2026-04-07T09:49:51.916+0800 I ASIO     [RS] Killing all outstanding egress activity.
2026-04-07T09:49:51.917+0800 I ASIO     [Replication] Killing all outstanding egress activity.
2026-04-07T09:49:51.917+0800 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
2026-04-07T09:49:51.917+0800 I COMMAND  [signalProcessingThread] Killing all open transactions
2026-04-07T09:49:51.917+0800 I -        [signalProcessingThread] Killing all operations for shutdown
2026-04-07T09:49:51.917+0800 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
2026-04-07T09:49:51.917+0800 I ASIO     [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
2026-04-07T09:49:51.918+0800 I REPL     [signalProcessingThread] Shutting down the LogicalTimeValidator
2026-04-07T09:49:51.918+0800 I CONTROL  [signalProcessingThread] Shutting down free monitoring
2026-04-07T09:49:51.918+0800 I FTDC     [signalProcessingThread] Shutting down full-time data capture
2026-04-07T09:49:51.918+0800 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2026-04-07T09:49:51.924+0800 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
2026-04-07T09:49:51.924+0800 I STORAGE  [signalProcessingThread] Shutting down the storage engine
2026-04-07T09:49:51.924+0800 I STORAGE  [WTOplogJournalThread] oplog journal thread loop shutting down
2026-04-07T09:49:51.924+0800 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2026-04-07T09:49:51.925+0800 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
2026-04-07T09:49:51.925+0800 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
2026-04-07T09:49:55.288+0800 I STORAGE  [signalProcessingThread] Downgrading WiredTiger datafiles.
cryosparc@jyh3c:/home/jy_lab$ cryosparcm log command_core | tail -n 40
Usage: cryosparcm log [OPTIONS] SERVICE:{app|database|cache|api|scheduler|command_vis|app_api|supervisord}
Try 'cryosparcm log --help' for help.

Error: Invalid value for 'SERVICE:{app|database|cache|api|scheduler|command_vis|app_api|supervisord}': 'command_core' is not one of 'app', 'database', 'cache', 'api', 'scheduler', 'command_vis', 'app_api', 'supervisord'.
cryosparc@jyh3c:/home/jy_lab$ cryosparcm log supervisord | tail -n 40
2026-04-06 20:47:02,255 INFO Creating socket tcp://0.0.0.0:39002
2026-04-06 20:47:02,255 INFO Closing socket tcp://0.0.0.0:39002
2026-04-06 20:47:02,257 INFO RPC interface 'supervisor' initialized
2026-04-06 20:47:02,260 INFO daemonizing the supervisord process
2026-04-06 20:47:02,261 INFO supervisord started with pid 1932359
2026-04-06 21:17:37,536 INFO Creating socket tcp://0.0.0.0:39002
2026-04-06 21:17:37,536 INFO Closing socket tcp://0.0.0.0:39002
2026-04-06 21:17:37,539 INFO RPC interface 'supervisor' initialized
2026-04-06 21:17:37,540 INFO daemonizing the supervisord process
2026-04-06 21:17:37,541 INFO supervisord started with pid 1932460
2026-04-06 21:27:54,387 INFO Creating socket tcp://0.0.0.0:39002
2026-04-06 21:27:54,387 INFO Closing socket tcp://0.0.0.0:39002
2026-04-06 21:27:54,389 INFO RPC interface 'supervisor' initialized
2026-04-06 21:27:54,391 INFO daemonizing the supervisord process
2026-04-06 21:27:54,392 INFO supervisord started with pid 1932915
2026-04-06 21:32:12,704 WARN received SIGTERM indicating exit request
2026-04-06 21:32:29,061 INFO Creating socket tcp://0.0.0.0:39002
2026-04-06 21:32:29,061 INFO Closing socket tcp://0.0.0.0:39002
2026-04-06 21:32:29,063 INFO RPC interface 'supervisor' initialized
2026-04-06 21:32:29,066 INFO daemonizing the supervisord process
2026-04-06 21:32:29,067 INFO supervisord started with pid 1932975
2026-04-06 22:11:12,851 INFO Creating socket tcp://0.0.0.0:39002
2026-04-06 22:11:12,851 INFO Closing socket tcp://0.0.0.0:39002
2026-04-06 22:11:12,853 INFO RPC interface 'supervisor' initialized
2026-04-06 22:11:12,855 INFO daemonizing the supervisord process
2026-04-06 22:11:12,857 INFO supervisord started with pid 1935599
2026-04-07 03:07:20,109 INFO Creating socket tcp://0.0.0.0:39002
2026-04-07 03:07:20,109 INFO Closing socket tcp://0.0.0.0:39002
2026-04-07 03:07:20,111 INFO RPC interface 'supervisor' initialized
2026-04-07 03:07:20,113 INFO daemonizing the supervisord process
2026-04-07 03:07:20,114 INFO supervisord started with pid 1936350
2026-04-07 03:10:40,612 INFO spawned: 'database' with pid 1936425
2026-04-07 03:10:41,638 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2026-04-07 09:49:51,674 INFO waiting for database to stop
2026-04-07 09:49:53,927 INFO waiting for database to stop
2026-04-07 09:49:56,290 INFO waiting for database to stop
2026-04-07 09:49:58,293 INFO waiting for database to stop
2026-04-07 09:50:00,296 INFO waiting for database to stop
2026-04-07 09:50:01,729 WARN killing 'database' (1936425) with SIGKILL
2026-04-07 09:50:01,861 WARN stopped: database (terminated by SIGKILL)
cryosparc@jyh3c:/home/jy_lab$ cryosparcm env | grep -v LICENSE_ID
export CONDA_DEFAULT_ENV=base
export CONDA_EXE=/home/jy_lab/anaconda3/bin/conda
export CONDA_PREFIX=/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master
export CONDA_PROMPT_MODIFIER='(base) '
export CONDA_PYTHON_EXE=/home/jy_lab/anaconda3/bin/python
export CONDA_SHLVL=1
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=30000
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_DB_PATH=/data3/cryosp_projs/cryosparc/cryosparc_database
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_FORCE_HOSTNAME=false
export CRYOSPARC_FORCE_USER=true
export CRYOSPARC_INSECURE=false
export CRYOSPARC_LICENSE_SERVER_ADDR=https://get.cryosparc.com
export CRYOSPARC_MASTER_HOSTNAME=jyh3c
export CRYOSPARC_PROJECT_DIR_PREFIX=CS-
export CRYOSPARC_ROOT_DIR=/data3/cryosp_projs/cryosparc/cryosparc_master
export CRYOSPARC_SSD_CACHE_LIFETIME_DAYS=5
export LD_PRELOAD=/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/lib/libpython3.12.so
export PATH=/data3/cryosp_projs/cryosparc/cryosparc_master/deps/external/mongodb/bin:/data3/cryosp_projs/cryosparc/cryosparc_master/bin:/data3/cryosp_projs/cryosparc/cryosparc_master/.pixi/envs/master/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/data3/cryosp_projs/cryosparc/cryosparc_master/bin:/data3/cryosp_projs/cryosparc/cryosparc_worker/bin
export PYTHONPATH=/data3/cryosp_projs/cryosparc/cryosparc_master



Update: I tried again this morning and it now works.

I think the failure was probably caused by interrupting the update improperly. Yesterday the download was very slow, so I tried to cancel the update; after that, CryoSPARC would not restart. I then re-ran the full update overnight, and this morning everything was fine.
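For anyone hitting the same thing after an interrupted update, the checks I ran before retrying can be sketched roughly like this. This is only a sketch, not an official procedure: the socket paths are the defaults that appear in my logs (base port 39000/39001), and `stale_socks` is just a helper name I made up, not part of `cryosparcm`.

```shell
#!/bin/sh
# Sketch of post-interruption cleanup before retrying the update.
# (Assumption: default CryoSPARC socket paths in /tmp; adjust for your install.)

# stale_socks: hypothetical helper that lists leftover unix sockets.
# Prints any it finds; returns 0 when none are present, 1 otherwise.
stale_socks() {
    clean=0
    for sock in /tmp/cryosparc-supervisor-*.sock /tmp/mongodb-*.sock; do
        if [ -S "$sock" ]; then
            echo "stale socket: $sock"
            clean=1
        fi
    done
    return $clean
}

# 1. Stop everything cleanly first:
#      cryosparcm stop
# 2. Verify no cryosparc/mongod processes survived:
#      ps -eo pid,command | grep -e cryosparc_ -e mongod | grep -v grep
# 3. Check for stale sockets (remove any that are listed):
if stale_socks; then
    echo "no stale sockets"
else
    echo "remove the sockets above, then retry"
fi
# 4. Re-run the update:
#      cryosparcm update --version=v5.0.4
```

In my case the sockets had already been cleaned up; the overnight re-run of the update was what ultimately fixed it.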
