Could not get database status

When I try to restart CryoSPARC with cryosparcm, it shows the error message below. Does anyone have an idea what is going on?

Welcome to the forum @zcp.
Please can you post the text of the terminal output to enable better searches of the forum.

Sorry, my mistake. I just realized I should post the text rather than a screenshot:

  File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 211, in select_servers
    server_descriptions = self._select_servers_loop(selector, server_timeout, address)
  File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 226, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused, Timeout: 20.0s, Topology Description: <TopologyDescription id: 647f35f5e2ab4e31d76f144a, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused')>]>
[2023-06-06T06:35:55-0700] Error configuring database. Most recent database log lines:
 mongod(wiredtiger_open+0x1BBA) [0x5646f0505c8a]
 mongod(_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb+0x8D6) [0x5646f04d2cf6]
 mongod(+0xA25AEC) [0x5646f04b4aec]
 mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x266) [0x5646f06c4fb6]
 mongod(+0xA025B8) [0x5646f04915b8]
 mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x26C) [0x5646f049463c]
 mongod(main+0x9) [0x5646f041abc9]
 libc.so.6(__libc_start_main+0xF5) [0x7f601dd32555]
 mongod(+0x9ED741) [0x5646f047c741]
-----  END BACKTRACE  -----
(base) [exx@c113963 cryosparc_database]$ cryosparcm status
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/home/exx/Application/cryosparc/cryosparc_master
Current cryoSPARC version: v4.2.1+230427
----------------------------------------------------------------------------

CryoSPARC process status:

app                              STOPPED   Not started
app_api                          STOPPED   Not started
app_api_dev                      STOPPED   Not started
app_legacy                       STOPPED   Not started
app_legacy_dev                   STOPPED   Not started
command_core                     STOPPED   Not started
command_rtp                      STOPPED   Not started
command_vis                      STOPPED   Not started
database                         STOPPED   Not started

----------------------------------------------------------------------------
License is valid
----------------------------------------------------------------------------

global config variables:
export CRYOSPARC_LICENSE_ID="XXXXXXXXXXXXXX"
export CRYOSPARC_MASTER_HOSTNAME="c113963"
export CRYOSPARC_DB_PATH="/data/cryo/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true

Thanks for posting the info. (I have concealed your license id :wink: ).
The database seems to have thrown an error. Please can you post preceding lines from the database log
/home/exx/Application/cryosparc/cryosparc_master/run/database.log or
cryosparcm log database
such that the actual error that led to the backtrace is shown.
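For example, something like this (the log path matches your installation; the line counts are arbitrary) should print the context immediately before the backtrace:

    grep -B 40 'BEGIN BACKTRACE' /home/exx/Application/cryosparc/cryosparc_master/run/database.log | head -n 60

grep -B 40 prints 40 lines of leading context for each match; the pipe through head keeps the output manageable if the marker occurs more than once in the log.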

Thank you for your quick reply. Please check below.
CryoSPARC is running.
Stopping cryoSPARC
Shut down
Starting cryoSPARC System master process…
CryoSPARC is not already running.
configuring database
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/exx/Application/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 48, in configure_mongo
initialize_replica_set()
File "/home/exx/Application/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 87, in initialize_replica_set
admin_db = try_get_pymongo_admin_db(mongo_client)
File "/home/exx/Application/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 249, in try_get_pymongo_admin_db
admin_db.command(({'serverStatus': 1}))
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/database.py", line 827, in command
with self.__client._socket_for_reads(read_preference, session) as (sock_info, secondary_ok):
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1478, in _socket_for_reads
server = self._select_server(read_preference, session)
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1436, in _select_server
server = topology.select_server(server_selector)
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 250, in select_server
return random.choice(self.select_servers(selector, server_selection_timeout, address))
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 211, in select_servers
server_descriptions = self._select_servers_loop(selector, server_timeout, address)
File "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 226, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused, Timeout: 20.0s, Topology Description: <TopologyDescription id: 647f35f5e2ab4e31d76f144a, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused')>]>
[2023-06-06T06:35:55-0700] Error configuring database. Most recent database log lines:
mongod(wiredtiger_open+0x1BBA) [0x5646f0505c8a]
mongod(_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb+0x8D6) [0x5646f04d2cf6]
mongod(+0xA25AEC) [0x5646f04b4aec]
mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x266) [0x5646f06c4fb6]
mongod(+0xA025B8) [0x5646f04915b8]
mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x26C) [0x5646f049463c]
mongod(main+0x9) [0x5646f041abc9]
libc.so.6(__libc_start_main+0xF5) [0x7f601dd32555]
mongod(+0x9ED741) [0x5646f047c741]
----- END BACKTRACE -----

Please can you show lines from /home/exx/Application/cryosparc/cryosparc_master/run/database.log that precede these:
[quote="zcp, post:5, topic:11436"]
Most recent database log lines:
[/quote]
You could use the command

less /home/exx/Application/cryosparc/cryosparc_master/run/database.log

to browse through the file (help).

Something like this:

(base) [exx@c113963 cryosparc_database]$ less /home/exx/Application/cryosparc/cryosparc_master/run/database.log
len:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 223ms
2023-04-12T02:25:58.306-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291548, 1), signature: { hash: BinData(0, D61E1771623AF09C9A63D87F8445AFDB9BADEB8A), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:12711181108 keysExamined:0 docsExamined:145725 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 220ms
2023-04-12T02:26:28.564-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291588, 1), signature: { hash: BinData(0, C5940F133A567B6627C588182958B02D26AC3133), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:10114576286 keysExamined:0 docsExamined:145729 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 226ms
2023-04-12T02:26:58.814-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291618, 1), signature: { hash: BinData(0, E31C5E7BAEF74C5DE7F992FEF31453B6861C2392), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:12267296768 keysExamined:0 docsExamined:145732 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 218ms
2023-04-12T02:27:29.071-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291648, 1), signature: { hash: BinData(0, EDD2FAE4AD6A7DCED2723998F5C5A822F5981B69), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:11829188813 keysExamined:0 docsExamined:145735 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 225ms
2023-04-12T02:27:59.328-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291678, 1), signature: { hash: BinData(0, 360AF9A82D6D3387295B1CF7323C87BD43B99158), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9088264000 keysExamined:0 docsExamined:145740 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 226ms
2023-04-12T02:28:28.672-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291708, 1), signature: { hash: BinData(0, D443D54D821CBB6746F0F0885BABBF139882E157), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9771984823 keysExamined:0 docsExamined:145743 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 227ms
2023-04-12T02:28:58.941-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291738, 1), signature: { hash: BinData(0, E058230B01BA19BC2416BE1F68E4F30C699AA007), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9091106657 keysExamined:0 docsExamined:145746 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 237ms
2023-04-12T02:29:29.183-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291768, 1), signature: { hash: BinData(0, B3DCAB5C6412A90ECF6B370946931B3AE00124A2), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9736282471 keysExamined:0 docsExamined:145749 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 210ms
2023-04-12T02:29:59.441-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291798, 1), signature: { hash: BinData(0, 54B8A54FDDAD17FE4CFF9DB72084A9254E72276F), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:11917625162 keysExamined:0 docsExamined:145752 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 226ms
2023-04-12T02:30:29.700-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291828, 1), signature: { hash: BinData(0, E4D30AA424925F80308D98CEBFC5BEA40982645A), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9226602793 keysExamined:0 docsExamined:145755 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 228ms
2023-04-12T02:30:59.957-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291858, 1), signature: { hash: BinData(0, 450360C1F8DEEA2DD52C733076F2F71FC1AF9C1D), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:11860534969 keysExamined:0 docsExamined:145758 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 226ms
2023-04-12T02:31:29.328-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291888, 1), signature: { hash: BinData(0, AC4F3107734E62834B3460D56F12B61797FC7843), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9016901465 keysExamined:0 docsExamined:145761 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 268ms
2023-04-12T02:31:59.584-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291918, 1), signature: { hash: BinData(0, A54FDE2EF53884A118F0035E92F38898BBC9165D), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:9044117994 keysExamined:0 docsExamined:145764 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 226ms
2023-04-12T02:32:29.843-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291948, 1), signature: { hash: BinData(0, F6FECF2534AB6C12A152E3F0CB2087FEC9F004EB), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:11410239520 keysExamined:0 docsExamined:145767 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 228ms
2023-04-12T02:33:00.092-0700 I COMMAND  [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ns: /^(?:meteor\.|admin\.\$cmd)/, $or: [ { op: { $in: [ "i", "u", "d" ] } }, { op: "c", o.drop: { $exists: true } }, { op: "c", o.dropDatabase: 1 }, { op: "c", o.applyOps: { $exists: true } } ], ts: { $gt: Timestamp(1680971687, 1) } }, tailable: true, awaitData: true, lsid: { id: UUID("ca2a4901-7104-460f-8be3-2eb62b60f538") }, $clusterTime: { clusterTime: Timestamp(1681291978, 1), signature: { hash: BinData(0, C79845FD365E3B81340F48FAFDAFF4E8755C942C), keyId: 7198196954227015681 } }, $db: "local" } planSummary: COLLSCAN cursorid:12374708020 keysExamined:0 docsExamined:145773 numYields:1138 nreturned:0 reslen:212 locks:{ Global: { acquireCount: { r: 2278 } }, Database: { acquireCount: { r: 1139 } }, oplog: { acquireCount: { r: 1139 } } } protocol:op_msg 217ms

The timestamps suggest that these are entries from the top of the log.
The interesting lines should be near the bottom. Please see here for help navigating the file.
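For example, within less the G key jumps to the end of the file, or you can print the tail of the log directly:

    less +G /home/exx/Application/cryosparc/cryosparc_master/run/database.log
    tail -n 200 /home/exx/Application/cryosparc/cryosparc_master/run/database.log

(The 200 in tail -n 200 is arbitrary; adjust until the relevant error context is included.)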

63 6b 70 6f 69 6e 74 5f 6c 73 6e 3d 28 34 32 39 34 39 36 37 32 39 35 2c 32 31 34 37 34 38 33 36 34 37 29 00 a9 66 69 6c 65 3a 63 6f 6c 6c 65 63 74 69 6f 6e 2d 35
2023-06-06T06:34:42.895-0700 E STORAGE  [initandlisten] WiredTiger error (-31802) [1686058482:895436][174382:0x7f601f38ba40], file:WiredTiger.wt, connection: __wt_block_read_off, 302: WiredTiger.wt: fatal read error: WT_ERROR: non-specific WiredTiger error Raw: [1686058482:895436][174382:0x7f601f38ba40], file:WiredTiger.wt, connection: __wt_block_read_off, 302: WiredTiger.wt: fatal read error: WT_ERROR: non-specific WiredTiger error
2023-06-06T06:34:42.895-0700 E STORAGE  [initandlisten] WiredTiger error (-31804) [1686058482:895446][174382:0x7f601f38ba40], file:WiredTiger.wt, connection: __wt_panic, 523: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1686058482:895446][174382:0x7f601f38ba40], file:WiredTiger.wt, connection: __wt_panic, 523: the process must exit and restart: WT_PANIC: WiredTiger library panic
2023-06-06T06:34:42.895-0700 F -        [initandlisten] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 420
2023-06-06T06:34:42.895-0700 F - [initandlisten] \n\n***aborting after fassert() failure\n\n
2023-06-06T06:34:42.916-0700 F - [initandlisten] Got signal: 6 (Aborted).

0x5646f1d34f21 0x5646f1d34139 0x5646f1d3461d 0x7f601e0ed630 0x7f601dd46387 0x7f601dd47a78 0x5646f0418dec 0x5646f04f3d76 0x5646f0565ad1 0x5646f03b5a94 0x5646f03b5eb4 0x5646f061c3b5 0x5646f063ad6f 0x5646f063b2cb 0x5646f06376b5 0x5646f061ba57 0x5646f058dd5e 0x5646f05073f2 0x5646f05643bd 0x5646f056499d 0x5646f0564c2c 0x5646f05d6e52 0x5646f055fec8 0x5646f052697e 0x5646f0526a5b 0x5646f0505c8a 0x5646f04d2cf6 0x5646f04b4aec 0x5646f06c4fb6 0x5646f04915b8 0x5646f049463c 0x5646f041abc9 0x7f601dd32555 0x5646f047c741
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"5646EFA8F000","o":"22A5F21","s":"_ZN5mongo15printStackTraceERSo"},{"b":"5646EFA8F000","o":"22A5139"},{"b":"5646EFA8F000","o":"22A561D"},{"b":"7F601E0DE000","o":"F630"},{"b":"7F601DD10000","o":"36387","s":"gsignal"},{"b":"7F601DD10000","o":"37A78","s":"abort"},{"b":"5646EFA8F000","o":"989DEC","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"5646EFA8F000","o":"A64D76"},{"b":"5646EFA8F000","o":"AD6AD1"},{"b":"5646EFA8F000","o":"926A94","s":"__wt_err_func"},{"b":"5646EFA8F000","o":"926EB4","s":"__wt_panic"},{"b":"5646EFA8F000","o":"B8D3B5","s":"__wt_block_read_off"},{"b":"5646EFA8F000","o":"BABD6F","s":"__wt_block_extlist_read"},{"b":"5646EFA8F000","o":"BAC2CB","s":"__wt_block_extlist_read_avail"},{"b":"5646EFA8F000","o":"BA86B5","s":"__wt_block_checkpoint_load"},{"b":"5646EFA8F000","o":"B8CA57"},{"b":"5646EFA8F000","o":"AFED5E","s":"__wt_btree_open"},{"b":"5646EFA8F000","o":"A783F2","s":"__wt_conn_dhandle_open"},{"b":"5646EFA8F000","o":"AD53BD","s":"__wt_session_get_dhandle"},{"b":"5646EFA8F000","o":"AD599D","s":"__wt_session_get_dhandle"},{"b":"5646EFA8F000","o":"AD5C2C","s":"__wt_session_get_btree_ckpt"},{"b":"5646EFA8F000","o":"B47E52","s":"__wt_curfile_open"},{"b":"5646EFA8F000","o":"AD0EC8"},{"b":"5646EFA8F000","o":"A9797E","s":"__wt_metadata_cursor_open"},{"b":"5646EFA8F000","o":"A97A5B","s":"__wt_metadata_cursor"},{"b":"5646EFA8F000","o":"A76C8A","s":"wiredtiger_open"},{"b":"5646EFA8F000","o":"A43CF6","s":"_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb"},{"b":"5646EFA8F000","o":"A25AEC"},{"b":"5646EFA8F000","o":"C35FB6","s":"_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv"},{"b":"5646EFA8F000","o":"A025B8"},{"b":"5646EFA8F000","o":"A0563C","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"5646EFA8F000","o":"98BBC9","s":"main"},{"b":"7F601DD10000","o":"22555","s":"__libc_start_main"},{"b":"5646EFA8F000","o":"9ED741"}],"processInfo":{ "mongodbVersion" : "3.6.23", "gitVersion" : "d352e6a4764659e0d0350ce77279de3c1f243e5c", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-1160.76.1.el7.x86_64", "version" : "#1 SMP Wed Aug 10 16:21:17 UTC 2022", "machine" : "x86_64" }, "somap" : [ { "b" : "5646EFA8F000", "elfType" : 3, "buildId" : "B0818C001F2B63D4533D208F68F08AE2A599CA9E" }, { "b" : "7FFD008F7000", "elfType" : 3, "buildId" : "B1BF040D7996B63F7922CE921F4B40C7E4688762" }, { "b" : "7F601EE38000", "path" : "/home/exx/Application/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/libpython3.8.so", "elfType" : 3 }, { "b" : "7F601EC1E000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "E0CD0DD5466E6B9E5FB10BFAFF13B1BB50F08EAA" }, { "b" : "7F601EA16000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "3E44DF7055942478D052E40FDD1F5B7862B152B0" }, { "b" : "7F601E812000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "7F2E9CB0769D7E57BD669B485A74B537B63A57C4" }, { "b" : "7F601E510000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "7615604EAF4A068DFAE5085444D15C0DEE93DFBD" }, { "b" : "7F601E2FA000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "EDF51350C7F71496149D064AA8B1441F786DF88A" }, { "b" : "7F601E0DE000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "E10CC8F2B932FC3DAEDA22F8DAC5EBB969524E5B" }, { "b" : "7F601DD10000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "9470E279388F7F9CB2ED3B2872D0C2095B191FF4" }, { "b" : "7F601F192000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "62C449974331341BB08DCCE3859560A22AF1E172" }, { "b" : "7F601DB0D000", "path" : "/lib64/libutil.so.1", "elfType" : 3, "buildId" : "FF2196BD22A8443054C83031E0E76EB01BA1219C" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x5646f1d34f21]
mongod(+0x22A5139) [0x5646f1d34139]
mongod(+0x22A561D) [0x5646f1d3461d]
libpthread.so.0(+0xF630) [0x7f601e0ed630]
libc.so.6(gsignal+0x37) [0x7f601dd46387]
libc.so.6(abort+0x148) [0x7f601dd47a78]
mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x5646f0418dec]
mongod(+0xA64D76) [0x5646f04f3d76]
mongod(+0xAD6AD1) [0x5646f0565ad1]
mongod(__wt_err_func+0x90) [0x5646f03b5a94]
mongod(__wt_panic+0x3F) [0x5646f03b5eb4]
mongod(__wt_block_read_off+0x585) [0x5646f061c3b5]
mongod(__wt_block_extlist_read+0x8F) [0x5646f063ad6f]
mongod(__wt_block_extlist_read_avail+0x2B) [0x5646f063b2cb]
mongod(__wt_block_checkpoint_load+0x275) [0x5646f06376b5]
mongod(+0xB8CA57) [0x5646f061ba57]
mongod(__wt_btree_open+0xD7E) [0x5646f058dd5e]
mongod(__wt_conn_dhandle_open+0x352) [0x5646f05073f2]
mongod(__wt_session_get_dhandle+0xED) [0x5646f05643bd]
mongod(__wt_session_get_dhandle+0x6CD) [0x5646f056499d]
mongod(__wt_session_get_btree_ckpt+0x14C) [0x5646f0564c2c]
mongod(__wt_curfile_open+0x52) [0x5646f05d6e52]
mongod(+0xAD0EC8) [0x5646f055fec8]
mongod(__wt_metadata_cursor_open+0x6E) [0x5646f052697e]
mongod(__wt_metadata_cursor+0x4B) [0x5646f0526a5b]
mongod(wiredtiger_open+0x1BBA) [0x5646f0505c8a]
mongod(_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb+0x8D6) [0x5646f04d2cf6]
mongod(+0xA25AEC) [0x5646f04b4aec]
mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x266) [0x5646f06c4fb6]
mongod(+0xA025B8) [0x5646f04915b8]
mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x26C) [0x5646f049463c]
mongod(main+0x9) [0x5646f041abc9]
libc.so.6(__libc_start_main+0xF5) [0x7f601dd32555]
mongod(+0x9ED741) [0x5646f047c741]
----- END BACKTRACE -----

Do I need to reinstall CryoSPARC if this cannot be fixed?

A re-installation is probably not required.
Disclaimer: I am not a MongoDB/WiredTiger expert, and may misinterpret the error messages you posted.
I suspect the database is corrupt and recommend:

  • trying to understand and eliminate the underlying cause of the suspected corruption
  • selecting a recovery strategy

Answers to these questions may help:

  1. What is the output of the command
    stat -f /data/cryo/cryosparc_database
  2. How old is the most recent database backup?
  3. How many CryoSPARC projects have been worked on since the most recent database backup?
  4. Have there been any unexpected system shutdowns?
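If it helps, questions 1 and 4 can be approached with standard Linux tools; the commands below are generic, not CryoSPARC-specific:

    df -h /data/cryo/cryosparc_database    # free space on the filesystem holding the database
    last -x reboot shutdown | head         # recent system boots and shutdowns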

Thank you for your help and reply. After I ran the command you mentioned, it gave the information below:
File: "/data/cryo/cryosparc_database"
ID: 97f00000000 Namelen: 255 Type: xfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 11251476992 Free: 320035080 Available: 320035080
Inodes: Total: 2562538112 Free: 2560476322
However, I have no idea how to check for the most recent database backup. Could you give me some suggestions?

Database backups would have been implemented by whoever managed or manages this CryoSPARC instance. It is possible that there are no backups.
Under the assumption there are no backups, I recommend (run commands on the CryoSPARC master node under the Linux account that "owns" your CryoSPARC installation):

  1. Shut down CryoSPARC
    cryosparcm stop

  2. Ensure there are no remaining CryoSPARC processes:
    ps x | grep -e cryosparc -e mongo

  3. Assuming the existing database path /data/cryo/cryosparc_database, set aside the existing database directory:

    mv /data/cryo/cryosparc_database /data/cryo/cryosparc_database.20230607
    
  4. Start CryoSPARC
    cryosparcm start
    This command should create a blank database. You will have to:

  • create CryoSPARC login(s) (how?).
  • connect worker(s). Is this a “standalone” (workstation-type) CryoSPARC instance? What is the path to the particle scratch space, if there is any?
  5. Configure database backups (a sketch follows this list).
  6. Regularly monitor available space for the database.
  7. Attach project directories. Caution: Deletion of cs.lock files from CryoSPARC project directories is strongly discouraged under most circumstances. This particular recovery scenario is an exception. During the creation of the new database, a new instance ID was created, which may be incompatible with the cs.lock files written under the old, presumably corrupted database. Before attaching each project directory to the CryoSPARC instance with the new database, delete the cs.lock file in that project directory. I repeat: cs.lock files should only be deleted under exceptional circumstances.
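For step 5, a scheduled cryosparcm backup may serve as a starting point. This is only a sketch: the backup destination directory is hypothetical, and you should confirm the cryosparcm backup options against the current CryoSPARC guide before relying on it.

    # crontab entry for the Linux account that owns the CryoSPARC installation (crontab -e)
    # nightly database backup at 02:00; the --dir destination below is hypothetical
    0 2 * * * /home/exx/Application/cryosparc/cryosparc_master/bin/cryosparcm backup --dir=/data/cryo/cryosparc_db_backups

Ideally the backup destination sits on a different device than the live database, and restores are tested periodically.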