Hi wtempel,
Thanks in advance. By the way, the OS is CentOS 9 Stream.
Here are the outputs:
No output from:
$ ps -weo pid,ppid,start,cmd | grep -e cryosparc -e mongo | grep -v grep
$
The socket file does not exist:
$ ls -l /tmp/.sock
ls: cannot access '/tmp/.sock': No such file or directory
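In case it helps: the database log below shows mongod binding its Unix socket at /tmp/mongodb-39001.sock rather than /tmp/.sock, so I also checked for any socket matching that pattern (my own extra check, not one you asked for):

```shell
# Look for any leftover mongod Unix socket files in /tmp
# (the mongodb-39001.sock path comes from the "listening via socket" log line below)
ls -l /tmp/mongodb-*.sock 2>/dev/null || echo "no mongod socket files in /tmp"
```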
$ cryosparcm log database | grep "^2023-12-12T14:52:4"
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] MongoDB starting : pid=15621 port=39001 dbpath=/home/patrick/Downloads/cryosparc/cryosparc_database 64-bit host=gpunode3
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] db version v3.6.23
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] git version: d352e6a4764659e0d0350ce77279de3c1f243e5c
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] allocator: tcmalloc
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] modules: none
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] build environment:
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] distarch: x86_64
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] target_arch: x86_64
2023-12-12T14:52:46.916+0800 I CONTROL [initandlisten] options: { net: { port: 39001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/home/patrick/Downloads/cryosparc/cryosparc_database" } }
2023-12-12T14:52:46.916+0800 I - [initandlisten] Detected data files in /home/patrick/Downloads/cryosparc/cryosparc_database created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2023-12-12T14:52:46.916+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=128011M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2023-12-12T14:52:47.401+0800 I STORAGE [initandlisten] WiredTiger message [1702363967:401016][15621:0x7f3a5befba40], txn-recover: Main recovery loop: starting at 26/6784
2023-12-12T14:52:47.476+0800 I STORAGE [initandlisten] WiredTiger message [1702363967:476289][15621:0x7f3a5befba40], txn-recover: Recovering log 26 through 27
2023-12-12T14:52:47.518+0800 I STORAGE [initandlisten] WiredTiger message [1702363967:518859][15621:0x7f3a5befba40], txn-recover: Recovering log 27 through 27
2023-12-12T14:52:47.555+0800 I STORAGE [initandlisten] WiredTiger message [1702363967:555691][15621:0x7f3a5befba40], txn-recover: Set global recovery timestamp: 0
2023-12-12T14:52:47.561+0800 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2023-12-12T14:52:47.561+0800 I STORAGE [initandlisten] The size storer reports that the oplog contains 112953 records totaling to 66583448 bytes
2023-12-12T14:52:47.561+0800 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation
2023-12-12T14:52:47.563+0800 I STORAGE [initandlisten] Sampling from the oplog between Nov 30 11:38:05:2 and Dec 12 14:36:58:2 to determine where to place markers for truncation
2023-12-12T14:52:47.563+0800 I STORAGE [initandlisten] Taking 99 samples and assuming that each section of oplog contains approximately 11385 records totaling to 6711221 bytes
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Nov 30 11:40:02:19
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Nov 30 11:44:49:5
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Nov 30 11:49:36:16
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Nov 30 12:05:49:4
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Nov 30 12:11:19:1
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Dec 2 01:47:49:1
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Dec 4 11:26:10:1
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Dec 6 18:49:45:1
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] Placing a marker at optime Dec 9 07:36:41:1
2023-12-12T14:52:47.586+0800 I STORAGE [initandlisten] WiredTiger record store oplog processing took 24ms
2023-12-12T14:52:47.586+0800 I CONTROL [initandlisten]
2023-12-12T14:52:47.586+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten]
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten]
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten]
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten]
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2023-12-12T14:52:47.587+0800 I CONTROL [initandlisten]
2023-12-12T14:52:47.593+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/patrick/Downloads/cryosparc/cryosparc_database/diagnostic.data'
2023-12-12T14:52:47.595+0800 I REPL [initandlisten] Rollback ID is 1
2023-12-12T14:52:47.596+0800 I REPL [initandlisten] No oplog entries to apply for recovery. appliedThrough and checkpointTimestamp are both null.
2023-12-12T14:52:47.596+0800 I NETWORK [initandlisten] listening via socket bound to 127.0.0.1
2023-12-12T14:52:47.596+0800 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-39001.sock
2023-12-12T14:52:47.596+0800 I NETWORK [initandlisten] waiting for connections on port 39001
2023-12-12T14:52:47.596+0800 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2023-12-12T14:52:47.596+0800 I REPL [replexec-0] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:39001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('655c14be8835105e6df90515') } }
2023-12-12T14:52:47.596+0800 I REPL [replexec-0] This node is localhost:39001 in the config
2023-12-12T14:52:47.596+0800 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2023-12-12T14:52:47.596+0800 I REPL [replexec-0] Starting replication storage threads
2023-12-12T14:52:47.597+0800 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2023-12-12T14:52:47.597+0800 I REPL [replexec-0] Starting replication fetcher thread
2023-12-12T14:52:47.597+0800 I REPL [replexec-0] Starting replication applier thread
2023-12-12T14:52:47.597+0800 I REPL [replexec-0] Starting replication reporter thread
2023-12-12T14:52:47.598+0800 I REPL [rsSync] transition to SECONDARY from RECOVERING
2023-12-12T14:52:47.598+0800 I REPL [rsSync] conducting a dry run election to see if we could be elected. current term: 18
2023-12-12T14:52:47.598+0800 I REPL [replexec-0] dry election run succeeded, running for election in term 19
2023-12-12T14:52:47.598+0800 I REPL [replexec-0] election succeeded, assuming primary role in term 19
2023-12-12T14:52:47.598+0800 I REPL [replexec-0] transition to PRIMARY from SECONDARY
2023-12-12T14:52:47.598+0800 I REPL [replexec-0] Resetting sync source to empty, which was :27017
2023-12-12T14:52:47.598+0800 I REPL [replexec-0] Entering primary catch-up mode.
2023-12-12T14:52:47.598+0800 I REPL [replexec-0] Exited primary catch-up mode.
2023-12-12T14:52:49.599+0800 I REPL [rsSync] transition to primary complete; database writes are now permitted
2023-12-12T14:52:49.891+0800 I NETWORK [listener] connection accepted from 127.0.0.1:48772 #1 (1 connection now open)
2023-12-12T14:52:49.891+0800 I NETWORK [conn1] received client metadata from 127.0.0.1:48772 conn1: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.14.0-390.el9.x86_64" }, platform: "CPython 3.8.17.final.0" }
2023-12-12T14:52:49.892+0800 I NETWORK [listener] connection accepted from 127.0.0.1:48788 #2 (2 connections now open)
2023-12-12T14:52:49.892+0800 I NETWORK [conn2] received client metadata from 127.0.0.1:48788 conn2: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.14.0-390.el9.x86_64" }, platform: "CPython 3.8.17.final.0" }
2023-12-12T14:52:49.911+0800 I NETWORK [conn2] end connection 127.0.0.1:48788 (1 connection now open)
2023-12-12T14:52:49.911+0800 I NETWORK [conn1] end connection 127.0.0.1:48772 (0 connections now open)
2023-12-12T14:52:49.913+0800 I NETWORK [listener] connection accepted from 127.0.0.1:48804 #3 (1 connection now open)
2023-12-12T14:52:49.913+0800 I NETWORK [conn3] received client metadata from 127.0.0.1:48804 conn3: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.14.0-390.el9.x86_64" }, platform: "CPython 3.8.17.final.0" }
2023-12-12T14:52:49.913+0800 I NETWORK [listener] connection accepted from 127.0.0.1:48808 #4 (2 connections now open)
2023-12-12T14:52:49.913+0800 I NETWORK [conn4] received client metadata from 127.0.0.1:48808 conn4: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.14.0-390.el9.x86_64" }, platform: "CPython 3.8.17.final.0" }
2023-12-12T14:52:49.933+0800 I NETWORK [conn4] end connection 127.0.0.1:48808 (1 connection now open)
2023-12-12T14:52:49.933+0800 I NETWORK [conn3] end connection 127.0.0.1:48804 (0 connections now open)
2023-12-12T14:52:49.933+0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2023-12-12T14:52:49.933+0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2023-12-12T14:52:49.933+0800 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2023-12-12T14:52:49.934+0800 I REPL [signalProcessingThread] shutting down replication subsystems
2023-12-12T14:52:49.934+0800 I REPL [signalProcessingThread] Stopping replication reporter thread
2023-12-12T14:52:49.934+0800 I REPL [signalProcessingThread] Stopping replication fetcher thread
2023-12-12T14:52:49.934+0800 I REPL [signalProcessingThread] Stopping replication applier thread
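One thing that stands out to me in the tail of that log: mongod finished startup at 14:52:49 and received signal 15 (Terminated) within the same second, so something external appears to be stopping it. A quick way to pull just the termination-related lines out of the log (shown here against an embedded sample of the lines above; in practice you would pipe `cryosparcm log database` into the same grep):

```shell
# Filter shutdown-related events out of the database log.
# Sample input is embedded below; normally you would run:
#   cryosparcm log database | grep -E "signal 15|Terminated|shutdown"
grep -E "signal 15|Terminated|shutdown" <<'EOF'
2023-12-12T14:52:49.933+0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2023-12-12T14:52:49.933+0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
EOF
```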