Database not starting after disk full

Hi,

As mentioned in a separate thread, one of our systems completely filled its disk, and after clearing some space and restarting CryoSPARC I get this error:

user@ubuntu:~/processing$ cryosparcm restart
CryoSPARC is running.
Stopping cryoSPARC
Shut down
Starting cryoSPARC System master process..
CryoSPARC is not already running.
configuring database
    configuration complete
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/user/software/cryosparc/cryosparc2_master/cryosparc_compute/database_management.py", line 75, in configure_mongo
    mongod.wait(wait_seconds)
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/subprocess.py", line 1083, in wait
    return self._wait(timeout=timeout)
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/subprocess.py", line 1814, in _wait
    raise TimeoutExpired(self.args, timeout)
subprocess.TimeoutExpired: Command '['mongod', '--dbpath', '/ntfs_mount/ubuntu/cryosparc2_database/', '--port', '39001', '--oplogSize', '64', '--replSet', 'meteor']' timed out after 20.0 seconds
[2024-02-20T09:01:47-05:00] Error configuring database. Most recent database log lines:
2024-02-20T09:01:27.837-0500 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2024-02-20T09:01:27.837-0500 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-02-20T09:01:27.837-0500 I REPL     [signalProcessingThread] shutting down replication subsystems
2024-02-20T09:01:27.837-0500 I REPL     [signalProcessingThread] Stopping replication reporter thread
2024-02-20T09:01:27.838-0500 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2024-02-20T09:01:27.838-0500 I REPL     [signalProcessingThread] Stopping replication applier thread
2024-02-20T09:01:28.330-0500 I REPL     [signalProcessingThread] Stopping replication storage threads
2024-02-20T09:01:28.331-0500 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-02-20T09:01:28.335-0500 I STORAGE  [WTOplogJournalThread] oplog journal thread loop shutting down
2024-02-20T09:01:28.335-0500 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
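
(Aside: the exact mongod invocation that timed out is visible in the TimeoutExpired message above. With CryoSPARC fully stopped, it can be run in the foreground to watch the failure directly; a sketch, assuming eval $(cryosparcm env) puts the bundled mongod on PATH for this install:)

eval $(/home/user/software/cryosparc/cryosparc2_master/bin/cryosparcm env)
mongod --dbpath /ntfs_mount/ubuntu/cryosparc2_database/ --port 39001 --oplogSize 64 --replSet meteor    # same arguments as in the error above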

Any ideas on how to repair the database are very welcome!

Cheers
Oli

Please can you post the lines from the database log that precede the ones you already posted? You can access the log with the command

/home/user/software/cryosparc/cryosparc2_master/bin/cryosparcm log database
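
(If you need a fixed number of trailing lines rather than the live stream, the raw log file can also be read directly; a sketch, assuming this install follows the default layout with logs under the master's run/ directory:)

tail -n 300 /home/user/software/cryosparc/cryosparc2_master/run/database.log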

Here you go:

2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Entering primary catch-up mode.
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Exited primary catch-up mode.
2024-02-20T13:27:50.850-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52842 #1 (1 connection now open)
2024-02-20T13:27:50.851-0500 I NETWORK  [conn1] received client metadata from 127.0.0.1:52842 conn1: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:50.854-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52844 #2 (2 connections now open)
2024-02-20T13:27:50.854-0500 I NETWORK  [conn2] received client metadata from 127.0.0.1:52844 conn2: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:50.874-0500 I NETWORK  [conn1] end connection 127.0.0.1:52842 (1 connection now open)
2024-02-20T13:27:50.874-0500 I NETWORK  [conn2] end connection 127.0.0.1:52844 (0 connections now open)
2024-02-20T13:27:50.876-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52854 #3 (1 connection now open)
2024-02-20T13:27:50.876-0500 I NETWORK  [conn3] received client metadata from 127.0.0.1:52854 conn3: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:52.801-0500 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2024-02-20T13:27:52.882-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:51148 #4 (2 connections now open)
2024-02-20T13:27:52.882-0500 I NETWORK  [conn4] received client metadata from 127.0.0.1:51148 conn4: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:53.112-0500 I COMMAND  [conn4] command admin.system.users command: find { find: "system.users", filter: { db: "admin" }, projection: { authenticationRestrictions: 0, credentials: 0 }, sort: { user: 1, db: 1 }, $db: "admin" } planSummary: IXSCAN { user: 1, db: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:1 nreturned:2 reslen:623 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_msg 227ms
2024-02-20T13:27:53.112-0500 I COMMAND  [conn4] command admin.$cmd command: usersInfo { usersInfo: 1, lsid: { id: UUID("REMOVED") }, $clusterTime: { clusterTime: Timestamp(1708453672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin", $readPreference: { mode: "primary" } } numYields:0 reslen:566 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_msg 227ms
2024-02-20T13:27:53.224-0500 I NETWORK  [conn3] end connection 127.0.0.1:52854 (1 connection now open)
2024-02-20T13:27:53.224-0500 I NETWORK  [conn4] end connection 127.0.0.1:51148 (0 connections now open)
2024-02-20T13:27:53.224-0500 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2024-02-20T13:27:53.224-0500 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2024-02-20T13:27:53.224-0500 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] shutting down replication subsystems
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication reporter thread
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication applier thread
2024-02-20T13:27:53.800-0500 I REPL     [signalProcessingThread] Stopping replication storage threads
2024-02-20T13:27:53.801-0500 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-02-20T13:27:53.810-0500 I STORAGE  [WTOplogJournalThread] oplog journal thread loop shutting down
2024-02-20T13:27:53.810-0500 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2024-02-20T13:28:26.597-0500 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2024-02-20T13:28:26.597-0500 I CONTROL  [signalProcessingThread] now exiting
2024-02-20T13:28:26.597-0500 I CONTROL  [signalProcessingThread] shutting down with code:0
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Resetting sync source to empty, which was :27017
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Entering primary catch-up mode.
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Exited primary catch-up mode.
2024-02-20T13:27:50.850-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52842 #1 (1 connection now open)
2024-02-20T13:27:50.851-0500 I NETWORK  [conn1] received client metadata from 127.0.0.1:52842 conn1: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:50.854-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52844 #2 (2 connections now open)
2024-02-20T13:27:50.854-0500 I NETWORK  [conn2] received client metadata from 127.0.0.1:52844 conn2: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:50.874-0500 I NETWORK  [conn1] end connection 127.0.0.1:52842 (1 connection now open)
2024-02-20T13:27:50.874-0500 I NETWORK  [conn2] end connection 127.0.0.1:52844 (0 connections now open)
2024-02-20T13:27:50.876-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52854 #3 (1 connection now open)
2024-02-20T13:27:50.876-0500 I NETWORK  [conn3] received client metadata from 127.0.0.1:52854 conn3: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:52.801-0500 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2024-02-20T13:27:52.882-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:51148 #4 (2 connections now open)
2024-02-20T13:27:52.882-0500 I NETWORK  [conn4] received client metadata from 127.0.0.1:51148 conn4: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:53.112-0500 I COMMAND  [conn4] command admin.system.users command: find { find: "system.users", filter: { db: "admin" }, projection: { authenticationRestrictions: 0, credentials: 0 }, sort: { user: 1, db: 1 }, $db: "admin" } planSummary: IXSCAN { user: 1, db: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:1 nreturned:2 reslen:623 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_msg 227ms
2024-02-20T13:27:53.112-0500 I COMMAND  [conn4] command admin.$cmd command: usersInfo { usersInfo: 1, lsid: { id: UUID("REMOVED") }, $clusterTime: { clusterTime: Timestamp(1708453672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin", $readPreference: { mode: "primary" } } numYields:0 reslen:566 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_msg 227ms
2024-02-20T13:27:53.224-0500 I NETWORK  [conn3] end connection 127.0.0.1:52854 (1 connection now open)
2024-02-20T13:27:53.224-0500 I NETWORK  [conn4] end connection 127.0.0.1:51148 (0 connections now open)
2024-02-20T13:27:53.224-0500 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2024-02-20T13:27:53.224-0500 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2024-02-20T13:27:53.224-0500 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] shutting down replication subsystems
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication reporter thread
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication applier thread
2024-02-20T13:27:53.800-0500 I REPL     [signalProcessingThread] Stopping replication storage threads
2024-02-20T13:27:53.801-0500 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-02-20T13:27:53.810-0500 I STORAGE  [WTOplogJournalThread] oplog journal thread loop shutting down
2024-02-20T13:27:53.810-0500 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2024-02-20T13:28:26.597-0500 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2024-02-20T13:28:26.597-0500 I CONTROL  [signalProcessingThread] now exiting
2024-02-20T13:28:26.597-0500 I CONTROL  [signalProcessingThread] shutting down with code:0
2024-02-20T13:27:50.800-0500 I REPL     [rsSync] conducting a dry run election to see if we could be elected. current term: 309
2024-02-20T13:27:50.800-0500 I REPL     [replexec-0] dry election run succeeded, running for election in term 310
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] election succeeded, assuming primary role in term 310
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] transition to PRIMARY from SECONDARY
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Resetting sync source to empty, which was :27017
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Entering primary catch-up mode.
2024-02-20T13:27:50.840-0500 I REPL     [replexec-0] Exited primary catch-up mode.
2024-02-20T13:27:50.850-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52842 #1 (1 connection now open)
2024-02-20T13:27:50.851-0500 I NETWORK  [conn1] received client metadata from 127.0.0.1:52842 conn1: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:50.854-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52844 #2 (2 connections now open)
2024-02-20T13:27:50.854-0500 I NETWORK  [conn2] received client metadata from 127.0.0.1:52844 conn2: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:50.874-0500 I NETWORK  [conn1] end connection 127.0.0.1:52842 (1 connection now open)
2024-02-20T13:27:50.874-0500 I NETWORK  [conn2] end connection 127.0.0.1:52844 (0 connections now open)
2024-02-20T13:27:50.876-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:52854 #3 (1 connection now open)
2024-02-20T13:27:50.876-0500 I NETWORK  [conn3] received client metadata from 127.0.0.1:52854 conn3: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:52.801-0500 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2024-02-20T13:27:52.882-0500 I NETWORK  [listener] connection accepted from 127.0.0.1:51148 #4 (2 connections now open)
2024-02-20T13:27:52.882-0500 I NETWORK  [conn4] received client metadata from 127.0.0.1:51148 conn4: { driver: { name: "PyMongo", version: "3.13.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-135-generic" }, platform: "CPython 3.8.17.final.0" }
2024-02-20T13:27:53.112-0500 I COMMAND  [conn4] command admin.system.users command: find { find: "system.users", filter: { db: "admin" }, projection: { authenticationRestrictions: 0, credentials: 0 }, sort: { user: 1, db: 1 }, $db: "admin" } planSummary: IXSCAN { user: 1, db: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:1 nreturned:2 reslen:623 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_msg 227ms
2024-02-20T13:27:53.112-0500 I COMMAND  [conn4] command admin.$cmd command: usersInfo { usersInfo: 1, lsid: { id: UUID("REMOVED") }, $clusterTime: { clusterTime: Timestamp(1708453672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin", $readPreference: { mode: "primary" } } numYields:0 reslen:566 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_msg 227ms
2024-02-20T13:27:53.224-0500 I NETWORK  [conn3] end connection 127.0.0.1:52854 (1 connection now open)
2024-02-20T13:27:53.224-0500 I NETWORK  [conn4] end connection 127.0.0.1:51148 (0 connections now open)
2024-02-20T13:27:53.224-0500 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2024-02-20T13:27:53.224-0500 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2024-02-20T13:27:53.224-0500 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] shutting down replication subsystems
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication reporter thread
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2024-02-20T13:27:53.225-0500 I REPL     [signalProcessingThread] Stopping replication applier thread
2024-02-20T13:27:53.800-0500 I REPL     [signalProcessingThread] Stopping replication storage threads
2024-02-20T13:27:53.801-0500 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-02-20T13:27:53.810-0500 I STORAGE  [WTOplogJournalThread] oplog journal thread loop shutting down
2024-02-20T13:27:53.810-0500 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2024-02-20T13:28:26.597-0500 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2024-02-20T13:28:26.597-0500 I CONTROL  [signalProcessingThread] now exiting
2024-02-20T13:28:26.597-0500 I CONTROL  [signalProcessingThread] shutting down with code:0

EDIT: removed license IDs

Do these log entries really appear out-of-timestamp-sequence in the log file?
Please can you

  1. post the output of the command
    ps -weo pid,ppid,start,cmd | grep -e cryosparc -e mongo | grep -v grep
    
  2. email us the tgz file created by the command
    cryosparcm snaplogs
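
(As an additional quick check, it may help to see whether anything is listening on the database port at all; a sketch, using port 39001 from the logs above:)

ss -tlnp | grep 39001    # no output means mongod is not listening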

Do these log entries really appear out-of-timestamp-sequence in the log file?

Yes, that is the entire raw output of the command in your previous post…

  1. post the output of the command
    ps -weo pid,ppid,start,cmd | grep -e cryosparc -e mongo | grep -v grep

Here:

user@ubuntu:~$ ps -weo pid,ppid,start,cmd | grep -e cryosparc -e mongo | grep -v grep
3436485       1 13:27:21 python /home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/user/software/cryosparc/cryosparc2_master/supervisord.conf

Will send snaplogs via DM.

@wtempel, after restarting the system, I now have a different error when starting CryoSPARC:

CryoSPARC is running.
Stopping cryoSPARC
Shut down
Starting cryoSPARC System master process..
CryoSPARC is not already running.
configuring database
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/user/software/cryosparc/cryosparc2_master/cryosparc_compute/database_management.py", line 49, in configure_mongo
    initialize_replica_set()
  File "/home/user/software/cryosparc/cryosparc2_master/cryosparc_compute/database_management.py", line 88, in initialize_replica_set
    admin_db = try_get_pymongo_db(mongo_client)
  File "/home/user/software/cryosparc/cryosparc2_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
    admin_db.command(({'serverStatus': 1}))
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/database.py", line 827, in command
    with self.__client._socket_for_reads(read_preference, session) as (sock_info, secondary_ok):
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1478, in _socket_for_reads
    server = self._select_server(read_preference, session)
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1436, in _select_server
    server = topology.select_server(server_selector)
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 250, in select_server
    return random.choice(self.select_servers(selector, server_selection_timeout, address))
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 211, in select_servers
    server_descriptions = self._select_servers_loop(selector, server_timeout, address)
  File "/home/user/software/cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 226, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused, Timeout: 20.0s, Topology Description: <TopologyDescription id: 65d525d42a8f9982d2cf67b8, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused')>]>
[2024-02-20T17:22:19-05:00] Error configuring database. Most recent database log lines:
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten] git version: d352e6a4764659e0d0350ce77279de3c1f243e5c
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten] allocator: tcmalloc
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten] modules: none
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten] build environment:
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten]     distarch: x86_64
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten]     target_arch: x86_64
2024-02-20T17:21:06.081-0500 I CONTROL  [initandlisten] options: { net: { port: 39001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/ntfs_mount/ubuntu/cryosparc2_database/" } }
2024-02-20T17:21:06.190-0500 I STORAGE  [initandlisten] exception in initAndListen: DBPathInUse: Unable to create/open the lock file: /ntfs_mount/ubuntu/cryosparc2_database/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /ntfs_mount/ubuntu/cryosparc2_database/ directory, terminating
2024-02-20T17:21:06.191-0500 I CONTROL  [initandlisten] now exiting
2024-02-20T17:21:06.191-0500 I CONTROL  [initandlisten] shutting down with code:100

Here is the (truncated) output of ls -lrt in the database directory:

192K -rw-r--r-- 1 user user 168K Feb 19 23:18 collection-0-4603402377313034810.wt
352K -rw------- 1 user user 332K Feb 19 23:18 index-8--1960449374665408152.wt
160K -rw-r--r-- 1 user user 140K Feb 19 23:18 index-17--6263010608585214280.wt
288K -rw------- 1 user user 284K Feb 19 23:18 index-10--1960449374665408152.wt
2.3M -rw-r--r-- 1 user user 2.2M Feb 19 23:18 collection-7--6263010608585214280.wt
128K -rw-r--r-- 1 user user 128K Feb 19 23:18 collection-4--6263010608585214280.wt
 64K -rw------- 1 user user  36K Feb 19 23:26 index-21-1250800605155916434.wt
 64K -rw------- 1 user user  36K Feb 19 23:40 collection-20-1250800605155916434.wt
 64K -rw------- 1 user user  36K Feb 19 23:40 index-22-1250800605155916434.wt
 29M -rw-r--r-- 1 user user  29M Feb 19 23:43 index-2--5754591278060983432.wt
 49M -rw-r--r-- 1 user user  49M Feb 19 23:43 index-0--5754591278060983432.wt
322G -rw-r--r-- 1 user user 322G Feb 19 23:43 collection-1--5754591278060983432.wt
 32M -rw-r--r-- 1 user user  32M Feb 19 23:43 index-3--5754591278060983432.wt
 29M -rw-r--r-- 1 user user  29M Feb 19 23:43 index-27--6263010608585214280.wt
7.1M -rw-r--r-- 1 user user 7.1M Feb 19 23:43 index-28--6263010608585214280.wt
159M -rw-r--r-- 1 user user 159M Feb 19 23:43 collection-26--6263010608585214280.wt
7.6M -rw-r--r-- 1 user user 7.6M Feb 19 23:43 index-1-5332892330910402187.wt
 64K -rw------- 1 user user  36K Feb 19 23:43 index-5--168322677885930977.wt
583M -rw-r--r-- 1 user user 583M Feb 20 08:52 collection-12--6263010608585214280.wt
 64K -rw------- 1 user user  40K Feb 20 08:52 collection-4--168322677885930977.wt
113M -rw-r--r-- 1 user user 113M Feb 20 08:52 index-25--6263010608585214280.wt
161M -rw-r--r-- 1 user user 161M Feb 20 08:52 index-21--6263010608585214280.wt
1.1G -rw-r--r-- 1 user user 1.1G Feb 20 08:52 collection-20--6263010608585214280.wt
 32K drwxr-xr-x 1 user user  16K Feb 20 16:35 diagnostic.data
 64K -rw------- 1 user user  36K Feb 20 16:35 collection-6--168322677885930977.wt
 32K -rw------- 1 user user  16K Feb 20 16:35 index-8--168322677885930977.wt
 32K -rw------- 1 user user  16K Feb 20 16:35 collection-2--168322677885930977.wt
 32K -rw------- 1 user user  16K Feb 20 16:35 index-3--168322677885930977.wt
 32K -rw-r--r-- 1 user user  16K Feb 20 16:35 collection-5-477667761255365307.wt
 32K -rw------- 1 user user  16K Feb 20 16:35 collection-0--168322677885930977.wt
 32K -rw------- 1 user user  16K Feb 20 16:35 index-1--168322677885930977.wt
 64K -rw-r--r-- 1 user user  36K Feb 20 16:35 collection-12-477667761255365307.wt
 64K -rw-r--r-- 1 user user  36K Feb 20 16:35 collection-7-477667761255365307.wt
 64K -rw------- 1 user user  36K Feb 20 16:35 collection-0-570748436746071304.wt
 32K -rw-r--r-- 1 user user  16K Feb 20 16:35 collection-2-477667761255365307.wt
 64K -rw-r--r-- 1 user user  44K Feb 20 16:35 index-1-477667761255365307.wt
192K -rw-r--r-- 1 user user 172K Feb 20 16:35 collection-0-477667761255365307.wt
 64K -rw-r--r-- 1 user user  36K Feb 20 16:35 collection-9-477667761255365307.wt
 32K -rw-r--r-- 1 user user  32K Feb 20 16:35 index-10-477667761255365307.wt
 45M -rw-r--r-- 1 user user  45M Feb 20 16:35 collection-4-477667761255365307.wt
 64K -rw-r--r-- 1 user user  60K Feb 20 16:35 _mdb_catalog.wt
 64K -rw-r--r-- 1 user user  44K Feb 20 16:35 sizeStorer.wt
 32K -rw------- 1 user user 4.0K Feb 20 16:35 WiredTigerLAS.wt
 32K -rw------- 1 user user 1.1K Feb 20 16:35 WiredTiger.turtle
544K -rw-r--r-- 1 user user 524K Feb 20 16:35 WiredTiger.wt
 32K drwx------ 1 user user 4.0K Feb 20 16:36 journal
   0 -rw-r--r-- 1 user user    0 Feb 20 16:36 mongod.lock

Thoughts…? Should I change permissions, since it has somehow become read-only? Delete or move the .lock file?

This file already has write permissions for the owner, but perhaps the filesystem is for some reason served or mounted read-only? Please can you post the outputs of:

stat -f /ntfs_mount/ubuntu/cryosparc2_database/
df -h /ntfs_mount/ubuntu/cryosparc2_database/
mount | grep ntfs_mount
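
(A direct write test would also settle it quickly; a sketch, where .cs_write_test is just a hypothetical scratch filename:)

touch /ntfs_mount/ubuntu/cryosparc2_database/.cs_write_test && rm /ntfs_mount/ubuntu/cryosparc2_database/.cs_write_test    # fails with "Read-only file system" on an ro mount
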
user@ubuntu:~$ stat -f /ntfs_mount/ubuntu/cryosparc2_database/
  File: "/ntfs_mount/ubuntu/cryosparc2_database/"
    ID: 0        Namelen: 255     Type: fuseblk
Block size: 32768      Fundamental block size: 32768
Blocks: Total: 2197192127 Free: 1810899    Available: 1810899
Inodes: Total: 64240224   Free: 58200038
user@ubuntu:~$ df -h /ntfs_mount/ubuntu/cryosparc2_database/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        66T   66T   56G 100% /ntfs_mount
user@ubuntu:~$ mount | grep ntfs_mount
/dev/sda2 on /ntfs_mount type fuseblk (ro,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)

EDIT: The whole filesystem is read-only… I will investigate tomorrow.
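
(For reference: when Windows shuts down uncleanly or uses Fast Startup, it leaves the NTFS volume flagged dirty, and ntfs-3g then mounts it read-only. The usual recovery is to clear the dirty bit and remount; a sketch, assuming the device and mountpoint shown above. Note that ntfsfix -d only clears the flag; it is not a substitute for chkdsk:)

sudo umount /ntfs_mount
sudo ntfsfix -d /dev/sda2    # clear the NTFS "dirty" flag (metadata only)
sudo mount /dev/sda2 /ntfs_mount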

OK, so after a clean reboot, things are back to normal (without futzing around with the database). This is a dual-boot machine, so I suspect the second problem (the read-only mount) was caused by an unclean shutdown from Windows leaving the NTFS volume flagged dirty.

I have no idea what the initial (different) problem with the database was, but the old faithful “turn it off and on again” solution seems to have done the trick!
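
(To reduce the chance of a repeat of the original disk-full event, a minimal cron-able check can warn before the volume fills again; a sketch, where the 90% threshold and the mount point are assumptions:)

#!/bin/bash
# warn when the volume holding the CryoSPARC database exceeds 90% usage
usage=$(df --output=pcent /ntfs_mount | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge 90 ]; then
    echo "WARNING: /ntfs_mount at ${usage}% full; free space before the database is affected" >&2
fi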
