Database configuration error with time mismatch

Dear CryoSPARC community,

This is related to version 4.0.1 installed on a single workstation running Ubuntu 20.04.1 LTS. Everything worked fine until one day the workstation's mainboard had to be replaced. Before the workstation was reconnected to the internet, the system time was somehow set to a future date in the year 2034. CryoSPARC started OK at that point. After internet access was restored, the system time was automatically corrected, but cryosparcm start then gave the following error messages:

Starting cryoSPARC System master process…
CryoSPARC is not already running.
configuring database
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 41, in configure_mongo
initializeReplicaSet()
File "/home/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 82, in initializeReplicaSet
admin_db.command(({'serverStatus': 1}))
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/pymongo/database.py", line 761, in command
codec_options, session=session, **kwargs)
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/pymongo/database.py", line 652, in _command
client=self.__client)
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/pymongo/pool.py", line 721, in command
exhaust_allowed=exhaust_allowed)
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/pymongo/network.py", line 163, in command
parse_write_concern_error=parse_write_concern_error)
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/pymongo/helpers.py", line 167, in _check_command_response
raise OperationFailure(errmsg, code, response, max_wire_version)
pymongo.errors.OperationFailure: New cluster time, 2034865471, is too far from this node's wall clock time, 1725445269., full error: {'operationTime': Timestamp(2034865471, 8), 'ok': 0.0, 'errmsg': "New cluster time, 2034865471, is too far from this node's wall clock time, 1725445269.", 'code': 205, 'codeName': 'ClusterTimeFailsRateLimiter', '$clusterTime': {'clusterTime': Timestamp(2034865471, 9), 'signature': {'hash': b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 'keyId': 0}}}
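
For reference, the two epoch values in the error can be decoded on the command line (a quick sanity check; the dates below are approximate), and timedatectl shows whether the clock is synced again:

date -u -d @2034865471   # ~2034-06-25 UTC, the bogus future time carried in the cluster time
date -u -d @1725445269   # ~2024-09-04 UTC, the corrected wall clock
timedatectl              # on Ubuntu 20.04, reports "System clock synchronized: yes/no"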

cryosparcm status gave the following:

CryoSPARC System master node installed at
/home/cryosparcuser/cryosparc/cryosparc_master
Current cryoSPARC version: v4.0.1

CryoSPARC process status:

app STOPPED Not started
app_api STOPPED Not started
app_api_dev STOPPED Not started
app_legacy STOPPED Not started
app_legacy_dev STOPPED Not started
command_core STOPPED Not started
command_rtp STOPPED Not started
command_vis STOPPED Not started
database STOPPED Not started


Connection Refused: Server at http://amax:39002/api is not accepting new connections. Is the server running?
An error ocurred while checking license status
Could not get license verification status. Are all CryoSPARC processes RUNNING?

And this is the output from cryosparcm log database:

2024-09-03T15:00:42.919+0800 I NETWORK [listener] connection accepted from 127.0.0.1:43282 #340 (10 connections now open)
2024-09-03T15:00:42.920+0800 I NETWORK [conn340] received client metadata from 127.0.0.1:43282 conn340: { driver: { name: “nodejs”, version: “4.3.1” }, os: { type: “Linux”, name: “linux”, architecture: “x64”, version: “5.15.0-119-generic” }, platform: “Node.js v14.19.3, LE (unified)|Node.js v14.19.3, LE (unified)” }
2024-09-03T15:00:42.923+0800 I NETWORK [conn340] end connection 127.0.0.1:43282 (9 connections now open)
2024-09-03T15:00:43.323+0800 I NETWORK [conn332] end connection 127.0.0.1:53752 (8 connections now open)
2024-09-03T15:00:43.324+0800 I NETWORK [listener] connection accepted from 127.0.0.1:35002 #341 (9 connections now open)
2024-09-03T15:00:43.324+0800 I NETWORK [conn341] received client metadata from 127.0.0.1:35002 conn341: { driver: { name: “PyMongo”, version: “3.12.2” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “5.15.0-119-generic” }, platform: “CPython 3.7.12.final.0” }
2024-09-03T15:00:44.290+0800 I NETWORK [conn333] end connection 127.0.0.1:49444 (8 connections now open)
2024-09-03T15:00:45.747+0800 I NETWORK [conn334] end connection 127.0.0.1:53754 (7 connections now open)
2024-09-03T15:00:45.747+0800 I NETWORK [conn335] end connection 127.0.0.1:53768 (6 connections now open)
2024-09-03T15:00:45.747+0800 I NETWORK [listener] connection accepted from 127.0.0.1:35004 #342 (7 connections now open)
2024-09-03T15:00:45.748+0800 I NETWORK [conn342] received client metadata from 127.0.0.1:35004 conn342: { driver: { name: “PyMongo”, version: “3.12.2” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “5.15.0-119-generic” }, platform: “CPython 3.7.12.final.0” }
2024-09-03T15:00:46.416+0800 I NETWORK [listener] connection accepted from 127.0.0.1:35012 #343 (8 connections now open)
2024-09-03T15:00:46.416+0800 I NETWORK [conn343] received client metadata from 127.0.0.1:35012 conn343: { driver: { name: “PyMongo”, version: “3.12.2” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “5.15.0-119-generic” }, platform: “CPython 3.7.12.final.0” }
2024-09-03T15:00:46.418+0800 I ACCESS [conn343] Successfully authenticated as principal cryosparc_user on admin from client 127.0.0.1:35012
2024-09-03T15:00:47.934+0800 I NETWORK [conn336] end connection 127.0.0.1:43242 (7 connections now open)
2024-09-03T15:00:48.672+0800 I NETWORK [listener] connection accepted from 127.0.0.1:35824 #344 (8 connections now open)
2024-09-03T15:00:48.672+0800 I NETWORK [conn344] received client metadata from 127.0.0.1:35824 conn344: { driver: { name: “nodejs”, version: “4.3.1” }, os: { type: “Linux”, name: “linux”, architecture: “x64”, version: “5.15.0-119-generic” }, platform: “Node.js v14.19.3, LE (unified)|Node.js v14.19.3, LE (unified)” }
2024-09-03T15:00:52.333+0800 I NETWORK [conn337] end connection 127.0.0.1:34988 (7 connections now open)
2024-09-03T15:00:52.334+0800 I NETWORK [listener] connection accepted from 127.0.0.1:51510 #345 (8 connections now open)
2024-09-03T15:00:52.334+0800 I NETWORK [conn345] received client metadata from 127.0.0.1:51510 conn345: { driver: { name: “PyMongo”, version: “3.12.2” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “5.15.0-119-generic” }, platform: “CPython 3.7.12.final.0” }
2024-09-03T15:00:52.413+0800 I NETWORK [conn338] end connection 127.0.0.1:43252 (7 connections now open)
2024-09-03T15:00:52.471+0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2024-09-03T15:00:52.471+0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets…
2024-09-03T15:00:52.471+0800 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-09-03T15:00:52.472+0800 I REPL [signalProcessingThread] shutting down replication subsystems
2024-09-03T15:00:52.472+0800 I REPL [signalProcessingThread] Stopping replication reporter thread
2024-09-03T15:00:52.472+0800 I REPL [signalProcessingThread] Stopping replication fetcher thread
2024-09-03T15:00:52.472+0800 I REPL [signalProcessingThread] Stopping replication applier thread
2024-09-03T15:00:52.491+0800 I NETWORK [conn20] end connection 127.0.0.1:42388 (6 connections now open)
2024-09-03T15:00:52.491+0800 I NETWORK [conn344] end connection 127.0.0.1:35824 (5 connections now open)
2024-09-03T15:00:52.617+0800 I REPL [signalProcessingThread] Stopping replication storage threads
2024-09-03T15:00:52.617+0800 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-09-03T15:00:52.626+0800 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down
2024-09-03T15:00:52.627+0800 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2024-09-03T15:00:52.650+0800 I STORAGE [signalProcessingThread] shutdown: removing fs lock…
2024-09-03T15:00:52.650+0800 I CONTROL [signalProcessingThread] now exiting
2024-09-03T15:00:52.650+0800 I CONTROL [signalProcessingThread] shutting down with code:0

Could anyone suggest how to solve this problem? All the files are still there; it seems to be just a timestamp mismatch. I have not backed up my database (I will definitely do that on a regular basis in the future). Would a CryoSPARC update be able to fix the problem?

Thank you very much!

Welcome to the forum @yrguo.
How many active projects/project directories does your CryoSPARC instance have?

Thanks a lot for your reply!

There are over 30 unfinished projects, about 7 of which are currently active.

@yrguo You may want to try initially:

  1. a complete shutdown of CryoSPARC and associated processes (see the sketch after this list)
  2. then run the command (guide)
    cryosparcm backup --dir=/abs/path/to/dir_with_enough_space
    
    Please let us know whether the command succeeds; if it fails, please include the error message.
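
For step 1, this is roughly what I mean by a complete shutdown (the socket paths below are the usual defaults and may differ on your installation):

cryosparcm stop
ps -eo pid,ppid,start,cmd | grep -e cryosparc -e mongo   # confirm nothing is left over
# if stray processes remain, terminate them and remove stale socket files, e.g.
# rm /tmp/cryosparc-supervisor-*.sock /tmp/mongodb-*.sock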

Thanks for your suggestion. This is the error message I got with the backup command:

Backing up to /home/cryosparcuser/cryosparc/cryosparc_database/backup/cryosparc_backup_2024_09_12_08h45.archive

CryoSPARC is running.
Starting the database in case it’s not already running.
Starting CryoSPARC database
database: started
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Could not get replica set status; please reconfigure the database with cryosparcm configuredb
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 270, in check_mongo
admin_db = try_get_pymongo_db(mongo_client)
File "/home/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
admin_db.command(({'serverStatus': 1}))
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/_csot.py", line 107, in csot_wrapper
return func(self, *args, **kwargs)
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 894, in command
return self._command(
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 743, in _command
return conn.command(
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/helpers.py", line 322, in inner
return func(*args, **kwargs)
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/pool.py", line 968, in command
return command(
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/network.py", line 192, in command
helpers._check_command_response(
File "/home/cryosparcuser/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/helpers.py", line 230, in _check_command_response
raise OperationFailure(errmsg, code, response, max_wire_version)
pymongo.errors.OperationFailure: New cluster time, 2034865471, is too far from this node's wall clock time, 1726101967., full error: {'operationTime': Timestamp(2034865471, 22), 'ok': 0.0, 'errmsg': "New cluster time, 2034865471, is too far from this node's wall clock time, 1726101967.", 'code': 205, 'codeName': 'ClusterTimeFailsRateLimiter', '$clusterTime': {'clusterTime': Timestamp(2034865471, 22), 'signature': {'hash': b'Z\xb6!\xb4\xf8\x8d#\xd4\xf6\xbdJq\x82LST\x18\xee\x85\x7f', 'keyId': 8737845809611014145}}}
[2024-09-12T08:46:08+08:00] Error checking database. Most recent database log lines:
2024-09-12T08:45:57.447+0800 I ACCESS [conn4] Successfully authenticated as principal cryosparc_admin on admin from client 127.0.0.1:52676
2024-09-12T08:45:57.928+0800 I NETWORK [conn3] end connection 127.0.0.1:52660 (1 connection now open)
2024-09-12T08:45:57.929+0800 I NETWORK [conn4] end connection 127.0.0.1:52676 (0 connections now open)
2024-09-12T08:45:57.929+0800 I NETWORK [listener] connection accepted from 127.0.0.1:52684 #5 (1 connection now open)
2024-09-12T08:45:57.929+0800 I NETWORK [conn5] received client metadata from 127.0.0.1:52684 conn5: { driver: { name: “PyMongo”, version: “4.6.2” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “5.15.0-119-generic” }, platform: “CPython 3.10.13.final.0” }
2024-09-12T08:46:02.454+0800 I NETWORK [listener] connection accepted from 127.0.0.1:52690 #6 (2 connections now open)
2024-09-12T08:46:02.454+0800 I NETWORK [conn6] received client metadata from 127.0.0.1:52690 conn6: { driver: { name: “PyMongo”, version: “4.6.2” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “5.15.0-119-generic” }, platform: “CPython 3.10.13.final.0” }
2024-09-12T08:46:02.458+0800 I ACCESS [conn6] Successfully authenticated as principal cryosparc_admin on admin from client 127.0.0.1:52690
2024-09-12T08:46:07.466+0800 I NETWORK [conn5] end connection 127.0.0.1:52684 (1 connection now open)
2024-09-12T08:46:07.466+0800 I NETWORK [conn6] end connection 127.0.0.1:52690 (0 connections now open)

Before this attempt, I had updated to v4.5.3 with the most recent patch, so things might look slightly different now. Is there anything I can do from this point? Thank you very much!

There may be ways to recover the database with some lower-level mongo commands, but I am not familiar with them. My next recommendation would, unfortunately, be a database rebuild.
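
In rough outline, and assuming the database lives at the path shown in your backup output, a rebuild would look something like this (the guide has the authoritative steps; paths and placeholder values are illustrative):

cryosparcm stop
# keep the old database around rather than deleting it
mv /home/cryosparcuser/cryosparc/cryosparc_database /home/cryosparcuser/cryosparc/cryosparc_database_old
mkdir /home/cryosparcuser/cryosparc/cryosparc_database
cryosparcm start     # initializes a fresh, empty database
cryosparcm createuser --email admin@example.com --password changeme --username admin --firstname Admin --lastname User
# then re-attach each existing project directory through the web interface (Attach Project)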

Many thanks for the detailed instructions. I have rebuilt the database and it’s working now.
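
For the regular backups I mentioned, I am planning something along these lines in cron (the schedule and backup directory are only examples):

0 3 * * 0 /home/cryosparcuser/cryosparc/cryosparc_master/bin/cryosparcm backup --dir=/data/cryosparc_backups   # weekly, Sundays at 03:00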