Database status error, cannot restart and backup cryoSPARC

Hey,

We have an issue with our cryoSPARC database. CryoSPARC is currently running, and I can access it through the web interface. However, I cannot restart or manually back up cryoSPARC. I get the following errors:

Backup:

(base) em7@em7:~$ cryosparcm backup --dir=/media/thor/cryoSPARC_backup/em7_20260127

Backing up to /media/thor/cryoSPARC_backup/em7_20260127/cryosparc_backup_2026_01_27_21h21.archive

CryoSPARC is not already running.

Starting the database in case it's not already running.
Starting CryoSPARC database
database: ERROR (spawn error)

Restarting cryoSPARC (it kept running and is still accessible through the web interface):

(base) em7@em7:~$ cryosparcm restart
CryoSPARC is running.
Stopping cryoSPARC 
Shut down
Starting CryoSPARC System master process...
CryoSPARC is not already running.
configuring database...
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/em7/Software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 47, in configure_mongo
    initialize_replica_set()
  File "/home/em7/Software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 84, in initialize_replica_set
    admin_db = try_get_pymongo_db(mongo_client)
  File "/home/em7/Software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
    admin_db.command(({'serverStatus': 1}))
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/_csot.py", line 108, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 897, in command
    return self._command(
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 737, in _command
    return conn.command(
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/helpers.py", line 342, in inner
    return func(*args, **kwargs)
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/pool.py", line 989, in command
    return command(
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/network.py", line 212, in command
    helpers._check_command_response(
  File "/home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/helpers.py", line 248, in _check_command_response
    raise OperationFailure(errmsg, code, response, max_wire_version)
pymongo.errors.OperationFailure: there are no users authenticated, full error: {'operationTime': Timestamp(1769545368, 1), 'ok': 0.0, 'errmsg': 'there are no users authenticated', 'code': 13, 'codeName': 'Unauthorized', '$clusterTime': {'clusterTime': Timestamp(1769545368, 1), 'signature': {'hash': b'L\x1b~\xa6\x1e\xa8?\xb9\xbe\x02I]\xadX\xcb6\x16\xb2\x9e\x83', 'keyId': 7567315140954030082}}}
[2026-01-27T21:22:51+01:00] Error configuring database. Most recent database log lines:
2026-01-27T21:22:38.321+0100 I STORAGE  [initandlisten] exception in initAndListen: DBPathInUse: Unable to lock the lock file: /home/em7/Software/cryosparc/cryosparc_database/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /home/em7/Software/cryosparc/cryosparc_database directory, terminating
2026-01-27T21:22:38.321+0100 I CONTROL  [initandlisten] now exiting
2026-01-27T21:22:38.321+0100 I CONTROL  [initandlisten] shutting down with code:100
2026-01-27T21:22:41.314+0100 I NETWORK  [listener] connection accepted from 127.0.0.1:44430 #19861 (178 connections now open)
2026-01-27T21:22:41.314+0100 I NETWORK  [conn19861] received client metadata from 127.0.0.1:44430 conn19861: { driver: { name: "PyMongo", version: "4.8.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "6.14.0-35-generic" }, platform: "CPython 3.10.18.final.0" }
2026-01-27T21:22:41.315+0100 I NETWORK  [listener] connection accepted from 127.0.0.1:44446 #19862 (179 connections now open)
2026-01-27T21:22:41.315+0100 I NETWORK  [conn19862] received client metadata from 127.0.0.1:44446 conn19862: { driver: { name: "PyMongo", version: "4.8.0" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "6.14.0-35-generic" }, platform: "CPython 3.10.18.final.0" }
2026-01-27T21:22:51.327+0100 I ACCESS   [conn19862] Unauthorized: not authorized on admin to execute command { endSessions: [ { id: UUID("b8386791-4f16-4d36-9c05-d12242b7735b") } ], $clusterTime: { clusterTime: Timestamp(1769545368, 1), signature: { hash: BinData(0, 4C1B7EA61EA83FB9BE02495DAD58CB3616B29E83), keyId: 7567315140954030082 } }, $db: "admin", $readPreference: { mode: "primaryPreferred" } }
2026-01-27T21:22:51.327+0100 I NETWORK  [conn19861] end connection 127.0.0.1:44430 (178 connections now open)
2026-01-27T21:22:51.327+0100 I NETWORK  [conn19862] end connection 127.0.0.1:44446 (177 connections now open)

I also checked the status of cryoSPARC:

(base) em7@em7:~$ cryosparcm status
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/home/em7/Software/cryosparc/cryosparc_master
Current cryoSPARC version: v4.7.1-cuda12+250814
----------------------------------------------------------------------------

CryoSPARC process status:

app                              STOPPED   Not started
app_api                          STOPPED   Not started
app_api_dev                      STOPPED   Not started
command_core                     STOPPED   Not started
command_rtp                      STOPPED   Not started
command_vis                      STOPPED   Not started
database                         STOPPED   Not started

----------------------------------------------------------------------------
License is valid
----------------------------------------------------------------------------

global config variables:
export CRYOSPARC_LICENSE_ID="<license ID redacted>"
export CRYOSPARC_MASTER_HOSTNAME="em7"
export CRYOSPARC_DB_PATH="/home/em7/Software/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true

We are currently running v4.7.1-cuda12+250814 on Ubuntu 24.04.3 LTS, and we have not had any recent system crashes.

Any help would be greatly appreciated!

Best,

Ole

@OleUns Please can you post the outputs of these commands:

ps -eo user:12,pid,ppid,start,cmd | grep -e cryosparc_ -e mongo
ls -l /tmp/cryosparc*.sock /tmp/mongo*.sock

@wtempel This is the output:

(base) em7@em7:~$ ps -eo user:12,pid,ppid,start,cmd | grep -e cryosparc_ -e mongo
ls -l /tmp/cryosparc*.sock /tmp/mongo*.sock
em7            88949    2587 21:22:37 python /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/em7/Software/cryosparc/cryosparc_master/supervisord.conf
em7           482216  482176 17:06:08 grep --color=auto -e cryosparc_ -e mong
em7          2529787       1   Nov 19 python /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/em7/Software/cryosparc/cryosparc_master/supervisord.conf
em7          2529947 2529787   Nov 19 mongod --auth --dbpath /home/em7/Software/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --wiredTigerCacheSizeGB 4 --bind_ip_all
em7          2530090 2529787   Nov 19 python /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn -n command_core -b 0.0.0.0:39002 cryosparc_command.command_core:start() -c /home/em7/Software/cryosparc/cryosparc_master/gunicorn.conf.py
em7          2530091 2530090   Nov 19 python /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn -n command_core -b 0.0.0.0:39002 cryosparc_command.command_core:start() -c /home/em7/Software/cryosparc/cryosparc_master/gunicorn.conf.py
em7          2530217 2529787   Nov 19 /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/python3.10 /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/flask --app cryosparc_command.command_vis run -h 0.0.0.0 -p 39003 --with-threads
em7          2530262 2529787   Nov 19 python /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn cryosparc_command.command_rtp:start() -n command_rtp -b 0.0.0.0:39005 -c /home/em7/Software/cryosparc/cryosparc_master/gunicorn.conf.py
em7          2530267 2530262   Nov 19 python /home/em7/Software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn cryosparc_command.command_rtp:start() -n command_rtp -b 0.0.0.0:39005 -c /home/em7/Software/cryosparc/cryosparc_master/gunicorn.conf.py
em7          2530349 2529787   Nov 19 /home/em7/Software/cryosparc/cryosparc_master/cryosparc_app/nodejs/bin/node ./bundle/main.js
srwx------ 1 em7 em7 0 Jan 27 21:22 /tmp/cryosparc-supervisor-72b9c14670698f15a83f7a9d3c37a37e.sock
srwx------ 1 em7 em7 0 Nov 19 08:53 /tmp/mongodb-39001.sock

Thanks @OleUns. The discrepancy in dates for the *.sock files suggests a premature removal of the /tmp/cryosparc-supervisor-72b9c14670698f15a83f7a9d3c37a37e.sock file some time after Nov 19 08:53, which would disrupt consistent and reliable CryoSPARC operation. In the short term, you may restore your CryoSPARC instance to a consistent state by performing a thorough CryoSPARC shutdown, followed by a CryoSPARC startup.
You will also want to investigate how the *.sock file was lost prematurely. A possible explanation could be an incorrectly configured, automated clean-up of “temporary” files by the systemd-tmpfiles-clean service.
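If systemd-tmpfiles turns out to be the culprit, one possible mitigation is a drop-in configuration that exempts the socket files from age-based cleanup. This is a hypothetical sketch, not an official cryoSPARC recommendation; the filename and glob patterns are assumptions (the `x` line type tells systemd-tmpfiles to ignore matching paths during cleaning, per tmpfiles.d(5)):

```
# /etc/tmpfiles.d/cryosparc.conf (hypothetical drop-in)
# 'x' = exclude matching paths from systemd-tmpfiles age-based cleanup
x /tmp/cryosparc-supervisor-*.sock
x /tmp/mongodb-*.sock
```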
CryoSPARC v5, whose beta was released yesterday, no longer relies on the /tmp/cryosparc-supervisor-*.sock file.

@wtempel Thanks!

In fact, I wanted to back up cryoSPARC to test the new v5-beta version.

In this case, you may want to try:

  1. the thorough shutdown procedure
  2. then the database backup
  3. then cryosparcm stop
  4. then cryosparcm update --version=5.0.0

Problem solved, thanks a lot for helping out!