Error cryosparc start: Could not get database status

Starting CryoSPARC System master process…
CryoSPARC is not already running.
configuring database…
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 47, in configure_mongo
    initialize_replica_set()
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 84, in initialize_replica_set
    admin_db = try_get_pymongo_db(mongo_client)
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
    admin_db.command(({'serverStatus': 1}))
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/_csot.py", line 108, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 893, in command
    with self.__client._conn_for_reads(read_preference, session, operation=command_name) as (
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1375, in _conn_for_reads
    server = self._select_server(read_preference, session, operation)
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1322, in _select_server
    server = topology.select_server(
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 368, in select_server
    server = self._select_server(
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 346, in _select_server
    servers = self.select_servers(
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 253, in select_servers
    server_descriptions = self._select_servers_loop(
  File "/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 303, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 20.0s, Topology Description: <TopologyDescription id: 673a94517545dd4aaadd6474, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
[2024-11-17T20:12:56-05:00] Error configuring database. Most recent database log lines:
2024-11-17T20:11:42.498-0500 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2024-11-17T20:11:42.498-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=128303M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2024-11-17T20:11:42.880-0500 E STORAGE [initandlisten] WiredTiger error (11) [1731892302:880984][2442280:0x150306076480], wiredtiger_open: __posix_file_lock, 410: /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable Raw: [1731892302:880984][2442280:0x150306076480], wiredtiger_open: __posix_file_lock, 410: /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable
2024-11-17T20:11:42.881-0500 E STORAGE [initandlisten] WiredTiger error (16) [1731892302:881005][2442280:0x150306076480], wiredtiger_open: __conn_single, 1720: WiredTiger database is already being managed by another process: Device or resource busy Raw: [1731892302:881005][2442280:0x150306076480], wiredtiger_open: __conn_single, 1720: WiredTiger database is already being managed by another process: Device or resource busy
2024-11-17T20:11:42.881-0500 E - [initandlisten] Assertion: 28595:16: Device or resource busy src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 488
2024-11-17T20:11:42.881-0500 I STORAGE [initandlisten] exception in initAndListen: Location28595: 16: Device or resource busy, terminating
2024-11-17T20:11:42.881-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets…
2024-11-17T20:11:42.881-0500 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-39001.sock
2024-11-17T20:11:42.881-0500 I CONTROL [initandlisten] now exiting
2024-11-17T20:11:42.881-0500 I CONTROL [initandlisten] shutting down with code:100

Welcome to the forum @dane2cryosparc.

Please can you post the outputs of these commands:

grep -v LICENSE_ID /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/config.sh
ps -eo user:12,pid,ppid,start,command | grep -e cryosparc_ -e mongo
ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database/WiredTiger.lock
cryosparcm log supervisord | tail -n40
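
For background: the "WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable" error in your database log means another mongod process already holds a lock on the database directory. Because that directory lives on GPFS, where fcntl locks are enforced cluster-wide, the competing mongod may be running on a different node than the one where you ran cryosparcm start. A quick check, reusing the paths from your post (repeat it on any node where CryoSPARC may have been started):

ps -eo user:12,pid,ppid,start,command | grep -e mongod -e supervisord
# if a leftover instance turns up, shut it down cleanly before retrying:
cryosparcm stop
cryosparcm start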

Below are the outputs

[bh696@r205u34n01.mccleary project]$ grep -v LICENSE_ID /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/config.sh
# Instance Configuration
export CRYOSPARC_FORCE_HOSTNAME=true
export CRYOSPARC_FORCE_USER=true
export CRYOSPARC_DB_PATH="/gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000

# Security
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true

# Cluster Integration
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000

# Project Configuration
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'

# Development
export CRYOSPARC_DEVELOP=false

# Other
export CRYOSPARC_CLICK_WRAP=true
[bh696@r205u34n01.mccleary project]$ ps -eo user:12,pid,ppid,start,command | grep -e cryosparc_ -e mongo
bh696        2671243       1 16:21:58 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/supervisord.conf
bh696        2671361 2671243 16:22:04 mongod --auth --dbpath /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --wiredTigerCacheSizeGB 4 --bind_ip_all
bh696        2671474 2671243 16:22:07 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn -n command_core -b 0.0.0.0:39002 cryosparc_command.command_core:start() -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/gunicorn.conf.py
bh696        2671475 2671474 16:22:08 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn -n command_core -b 0.0.0.0:39002 cryosparc_command.command_core:start() -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/gunicorn.conf.py
bh696        2671513 2671243 16:22:14 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn cryosparc_command.command_vis:app -n command_vis -b 0.0.0.0:39003 -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/gunicorn.conf.py
bh696        2671514 2671513 16:22:14 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn cryosparc_command.command_vis:app -n command_vis -b 0.0.0.0:39003 -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/gunicorn.conf.py
bh696        2671518 2671243 16:22:15 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn cryosparc_command.command_rtp:start() -n command_rtp -b 0.0.0.0:39005 -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/gunicorn.conf.py
bh696        2671519 2671518 16:22:15 python /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/gunicorn cryosparc_command.command_rtp:start() -n command_rtp -b 0.0.0.0:39005 -c /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/gunicorn.conf.py
bh696        2671559 2671243 16:22:21 /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_master/cryosparc_app/nodejs/bin/node ./bundle/main.js
bh696        2672091 2669094 16:24:42 grep --color=auto -e cryosparc_ -e mongo
[bh696@r205u34n01.mccleary project]$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database/WiredTiger.lock
-rw------- 1 bh696 reinisch 21 Nov 17 17:14 /gpfs/gibbs/project/reinisch/bh696/cryosparc/cryosparc_database/WiredTiger.lock
srwx------ 1 bh696 reinisch  0 Nov 18 16:21 /tmp/cryosparc-supervisor-bf5cf6111a83ea059a4e94de68885cc8.sock
srwx------ 1 bh696 reinisch  0 Nov 18 16:22 /tmp/mongodb-39001.sock
[bh696@r205u34n01.mccleary project]$ cryosparcm log supervisord | tail -n40

Now I can start CryoSPARC, but I have a problem when queuing a job.
Here is the error message:

License is valid.

Launching job on lane default target r102u17n01.mccleary.ycrc.yale.edu …

Running job on remote worker node hostname r102u17n01.mccleary.ycrc.yale.edu

Failed to launch! 255
Access denied by pam_slurm_adopt: you have no active jobs on this node
Connection closed by 10.178.102.33 port 22

CRYOSPARC_FORCE_* variables are not normally needed and increase the risk of disruption. These definitions may not be required if you follow the instructions specific to your institution.

Could it be that the worker information is outdated and still refers to a node that needs to be removed from the scheduler configuration manually (as described by your institution)?
What is the output of the command

cryosparcm cli "get_scheduler_targets()"

?
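
If that listing still shows r102u17n01.mccleary.ycrc.yale.edu as a standalone node target, the stale entry can be removed. For example (hostname taken from your error message; remove_scheduler_target_node is the relevant cli function, but please confirm against the documentation for your CryoSPARC version):

cryosparcm cli "remove_scheduler_target_node('r102u17n01.mccleary.ycrc.yale.edu')"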

You may want to try
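connecting this worker through a Slurm cluster lane instead of direct SSH. The pam_slurm_adopt denial means SSH to a compute node is only allowed while you have a job running on it; a cluster lane avoids that by submitting jobs with sbatch. A minimal sketch of the lane setup (the directory, paths, and values below are illustrative, not from this thread):

mkdir -p ~/cryosparc_slurm_lane && cd ~/cryosparc_slurm_lane
# cluster_info.json tells CryoSPARC how to talk to Slurm:
cat > cluster_info.json <<'EOF'
{
    "name": "slurm",
    "title": "slurm",
    "worker_bin_path": "/path/to/cryosparc_worker/bin/cryosparcw",
    "cache_path": "/path/to/local/scratch",
    "send_cmd_tpl": "{{ command }}",
    "qsub_cmd_tpl": "sbatch {{ script_path_abs }}",
    "qstat_cmd_tpl": "squeue -j {{ cluster_job_id }}",
    "qdel_cmd_tpl": "scancel {{ cluster_job_id }}",
    "qinfo_cmd_tpl": "sinfo"
}
EOF
# an sbatch template named cluster_script.sh must also exist in this
# directory; then register the lane with the master:
cryosparcm cluster connect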

Hello,
The problem might be the settings below in cryosparc_master/config.sh:

export CRYOSPARC_FORCE_HOSTNAME=true
export CRYOSPARC_FORCE_USER=true
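
A minimal way to test that (a sketch, assuming a standard install) is to comment the two overrides out and restart the master so the change takes effect:

# edit cryosparc_master/config.sh and comment out:
#   export CRYOSPARC_FORCE_HOSTNAME=true
#   export CRYOSPARC_FORCE_USER=true
# then restart:
cryosparcm restart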

In the end, I re-installed CryoSPARC following our institution's instructions, and it works fine now.
Thank you.