Hi,
I ran into a new problem after moving the Rocky Linux machine back to its original network setting:
“Could not get database status”
I had this issue before and could resolve it by restarting the computer. This time I tried many times but could not get past this step.
Should I update CryoSPARC (it is already the latest version) to solve this problem?
Thank you for your input and advice.
Lan
@Lan Please can you post the outputs of these commands:
ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
cryosparcm log database | tail -n 60
cryosparcm log command_core | tail -n 40
cryosparcm log supervisord | tail -n 40
cryosparcm env | grep -v LICENSE_ID
(base) [cryosparc_user@r16763 ~]$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryospa+ 192718 192631 11:44:54 grep --color=auto -e cryosparc_ -e mongo
(base) [cryosparc_user@r16763 ~]$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
ls: cannot access '/tmp/mongo*.sock': No such file or directory
ls: cannot access '/tmp/cryosparc*.sock': No such file or directory
(base) [cryosparc_user@r16763 ~]$ cryosparcm log database | tail -n 60
2024-10-28T16:11:07.531-0500 I STORAGE [initandlisten] The size storer reports that the oplog contains 33616 records totaling to 61692647 bytes
2024-10-28T16:11:07.531-0500 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation
2024-10-28T16:11:07.532-0500 I STORAGE [initandlisten] Sampling from the oplog between Oct 25 20:53:52:1 and Oct 28 16:11:05:3 to determine where to place markers for truncation
2024-10-28T16:11:07.532-0500 I STORAGE [initandlisten] Taking 91 samples and assuming that each section of oplog contains approximately 3657 records totaling to 6711387 bytes
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:08:06:2
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:24:49:4
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:43:02:1
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:00:24:1
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:17:56:3
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:35:23:2
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:43:03:2
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 26 17:29:01:1296
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 28 01:55:43:1
2024-10-28T16:11:07.537-0500 I STORAGE [initandlisten] WiredTiger record store oplog processing took 5ms
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten]
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten]
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2024-10-28T16:11:07.539-0500 I CONTROL [initandlisten]
2024-10-28T16:11:07.547-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/cryosparc_user/software/cryosparc/cryosparc_database/diagnostic.data'
2024-10-28T16:11:07.549-0500 I REPL [initandlisten] Rollback ID is 1
2024-10-28T16:11:07.551-0500 I REPL [initandlisten] No oplog entries to apply for recovery. appliedThrough and checkpointTimestamp are both null.
2024-10-28T16:11:07.551-0500 I NETWORK [initandlisten] listening via socket bound to 0.0.0.0
2024-10-28T16:11:07.551-0500 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-39001.sock
2024-10-28T16:11:07.551-0500 I NETWORK [initandlisten] waiting for connections on port 39001
2024-10-28T16:11:07.551-0500 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2024-10-28T16:11:07.551-0500 I REPL [replexec-0] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:39001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('62c47eb65c9833fc764f6013') } }
2024-10-28T16:11:07.551-0500 I REPL [replexec-0] This node is localhost:39001 in the config
2024-10-28T16:11:07.551-0500 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2024-10-28T16:11:07.551-0500 I REPL [replexec-0] Starting replication storage threads
2024-10-28T16:11:07.552-0500 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2024-10-28T16:11:07.552-0500 I REPL [replexec-0] Starting replication fetcher thread
2024-10-28T16:11:07.552-0500 I REPL [replexec-0] Starting replication applier thread
2024-10-28T16:11:07.552-0500 I REPL [replexec-0] Starting replication reporter thread
2024-10-28T16:11:07.552-0500 I REPL [rsSync] transition to SECONDARY from RECOVERING
2024-10-28T16:11:07.552-0500 I REPL [rsSync] conducting a dry run election to see if we could be elected. current term: 163
2024-10-28T16:11:07.552-0500 I REPL [replexec-0] dry election run succeeded, running for election in term 164
2024-10-28T16:11:07.553-0500 I REPL [replexec-0] election succeeded, assuming primary role in term 164
2024-10-28T16:11:07.553-0500 I REPL [replexec-0] transition to PRIMARY from SECONDARY
2024-10-28T16:11:07.553-0500 I REPL [replexec-0] Resetting sync source to empty, which was :27017
2024-10-28T16:11:07.553-0500 I REPL [replexec-0] Entering primary catch-up mode.
2024-10-28T16:11:07.553-0500 I REPL [replexec-0] Exited primary catch-up mode.
2024-10-28T16:11:09.553-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted
2024-10-28T16:14:18.485-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2024-10-28T16:14:18.485-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2024-10-28T16:14:18.485-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-10-28T16:14:18.485-0500 I REPL [signalProcessingThread] shutting down replication subsystems
2024-10-28T16:14:18.485-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
2024-10-28T16:14:18.485-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
2024-10-28T16:14:18.485-0500 I REPL [signalProcessingThread] Stopping replication applier thread
2024-10-28T16:14:18.572-0500 I REPL [signalProcessingThread] Stopping replication storage threads
2024-10-28T16:14:18.573-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-10-28T16:14:18.577-0500 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down
2024-10-28T16:14:18.577-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2024-10-28T16:14:18.673-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2024-10-28T16:14:18.673-0500 I CONTROL [signalProcessingThread] now exiting
2024-10-28T16:14:18.673-0500 I CONTROL [signalProcessingThread] shutting down with code:0
(base) [cryosparc_user@r16763 ~]$ cryosparcm log command_core | tail -n 40
2024-10-26 18:33:24,270 scheduler_run_core INFO | Licenses currently active : 0
2024-10-26 18:33:24,270 scheduler_run_core INFO | Now trying to schedule J1178
2024-10-26 18:33:24,270 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:24,652 scheduler_run_job WARNING | Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>
2024-10-26 18:33:24,655 scheduler_run_core INFO | Queue status waiting_licenses
2024-10-26 18:33:24,655 scheduler_run_core INFO | Queue message Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>Local license file is expired. Please re-establish your connection to the license servers.
2024-10-26 18:33:24,656 scheduler_run_core INFO | Licenses currently active : 0
2024-10-26 18:33:24,656 scheduler_run_core INFO | Now trying to schedule J66
2024-10-26 18:33:24,656 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:25,037 scheduler_run_job WARNING | Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>
2024-10-26 18:33:25,039 scheduler_run_core INFO | Queue status waiting_licenses
2024-10-26 18:33:25,040 scheduler_run_core INFO | Queue message Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>Local license file is expired. Please re-establish your connection to the license servers.
2024-10-26 18:33:25,040 scheduler_run_core INFO | Finished
2024-10-26 18:33:30,065 scheduler_run_core INFO | Running...
2024-10-26 18:33:30,066 scheduler_run_core INFO | Jobs Queued: [('P14', 'J1178'), ('P15', 'J66')]
2024-10-26 18:33:30,067 scheduler_run_core INFO | Licenses currently active : 0
2024-10-26 18:33:30,067 scheduler_run_core INFO | Now trying to schedule J1178
2024-10-26 18:33:30,067 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:34,232 scheduler_run_job INFO | Not a commercial instance - heartbeat set to 12 hours.
2024-10-26 18:33:34,614 scheduler_run_job INFO | Launchable! -- Launching.
2024-10-26 18:33:34,617 set_job_status INFO | Status changed for P14.J1178 from queued to launched
2024-10-26 18:33:34,618 app_stats_refresh INFO | Calling app stats refresh url http://r16763:39000/api/actions/stats/refresh_job for project_uid P14, workspace_uid None, job_uid J1178 with body {'projectUid': 'P14', 'jobUid': 'J1178'}
2024-10-26 18:33:36,624 app_stats_refresh WARNING | Failed to call stats refresh endpoint for P14 J1178: HTTPConnectionPool(host='r16763', port=39000): Max retries exceeded with url: /api/actions/stats/refresh_job (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f9f8019bd30>, 'Connection to r16763 timed out. (connect timeout=2)'))
2024-10-26 18:33:36,628 run_job INFO | Running P14 J1178
2024-10-26 18:33:36,628 run_job INFO | Running job using: /home/cryosparc_user/software/cryosparc/cryosparc_worker/bin/cryosparcw
2024-10-26 18:33:36,630 scheduler_run_core INFO | Licenses currently active : 1
2024-10-26 18:33:36,630 scheduler_run_core INFO | Now trying to schedule J66
2024-10-26 18:33:36,631 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:38,001 scheduler_run_job INFO | Not a commercial instance - heartbeat set to 12 hours.
2024-10-26 18:33:38,379 scheduler_run_job INFO | Launchable! -- Launching.
2024-10-26 18:33:38,382 set_job_status INFO | Status changed for P15.J66 from queued to launched
2024-10-26 18:33:38,382 app_stats_refresh INFO | Calling app stats refresh url http://r16763:39000/api/actions/stats/refresh_job for project_uid P15, workspace_uid None, job_uid J66 with body {'projectUid': 'P15', 'jobUid': 'J66'}
2024-10-26 18:33:40,387 app_stats_refresh WARNING | Failed to call stats refresh endpoint for P15 J66: HTTPConnectionPool(host='r16763', port=39000): Max retries exceeded with url: /api/actions/stats/refresh_job (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f9f8019bb20>, 'Connection to r16763 timed out. (connect timeout=2)'))
2024-10-26 18:33:40,390 run_job INFO | Running P15 J66
2024-10-26 18:33:40,390 run_job INFO | Running job using: /home/cryosparc_user/software/cryosparc/cryosparc_worker/bin/cryosparcw
2024-10-26 18:33:40,391 scheduler_run_core INFO | Finished
2024-10-26 18:47:37 info INFO | Handling signal: quit
2024-10-26 18:47:37 info INFO | Worker exiting (pid: 4621)
2024-10-26 18:47:37,916 interrupt INFO | === EXITED ===
2024-10-26 18:47:38 info INFO | Shutting down: Master
(base) [cryosparc_user@r16763 ~]$ cryosparcm log supervisord | tail -n 40
2024-10-27 18:50:50,625 INFO RPC interface 'supervisor' initialized
2024-10-27 18:50:50,625 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-27 18:50:50,627 INFO daemonizing the supervisord process
2024-10-27 18:50:50,629 INFO supervisord started with pid 4818
2024-10-27 18:50:55,547 INFO spawned: 'database' with pid 4929
2024-10-27 18:50:57,499 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-27 18:52:31,797 INFO waiting for database to stop
2024-10-27 18:52:32,565 INFO stopped: database (exit status 0)
2024-10-27 18:52:35,005 INFO RPC interface 'supervisor' initialized
2024-10-27 18:52:35,005 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-27 18:52:35,006 INFO daemonizing the supervisord process
2024-10-27 18:52:35,007 INFO supervisord started with pid 5333
2024-10-27 18:52:39,863 INFO spawned: 'database' with pid 5444
2024-10-27 18:52:41,806 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-28 11:24:00,910 INFO waiting for database to stop
2024-10-28 11:24:01,552 INFO stopped: database (exit status 0)
2024-10-28 11:24:08,393 INFO RPC interface 'supervisor' initialized
2024-10-28 11:24:08,393 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-28 11:24:08,395 INFO daemonizing the supervisord process
2024-10-28 11:24:08,395 INFO supervisord started with pid 49888
2024-10-28 11:24:13,267 INFO spawned: 'database' with pid 50005
2024-10-28 11:24:15,202 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-28 12:11:38,120 INFO waiting for database to stop
2024-10-28 12:11:38,608 INFO stopped: database (exit status 0)
2024-10-28 12:11:48,457 INFO RPC interface 'supervisor' initialized
2024-10-28 12:11:48,457 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-28 12:11:48,458 INFO daemonizing the supervisord process
2024-10-28 12:11:48,459 INFO supervisord started with pid 53515
2024-10-28 12:11:53,317 INFO spawned: 'database' with pid 53634
2024-10-28 12:11:55,229 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-28 16:10:49,469 INFO waiting for database to stop
2024-10-28 16:10:50,101 INFO stopped: database (exit status 0)
2024-10-28 16:11:01,775 INFO RPC interface 'supervisor' initialized
2024-10-28 16:11:01,775 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-28 16:11:01,776 INFO daemonizing the supervisord process
2024-10-28 16:11:01,777 INFO supervisord started with pid 66822
2024-10-28 16:11:06,637 INFO spawned: 'database' with pid 66933
2024-10-28 16:11:08,554 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-28 16:14:18,485 INFO waiting for database to stop
2024-10-28 16:14:18,675 INFO stopped: database (exit status 0)
(base) [cryosparc_user@r16763 ~]$ cryosparcm env | grep -v LICENSE_ID
(base) [cryosparc_user@r16763 ~]$ cryosparcm env | grep -v LICENSE_ID
export "CRYOSPARC_INSECURE=false"
export "CRYOSPARC_MONGO_FCV=3.6"
export "CRYOSPARC_LIVE_ENABLED=true"
export "CRYOSPARC_HTTP_LIVEAPP_LEGACY_PORT=39006"
export "CRYOSPARC_MONGO_EXTRA_FLAGS="
export "CRYOSPARC_CLICK_WRAP=true"
export "CRYOSPARC_MONGO_PORT=39001"
export "CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000"
export "CRYOSPARC_PROJECT_DIR_PREFIX=CS-"
export "CRYOSPARC_DB_ENABLE_AUTH_FLAG=--auth"
export "CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000"
export "CRYOSPARC_DB_ENABLE_AUTH=true"
export "CRYOSPARC_MONGO_CACHE_GB=4"
export "CRYOSPARC_ROOT_DIR=/home/cryosparc_user/software/cryosparc/cryosparc_master"
export "CRYOSPARC_PATH=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/external/mongodb/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin"
export "CRYOSPARC_DEVELOP=false"
export "CRYOSPARC_FORCE_HOSTNAME=r16763"
export "CRYOSPARC_CONDA_ENV=cryosparc_master_env"
export "CRYOSPARC_BASE_PORT=39000"
export "CRYOSPARC_FORCE_USER=false"
export "CRYOSPARC_HOSTNAME_CHECK=r16763.ttuhsc.edu"
export "CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10"
export "CRYOSPARC_COMMAND_RTP_PORT=39005"
export "CRYOSPARC_COMMAND_VIS_PORT=39003"
export "CRYOSPARC_DB_PATH=/home/cryosparc_user/software/cryosparc/cryosparc_database"
export "CRYOSPARC_HEARTBEAT_SECONDS=180"
export "CRYOSPARC_COMMAND_CORE_PORT=39002"
export "CRYOSPARC_MASTER_HOSTNAME=r16763"
export "CRYOSPARC_HTTP_APP_PORT=39000"
export "CRYOSPARC_SUPERVISOR_SOCK_FILE=/tmp/cryosparc-supervisor-206773da3c7c06e952eddaffaea9188d.sock"
export "PATH=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/external/mongodb/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/condabin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/usr/local/relion-4.0/bin:/usr/local/mpich-3.2.1/bin:/usr/local/cuda/bin:/home/cryosparc_user/eman2-sphire-sparx/bin:/home/cryosparc_user/eman2-sphire-sparx/condabin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/.local/bin:/home/cryosparc_user/bin:/usr/local/relion-4.0/bin:/usr/local/mpich-3.2.1/bin:/usr/local/cuda/bin:/usr/local/relion-4.0/bin:/usr/local/mpich-3.2.1/bin:/usr/local/cuda/bin:/usr/local/IMOD/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/usr/local/motioncorr_v2.1/bin:/usr/local/Gctf_v1.18/bin:/usr/local/ResMap:/usr/local/cistem-1.0.0-beta:/usr/local/EMAN_2.91/bin:/usr/local/motioncorr_v2.1/bin:/usr/local/Gctf_v1.18/bin:/usr/local/ResMap:/usr/local/cistem-1.0.0-beta:/usr/local/EMAN_2.91/bin:/usr/local/motioncorr_v2.1/bin:/usr/local/Gctf_v1.18/bin:/usr/local/ResMap:/usr/local/cistem-1.0.0-beta:/usr/local/EMAN_2.91/bin"
export "LD_LIBRARY_PATH=/usr/local/relion-4.0/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda-11.2/lib:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.0/lib:/usr/local/cuda-11.0/lib64:/usr/local/cuda-10.1/lib:/usr/local/cuda-10.1/lib64:/usr/local/relion-4.0/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda-11.2/lib:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.0/lib:/usr/local/cuda-11.0/lib64:/usr/local/cuda-10.1/lib:/usr/local/cuda-10.1/lib64:/usr/local/relion-4.0/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda-11.2/lib:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.0/lib:/usr/local/cuda-11.0/lib64:/usr/local/cuda-10.1/lib:/usr/local/cuda-10.1/lib64:"
export "LD_PRELOAD=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/libpython3.10.so"
export "PYTHONPATH=/home/cryosparc_user/software/cryosparc/cryosparc_master"
export "PYTHONNOUSERSITE=true"
export "CONDA_SHLVL=1"
export "CONDA_EXE=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/bin/conda"
export "CONDA_PREFIX=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env"
export "CONDA_PYTHON_EXE=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/bin/python"
export "CONDA_PROMPT_MODIFIER=(cryosparc_master_env)"
export "CONDA_DEFAULT_ENV=cryosparc_master_env"
(base) [cryosparc_user@r16763 ~]$
Thanks @Lan for posting these outputs.
Please can you post:
- outputs of the commands
grep -v LICENSE_ID /home/cryosparc_user/software/cryosparc/cryosparc_master/config.sh
hostname -f
host $(hostname -f)
host r16763
curl https://get.cryosparc.com
cryosparcm start
- after startup (in a separate shell if startup is “hanging”), the outputs of:
ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
curl 127.0.0.1:39001
curl r16763:39001
Does this forum topic concern the same CryoSPARC instance as “Can cryosparc work at a new network?” If so, can the other topic be closed?
(base) [cryosparc_user@r16763 ~]$ grep -v LICENSE_ID /home/cryosparc_user/software/cryosparc/cryosparc_master/config.sh
export CRYOSPARC_DB_PATH="/home/cryosparc_user/software/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_MASTER_HOSTNAME="r16763"
export CRYOSPARC_FORCE_HOSTNAME="r16763"
(base) [cryosparc_user@r16763 ~]$ hostname -f
r16763.ttuhsc.edu
(base) [cryosparc_user@r16763 ~]$ host $(hostname -f)
r16763.ttuhsc.edu has address 66.230.10.201
(base) [cryosparc_user@r16763 ~]$ host r16763
r16763.ttuhsc.edu has address 66.230.10.201
(base) [cryosparc_user@r16763 ~]$ curl https://get.cryosparc.com
{"message":"Missing Authentication Token"}(base) [cryosparc_user@r16763 ~]$ cryosparcm start
Starting CryoSPARC System master process...
CryoSPARC is not already running.
configuring database...
configuration complete
database: started
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Could not get replica set status; please reconfigure the database with `cryosparcm configuredb`
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 270, in check_mongo
admin_db = try_get_pymongo_db(mongo_client)
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
admin_db.command(({'serverStatus': 1}))
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/_csot.py", line 108, in csot_wrapper
return func(self, *args, **kwargs)
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 893, in command
with self.__client._conn_for_reads(read_preference, session, operation=command_name) as (
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1375, in _conn_for_reads
server = self._select_server(read_preference, session, operation)
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1322, in _select_server
server = topology.select_server(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 368, in select_server
server = self._select_server(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 346, in _select_server
servers = self.select_servers(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 253, in select_servers
server_descriptions = self._select_servers_loop(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 303, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: r16763:39001: [Errno 113] No route to host (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 20.0s, Topology Description: <TopologyDescription id: 6724fa1bbf6b352257f3960b, topology_type: Unknown, servers: [<ServerDescription ('r16763', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('r16763:39001: [Errno 113] No route to host (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
[2024-11-01T10:57:22-05:00] Error checking database. Most recent database log lines:
2024-11-01T10:56:09.939-0500 I REPL [replexec-0] Starting replication reporter thread
2024-11-01T10:56:09.940-0500 I REPL [rsSync] transition to SECONDARY from RECOVERING
2024-11-01T10:56:09.940-0500 I REPL [rsSync] conducting a dry run election to see if we could be elected. current term: 165
2024-11-01T10:56:09.940-0500 I REPL [replexec-0] dry election run succeeded, running for election in term 166
2024-11-01T10:56:09.940-0500 I REPL [replexec-0] election succeeded, assuming primary role in term 166
2024-11-01T10:56:09.940-0500 I REPL [replexec-0] transition to PRIMARY from SECONDARY
2024-11-01T10:56:09.940-0500 I REPL [replexec-0] Resetting sync source to empty, which was :27017
2024-11-01T10:56:09.940-0500 I REPL [replexec-0] Entering primary catch-up mode.
2024-11-01T10:56:09.940-0500 I REPL [replexec-0] Exited primary catch-up mode.
2024-11-01T10:56:11.941-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted
(base) [cryosparc_user@r16763 ~]$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryospa+ 322015 3522 10:56:03 python /home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/cryosparc_user/software/cryosparc/cryosparc_master/supervisord.conf
cryospa+ 322150 322015 10:56:08 mongod --auth --dbpath /home/cryosparc_user/software/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --wiredTigerCacheSizeGB 4 --bind_ip_all
cryospa+ 323016 322963 10:58:37 grep --color=auto -e cryosparc_ -e mongo
(base) [cryosparc_user@r16763 ~]$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
srwx------. 1 cryosparc_user cryosparc_user 0 Nov 1 10:56 /tmp/cryosparc-supervisor-206773da3c7c06e952eddaffaea9188d.sock
srwx------. 1 cryosparc_user cryosparc_user 0 Nov 1 10:56 /tmp/mongodb-39001.sock
(base) [cryosparc_user@r16763 ~]$ curl 127.0.0.1:39001
It looks like you are trying to access MongoDB over HTTP on the native driver port.
(base) [cryosparc_user@r16763 ~]$ curl r16763:39001
Thank you so much for your help!
Yes, the “Can cryosparc work at a new network” topic can be closed now. The answer is YES. It works very well!
Lan
The setting CRYOSPARC_FORCE_HOSTNAME="r16763" is invalid. Allowed values for CRYOSPARC_FORCE_HOSTNAME are true or false (guide).
I recommend removing this setting from this particular CryoSPARC installation altogether, as it may mask a problem that needs to be addressed.
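For illustration only, a sketch of what config.sh might look like with that setting removed; the remaining lines are copied unchanged from the file posted above:
# config.sh sketch (assumption: only the CRYOSPARC_FORCE_HOSTNAME line is removed)
export CRYOSPARC_DB_PATH="/home/cryosparc_user/software/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_MASTER_HOSTNAME="r16763"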
What is the output of the command
ip a | grep -B 2 "inet "
?
(base) [cryosparc_user@r16763 ~]$ ip a | grep -B 2 "inet "
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
--
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:b9:ec:7c brd ff:ff:ff:ff:ff:ff
inet 66.230.10.160/25 brd 66.230.10.255 scope global dynamic noprefixroute eno1
--
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:2c:15:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
(base) [cryosparc_user@r16763 ~]$
Hi, I am re-posting what you requested here:
(base) [cryosparc_user@r16763 ~]$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryospa+ 332380 332331 12:50:30 grep --color=auto -e cryosparc_ -e mongo
(base) [cryosparc_user@r16763 ~]$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
ls: cannot access '/tmp/mongo*.sock': No such file or directory
ls: cannot access '/tmp/cryosparc*.sock': No such file or directory
(base) [cryosparc_user@r16763 ~]$ cryosparcm log database | tail -n 60
2024-11-01T12:46:45.173-0500 I STORAGE [initandlisten] The size storer reports that the oplog contains 34306 records totaling to 61770143 bytes
2024-11-01T12:46:45.173-0500 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation
2024-11-01T12:46:45.173-0500 I STORAGE [initandlisten] Sampling from the oplog between Oct 25 20:53:52:1 and Nov 1 12:46:43:3 to determine where to place markers for truncation
2024-11-01T12:46:45.173-0500 I STORAGE [initandlisten] Taking 92 samples and assuming that each section of oplog contains approximately 3728 records totaling to 6712501 bytes
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:09:51:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:24:52:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:41:20:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 21:58:45:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:16:15:3
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:35:31:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 25 22:43:19:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 26 17:29:01:1336
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 27 21:58:53:1
2024-11-01T12:46:45.178-0500 I STORAGE [initandlisten] WiredTiger record store oplog processing took 5ms
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten]
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten]
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2024-11-01T12:46:45.179-0500 I CONTROL [initandlisten]
2024-11-01T12:46:45.187-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/cryosparc_user/software/cryosparc/cryosparc_database/diagnostic.data'
2024-11-01T12:46:45.190-0500 I REPL [initandlisten] Rollback ID is 1
2024-11-01T12:46:45.191-0500 I REPL [initandlisten] No oplog entries to apply for recovery. appliedThrough and checkpointTimestamp are both null.
2024-11-01T12:46:45.191-0500 I NETWORK [initandlisten] listening via socket bound to 0.0.0.0
2024-11-01T12:46:45.191-0500 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2024-11-01T12:46:45.191-0500 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-39001.sock
2024-11-01T12:46:45.191-0500 I NETWORK [initandlisten] waiting for connections on port 39001
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:39001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('62c47eb65c9833fc764f6013') } }
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] This node is localhost:39001 in the config
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] Starting replication storage threads
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] Starting replication fetcher thread
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] Starting replication applier thread
2024-11-01T12:46:45.192-0500 I REPL [replexec-0] Starting replication reporter thread
2024-11-01T12:46:45.193-0500 I REPL [rsSync] transition to SECONDARY from RECOVERING
2024-11-01T12:46:45.193-0500 I REPL [rsSync] conducting a dry run election to see if we could be elected. current term: 169
2024-11-01T12:46:45.193-0500 I REPL [replexec-0] dry election run succeeded, running for election in term 170
2024-11-01T12:46:45.193-0500 I REPL [replexec-0] election succeeded, assuming primary role in term 170
2024-11-01T12:46:45.194-0500 I REPL [replexec-0] transition to PRIMARY from SECONDARY
2024-11-01T12:46:45.194-0500 I REPL [replexec-0] Resetting sync source to empty, which was :27017
2024-11-01T12:46:45.194-0500 I REPL [replexec-0] Entering primary catch-up mode.
2024-11-01T12:46:45.194-0500 I REPL [replexec-0] Exited primary catch-up mode.
2024-11-01T12:46:47.194-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted
2024-11-01T12:48:08.820-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2024-11-01T12:48:08.820-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2024-11-01T12:48:08.820-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2024-11-01T12:48:08.821-0500 I REPL [signalProcessingThread] shutting down replication subsystems
2024-11-01T12:48:08.821-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
2024-11-01T12:48:08.821-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
2024-11-01T12:48:08.821-0500 I REPL [signalProcessingThread] Stopping replication applier thread
2024-11-01T12:48:09.202-0500 I REPL [signalProcessingThread] Stopping replication storage threads
2024-11-01T12:48:09.203-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2024-11-01T12:48:09.206-0500 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down
2024-11-01T12:48:09.206-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2024-11-01T12:48:09.285-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2024-11-01T12:48:09.285-0500 I CONTROL [signalProcessingThread] now exiting
2024-11-01T12:48:09.285-0500 I CONTROL [signalProcessingThread] shutting down with code:0
(base) [cryosparc_user@r16763 ~]$ cryosparcm log command_core | tail -n 40
2024-10-26 18:33:24,270 scheduler_run_core INFO | Licenses currently active : 0
2024-10-26 18:33:24,270 scheduler_run_core INFO | Now trying to schedule J1178
2024-10-26 18:33:24,270 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:24,652 scheduler_run_job WARNING | Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>
2024-10-26 18:33:24,655 scheduler_run_core INFO | Queue status waiting_licenses
2024-10-26 18:33:24,655 scheduler_run_core INFO | Queue message Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>Local license file is expired. Please re-establish your connection to the license servers.
2024-10-26 18:33:24,656 scheduler_run_core INFO | Licenses currently active : 0
2024-10-26 18:33:24,656 scheduler_run_core INFO | Now trying to schedule J66
2024-10-26 18:33:24,656 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:25,037 scheduler_run_job WARNING | Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>
2024-10-26 18:33:25,039 scheduler_run_core INFO | Queue status waiting_licenses
2024-10-26 18:33:25,040 scheduler_run_core INFO | Queue message Error connecting to cryoSPARC license server. Checking local license file. <urlopen error [Errno -2] Name or service not known>Local license file is expired. Please re-establish your connection to the license servers.
2024-10-26 18:33:25,040 scheduler_run_core INFO | Finished
2024-10-26 18:33:30,065 scheduler_run_core INFO | Running...
2024-10-26 18:33:30,066 scheduler_run_core INFO | Jobs Queued: [('P14', 'J1178'), ('P15', 'J66')]
2024-10-26 18:33:30,067 scheduler_run_core INFO | Licenses currently active : 0
2024-10-26 18:33:30,067 scheduler_run_core INFO | Now trying to schedule J1178
2024-10-26 18:33:30,067 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:34,232 scheduler_run_job INFO | Not a commercial instance - heartbeat set to 12 hours.
2024-10-26 18:33:34,614 scheduler_run_job INFO | Launchable! -- Launching.
2024-10-26 18:33:34,617 set_job_status INFO | Status changed for P14.J1178 from queued to launched
2024-10-26 18:33:34,618 app_stats_refresh INFO | Calling app stats refresh url http://r16763:39000/api/actions/stats/refresh_job for project_uid P14, workspace_uid None, job_uid J1178 with body {'projectUid': 'P14', 'jobUid': 'J1178'}
2024-10-26 18:33:36,624 app_stats_refresh WARNING | Failed to call stats refresh endpoint for P14 J1178: HTTPConnectionPool(host='r16763', port=39000): Max retries exceeded with url: /api/actions/stats/refresh_job (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f9f8019bd30>, 'Connection to r16763 timed out. (connect timeout=2)'))
2024-10-26 18:33:36,628 run_job INFO | Running P14 J1178
2024-10-26 18:33:36,628 run_job INFO | Running job using: /home/cryosparc_user/software/cryosparc/cryosparc_worker/bin/cryosparcw
2024-10-26 18:33:36,630 scheduler_run_core INFO | Licenses currently active : 1
2024-10-26 18:33:36,630 scheduler_run_core INFO | Now trying to schedule J66
2024-10-26 18:33:36,631 scheduler_run_job INFO | Scheduling job to r16763
2024-10-26 18:33:38,001 scheduler_run_job INFO | Not a commercial instance - heartbeat set to 12 hours.
2024-10-26 18:33:38,379 scheduler_run_job INFO | Launchable! -- Launching.
2024-10-26 18:33:38,382 set_job_status INFO | Status changed for P15.J66 from queued to launched
2024-10-26 18:33:38,382 app_stats_refresh INFO | Calling app stats refresh url http://r16763:39000/api/actions/stats/refresh_job for project_uid P15, workspace_uid None, job_uid J66 with body {'projectUid': 'P15', 'jobUid': 'J66'}
2024-10-26 18:33:40,387 app_stats_refresh WARNING | Failed to call stats refresh endpoint for P15 J66: HTTPConnectionPool(host='r16763', port=39000): Max retries exceeded with url: /api/actions/stats/refresh_job (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f9f8019bb20>, 'Connection to r16763 timed out. (connect timeout=2)'))
2024-10-26 18:33:40,390 run_job INFO | Running P15 J66
2024-10-26 18:33:40,390 run_job INFO | Running job using: /home/cryosparc_user/software/cryosparc/cryosparc_worker/bin/cryosparcw
2024-10-26 18:33:40,391 scheduler_run_core INFO | Finished
2024-10-26 18:47:37 info INFO | Handling signal: quit
2024-10-26 18:47:37 info INFO | Worker exiting (pid: 4621)
2024-10-26 18:47:37,916 interrupt INFO | === EXITED ===
2024-10-26 18:47:38 info INFO | Shutting down: Master
(base) [cryosparc_user@r16763 ~]$ cryosparcm log supervisord | tail -n 40
2024-10-28 12:11:48,457 INFO RPC interface 'supervisor' initialized
2024-10-28 12:11:48,457 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-28 12:11:48,458 INFO daemonizing the supervisord process
2024-10-28 12:11:48,459 INFO supervisord started with pid 53515
2024-10-28 12:11:53,317 INFO spawned: 'database' with pid 53634
2024-10-28 12:11:55,229 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-28 16:10:49,469 INFO waiting for database to stop
2024-10-28 16:10:50,101 INFO stopped: database (exit status 0)
2024-10-28 16:11:01,775 INFO RPC interface 'supervisor' initialized
2024-10-28 16:11:01,775 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-10-28 16:11:01,776 INFO daemonizing the supervisord process
2024-10-28 16:11:01,777 INFO supervisord started with pid 66822
2024-10-28 16:11:06,637 INFO spawned: 'database' with pid 66933
2024-10-28 16:11:08,554 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-28 16:14:18,485 INFO waiting for database to stop
2024-10-28 16:14:18,675 INFO stopped: database (exit status 0)
2024-11-01 10:56:04,163 INFO RPC interface 'supervisor' initialized
2024-11-01 10:56:04,163 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-11-01 10:56:04,164 INFO daemonizing the supervisord process
2024-11-01 10:56:04,165 INFO supervisord started with pid 322015
2024-11-01 10:56:09,036 INFO spawned: 'database' with pid 322150
2024-11-01 10:56:10,941 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-01 12:44:01,541 INFO waiting for database to stop
2024-11-01 12:44:01,886 INFO stopped: database (exit status 0)
2024-11-01 12:44:04,285 INFO RPC interface 'supervisor' initialized
2024-11-01 12:44:04,285 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-11-01 12:44:04,286 INFO daemonizing the supervisord process
2024-11-01 12:44:04,287 INFO supervisord started with pid 330336
2024-11-01 12:44:09,125 INFO spawned: 'database' with pid 330447
2024-11-01 12:44:11,022 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-01 12:46:32,425 INFO waiting for database to stop
2024-11-01 12:46:33,085 INFO stopped: database (exit status 0)
2024-11-01 12:46:39,520 INFO RPC interface 'supervisor' initialized
2024-11-01 12:46:39,520 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-11-01 12:46:39,521 INFO daemonizing the supervisord process
2024-11-01 12:46:39,522 INFO supervisord started with pid 330953
2024-11-01 12:46:44,313 INFO spawned: 'database' with pid 331064
2024-11-01 12:46:46,195 INFO success: database entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-01 12:48:08,820 INFO waiting for database to stop
2024-11-01 12:48:09,287 INFO stopped: database (exit status 0)
(base) [cryosparc_user@r16763 ~]$ cryosparcm env | grep -v LICENSE_ID
export "CRYOSPARC_INSECURE=false"
export "CRYOSPARC_MONGO_FCV=3.6"
export "CRYOSPARC_LIVE_ENABLED=true"
export "CRYOSPARC_HTTP_LIVEAPP_LEGACY_PORT=39006"
export "CRYOSPARC_MONGO_EXTRA_FLAGS="
export "CRYOSPARC_CLICK_WRAP=true"
export "CRYOSPARC_MONGO_PORT=39001"
export "CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000"
export "CRYOSPARC_PROJECT_DIR_PREFIX=CS-"
export "CRYOSPARC_DB_ENABLE_AUTH_FLAG=--auth"
export "CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000"
export "CRYOSPARC_DB_ENABLE_AUTH=true"
export "CRYOSPARC_MONGO_CACHE_GB=4"
export "CRYOSPARC_ROOT_DIR=/home/cryosparc_user/software/cryosparc/cryosparc_master"
export "CRYOSPARC_PATH=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/external/mongodb/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin"
export "CRYOSPARC_DEVELOP=false"
export "CRYOSPARC_FORCE_HOSTNAME=r16763"
export "CRYOSPARC_CONDA_ENV=cryosparc_master_env"
export "CRYOSPARC_BASE_PORT=39000"
export "CRYOSPARC_FORCE_USER=false"
export "CRYOSPARC_HOSTNAME_CHECK=r16763.ttuhsc.edu"
export "CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10"
export "CRYOSPARC_COMMAND_RTP_PORT=39005"
export "CRYOSPARC_COMMAND_VIS_PORT=39003"
export "CRYOSPARC_DB_PATH=/home/cryosparc_user/software/cryosparc/cryosparc_database"
export "CRYOSPARC_HEARTBEAT_SECONDS=180"
export "CRYOSPARC_COMMAND_CORE_PORT=39002"
export "CRYOSPARC_MASTER_HOSTNAME=r16763.ttuhsc.edu"
export "CRYOSPARC_HTTP_APP_PORT=39000"
export "CRYOSPARC_SUPERVISOR_SOCK_FILE=/tmp/cryosparc-supervisor-206773da3c7c06e952eddaffaea9188d.sock"
export "PATH=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/external/mongodb/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/condabin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/usr/local/relion-4.0/bin:/usr/local/mpich-3.2.1/bin:/usr/local/cuda/bin:/home/cryosparc_user/eman2-sphire-sparx/bin:/home/cryosparc_user/eman2-sphire-sparx/condabin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/software/cryosparc/cryosparc_master/bin:/home/cryosparc_user/.local/bin:/home/cryosparc_user/bin:/usr/local/relion-4.0/bin:/usr/local/mpich-3.2.1/bin:/usr/local/cuda/bin:/usr/local/relion-4.0/bin:/usr/local/mpich-3.2.1/bin:/usr/local/cuda/bin:/usr/local/IMOD/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/usr/local/motioncorr_v2.1/bin:/usr/local/Gctf_v1.18/bin:/usr/local/ResMap:/usr/local/cistem-1.0.0-beta:/usr/local/EMAN_2.91/bin:/usr/local/motioncorr_v2.1/bin:/usr/local/Gctf_v1.18/bin:/usr/local/ResMap:/usr/local/cistem-1.0.0-beta:/usr/local/EMAN_2.91/bin:/usr/local/motioncorr_v2.1/bin:/usr/local/Gctf_v1.18/bin:/usr/local/ResMap:/usr/local/cistem-1.0.0-beta:/usr/local/EMAN_2.91/bin"
export "LD_LIBRARY_PATH=/usr/local/relion-4.0/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda-11.2/lib:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.0/lib:/usr/local/cuda-11.0/lib64:/usr/local/cuda-10.1/lib:/usr/local/cuda-10.1/lib64:/usr/local/relion-4.0/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda-11.2/lib:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.0/lib:/usr/local/cuda-11.0/lib64:/usr/local/cuda-10.1/lib:/usr/local/cuda-10.1/lib64:/usr/local/relion-4.0/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda-11.2/lib:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.0/lib:/usr/local/cuda-11.0/lib64:/usr/local/cuda-10.1/lib:/usr/local/cuda-10.1/lib64:"
export "LD_PRELOAD=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/libpython3.10.so"
export "PYTHONPATH=/home/cryosparc_user/software/cryosparc/cryosparc_master"
export "PYTHONNOUSERSITE=true"
export "CONDA_SHLVL=1"
export "CONDA_EXE=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/bin/conda"
export "CONDA_PREFIX=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env"
export "CONDA_PYTHON_EXE=/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/bin/python"
export "CONDA_PROMPT_MODIFIER=(cryosparc_master_env)"
export "CONDA_DEFAULT_ENV=cryosparc_master_env"
(base) [cryosparc_user@r16763 ~]$
The address 66.230.10.160 shown for eno1 above does not match the IP address (66.230.10.201) to which your hostname resolves.
You may want to request from your IT support the “permanent” assignment of a hostname to this computer. They may implement this in various ways, but it is important that any request to the assigned hostname is directed to this computer (where your CryoSPARC master instance runs).
Following the assignment, the file /home/cryosparc_user/software/cryosparc/cryosparc_master/config.sh
may have to be modified.
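For illustration only, a check that could be run once the assignment is in place; the interface name eno1 comes from the ip a output above, and the hostname used in the example is only an assumption about what IT will assign:
# the address reported by DNS and the address on the interface should then agree
host $(hostname -f)
ip -4 addr show eno1 | grep "inet "
# config.sh could then pin the master hostname explicitly, for example:
# export CRYOSPARC_MASTER_HOSTNAME="r16763.ttuhsc.edu"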
(base) [cryosparc_user@r16763 ~]$ grep -v LICENSE_ID /home/cryosparc_user/software/cryosparc/cryosparc_master/config.sh
export CRYOSPARC_DB_PATH="/home/cryosparc_user/software/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_FORCE_HOSTNAME="r16763"
(base) [cryosparc_user@r16763 ~]$ hostname -f
r16763.ttuhsc.edu
(base) [cryosparc_user@r16763 ~]$ host $(hostname -f)
r16763.ttuhsc.edu has address 66.230.10.201
(base) [cryosparc_user@r16763 ~]$ host r16763
r16763.ttuhsc.edu has address 66.230.10.201
(base) [cryosparc_user@r16763 ~]$ curl https://get.cryosparc.com
{"message":"Missing Authentication Token"}(base) [cryosparc_user@r16763 ~]$ cryosparcm start
Starting CryoSPARC System master process...
CryoSPARC is not already running.
configuring database...
configuration complete
database: started
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Could not get replica set status; please reconfigure the database with `cryosparcm configuredb`
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 270, in check_mongo
admin_db = try_get_pymongo_db(mongo_client)
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
admin_db.command(({'serverStatus': 1}))
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/_csot.py", line 108, in csot_wrapper
return func(self, *args, **kwargs)
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/database.py", line 893, in command
with self.__client._conn_for_reads(read_preference, session, operation=command_name) as (
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1375, in _conn_for_reads
server = self._select_server(read_preference, session, operation)
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1322, in _select_server
server = topology.select_server(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 368, in select_server
server = self._select_server(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 346, in _select_server
servers = self.select_servers(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 253, in select_servers
server_descriptions = self._select_servers_loop(
File "/home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/pymongo/topology.py", line 303, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: r16763.ttuhsc.edu:39001: [Errno 113] No route to host (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 20.0s, Topology Description: <TopologyDescription id: 67251584410e80398221af1b, topology_type: Unknown, servers: [<ServerDescription ('r16763.ttuhsc.edu', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('r16763.ttuhsc.edu:39001: [Errno 113] No route to host (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
[2024-11-01T12:54:19-05:00] Error checking database. Most recent database log lines:
2024-11-01T12:53:07.064-0500 I REPL [replexec-0] dry election run succeeded, running for election in term 172
2024-11-01T12:53:07.064-0500 I REPL [replexec-1] election succeeded, assuming primary role in term 172
2024-11-01T12:53:07.064-0500 I REPL [replexec-1] transition to PRIMARY from SECONDARY
2024-11-01T12:53:07.064-0500 I REPL [replexec-1] Resetting sync source to empty, which was :27017
2024-11-01T12:53:07.064-0500 I REPL [replexec-1] Entering primary catch-up mode.
2024-11-01T12:53:07.064-0500 I REPL [replexec-1] Exited primary catch-up mode.
2024-11-01T12:53:09.065-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted
2024-11-01T12:54:17.472-0500 I NETWORK [listener] connection accepted from 127.0.0.1:49762 #1 (1 connection now open)
2024-11-01T12:54:17.472-0500 I NETWORK [conn1] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49762 (connection id: 1)
2024-11-01T12:54:17.472-0500 I NETWORK [conn1] end connection 127.0.0.1:49762 (0 connections now open)
(base) [cryosparc_user@r16763 ~]$
(base) [cryosparc_user@r16763 ~]$
(base) [cryosparc_user@r16763 ~]$
(base) [cryosparc_user@r16763 ~]$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryospa+ 333172 332964 12:52:59 bash /home/cryosparc_user/software/cryosparc/cryosparc_master/bin/cryosparcm start
cryospa+ 333206 3522 12:53:00 python /home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/cryosparc_user/software/cryosparc/cryosparc_master/supervisord.conf
cryospa+ 333317 333206 12:53:05 mongod --auth --dbpath /home/cryosparc_user/software/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --wiredTigerCacheSizeGB 4 --bind_ip_all
cryospa+ 333391 333172 12:53:07 bash /home/cryosparc_user/software/cryosparc/cryosparc_master/bin/cryosparcm checkdb
cryospa+ 333416 333391 12:53:07 python -c from cryosparc_compute import database_management; database_management.check_mongo()
cryospa+ 333836 333548 12:54:16 grep --color=auto -e cryosparc_ -e mongo
(base) [cryosparc_user@r16763 ~]$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
srwx------. 1 cryosparc_user cryosparc_user 0 Nov 1 12:53 /tmp/cryosparc-supervisor-206773da3c7c06e952eddaffaea9188d.sock
srwx------. 1 cryosparc_user cryosparc_user 0 Nov 1 12:53 /tmp/mongodb-39001.sock
(base) [cryosparc_user@r16763 ~]$ curl 127.0.0.1:39001
It looks like you are trying to access MongoDB over HTTP on the native driver port.
(base) [cryosparc_user@r16763 ~]$ curl r16763:39001
(base) [cryosparc_user@r16763 ~]$
(base) [cryosparc_user@r16763 ~]$
(base) [cryosparc_user@r16763 ~]$ ps -eo user,pid,ppid,start,command | grep -e cryosparc_ -e mongo
cryospa+ 333172 332964 12:52:59 bash /home/cryosparc_user/software/cryosparc/cryosparc_master/bin/cryosparcm start
cryospa+ 333206 3522 12:53:00 python /home/cryosparc_user/software/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /home/cryosparc_user/software/cryosparc/cryosparc_master/supervisord.conf
cryospa+ 333317 333206 12:53:05 mongod --auth --dbpath /home/cryosparc_user/software/cryosparc/cryosparc_database --port 39001 --oplogSize 64 --replSet meteor --wiredTigerCacheSizeGB 4 --bind_ip_all
cryospa+ 333391 333172 12:53:07 bash /home/cryosparc_user/software/cryosparc/cryosparc_master/bin/cryosparcm checkdb
cryospa+ 333416 333391 12:53:07 python -c from cryosparc_compute import database_management; database_management.check_mongo()
cryospa+ 333836 333548 12:54:16 grep --color=auto -e cryosparc_ -e mongo
(base) [cryosparc_user@r16763 ~]$ ls -l /tmp/mongo*.sock /tmp/cryosparc*.sock
srwx------. 1 cryosparc_user cryosparc_user 0 Nov 1 12:53 /tmp/cryosparc-supervisor-206773da3c7c06e952eddaffaea9188d.sock
srwx------. 1 cryosparc_user cryosparc_user 0 Nov 1 12:53 /tmp/mongodb-39001.sock
(base) [cryosparc_user@r16763 ~]$ curl 127.0.0.1:39001
It looks like you are trying to access MongoDB over HTTP on the native driver port.
(base) [cryosparc_user@r16763 ~]$ curl r16763:39001
curl: (7) Failed to connect to r16763 port 39001: No route to host
(base) [cryosparc_user@r16763 ~]$ ip a | grep -B 2 "inet "
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
--
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:b9:ec:7c brd ff:ff:ff:ff:ff:ff
inet 66.230.10.160/25 brd 66.230.10.255 scope global dynamic noprefixroute eno1
--
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:2c:15:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
(base) [cryosparc_user@r16763 ~]$
Hi, I changed the ethernet connection and redid everything you requested. I am sorry it is messy. Do you see any changes?
I will contact IT about a fixed IP address. Thank you very much! Lan
This would be one way to do it, if your IT department agrees. You may also want to ask for their help in configuring your computer with the correct, corresponding hostname.
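For illustration, and only if your IT department agrees, setting the system hostname on Rocky Linux could look like the sketch below; r16763.ttuhsc.edu is assumed here to remain the assigned name:
sudo hostnamectl set-hostname r16763.ttuhsc.edu   # set the static hostname to the assigned FQDN
hostnamectl status                                # verify the change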
Thanks. We used to have a fixed IP address for this CryoSPARC machine. Our institution had IT problems in the past weeks, which may have changed things. It should not be a problem to get it fixed. Thank you very much, and I will keep you informed. Lan
Dear wtempel:
The dynamic IP address issue has been fixed and CryoSPARC works now! My question returns to the original one:
"
From other machines on the network, access CryoSPARC and CryoSPARC Live at
http://r16763.ttuhsc.edu:39000
"
I could access the GUI from a Mac computer previously, but now I am not able to do so.
Do you have any suggestions on which settings we should change?
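(For reference, a quick connectivity check from the Mac's terminal might look like the lines below, using the hostname and port quoted above; whether the cause is DNS or a blocked port is only a guess.)
host r16763.ttuhsc.edu                    # does the Mac resolve the master's hostname?
curl -I http://r16763.ttuhsc.edu:39000    # does the CryoSPARC web port respond?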
All the best,
Lan
Hi wtempel!
The Mac access to the CryoSPARC GUI works now, for unknown reasons.
We can close this topic now.
Thanks, Lan