CryoSPARC 2.15 install halted on CentOS 7.8

I tried to follow the instructions for a standalone installation on my Linux machine running CentOS 7.8.
In the middle of the installation process, the command_core log shows the following error messages:
% cryosparcm log command_core
ServerSelectionTimeoutError: [hostname]:39001: timed out
****** Scheduler Failed ****
****** Heartbeat check failed ****

I opened ports 39000-39010 using firewall-cmd.
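
For reference, these are roughly the commands I used (assuming the default public zone; adjust if your setup differs):

# open the cryoSPARC port range persistently, then reload the firewall
sudo firewall-cmd --zone=public --permanent --add-port=39000-39010/tcp
sudo firewall-cmd --reload
# confirm the range is now listed
sudo firewall-cmd --zone=public --list-ports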

Hi @sangho, this error indicates that cryoSPARC started but could not connect to the database. Can you also send the database logs from this command?

cryosparcm log database
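
In the meantime, a quick sanity check is to see whether anything is listening on the database port at all, and whether it is reachable via your hostname (standard Linux tools, not part of cryoSPARC):

# look for a listener on the MongoDB port cryoSPARC uses
ss -ltnp | grep 39001
# test the connection via localhost and via the hostname the error mentions
nc -zv localhost 39001
nc -zv $(hostname) 39001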

Hi @nfrasser, thanks for your comment. Below is the database log (I trimmed out the timestamp columns for clarity):

I CONTROL  [initandlisten] MongoDB starting : pid=21651 port=39001 dbpath=/home/sangho/cryosparc/cryosparc2_database 64-bit host=sangho.skku.edu
I CONTROL  [initandlisten] db version v3.4.10-4-g67ee356c6b
I CONTROL  [initandlisten] git version: 67ee356c6be377cda547d16423daef5beb4e8377
I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2p  14 Aug 2018
I CONTROL  [initandlisten] allocator: tcmalloc
I CONTROL  [initandlisten] modules: none
I CONTROL  [initandlisten] build environment:
I CONTROL  [initandlisten]     distarch: x86_64
I CONTROL  [initandlisten]     target_arch: x86_64
I CONTROL  [initandlisten] options: { net: { port: 39001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/home/sangho/cryosparc/cryosparc2_datab
se }, wiredTiger: { engineConfig: { cacheSizeGB: 4.0 } } } }
I STORAGE  [initandlisten] 
I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4096M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=
th=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
I CONTROL  [initandlisten] 
I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
I CONTROL  [initandlisten] 
I CONTROL  [initandlisten] 
I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
I CONTROL  [initandlisten] **        We suggest setting it to 'never'
I CONTROL  [initandlisten] 
I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
I CONTROL  [initandlisten] **        We suggest setting it to 'never'
I CONTROL  [initandlisten] 
I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/home/sangho/cryosparc/cryosparc2_database/diagnostic.data'
I REPL     [initandlisten] Did not find local voted for document at startup.
I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.s

I NETWORK  [thread1] waiting for connections on port 39001
I NETWORK  [thread1] connection accepted from 127.0.0.1:46322 #1 (1 connection now open)
I NETWORK  [conn1] received client metadata from 127.0.0.1:46322 conn1: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.8.2003 C
 version: "3.10.0-1127.13.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
I NETWORK  [thread1] connection accepted from 127.0.0.1:46324 #2 (2 connections now open)
I NETWORK  [conn2] received client metadata from 127.0.0.1:46324 conn2: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.8.2003 C
 version: "3.10.0-1127.13.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
I -        [conn2] end connection 127.0.0.1:46324 (2 connections now open)
I -        [conn1] end connection 127.0.0.1:46322 (2 connections now open)
I NETWORK  [thread1] connection accepted from 127.0.0.1:46326 #3 (1 connection now open)
I NETWORK  [conn3] received client metadata from 127.0.0.1:46326 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.1
e: "Linux", name: "CentOS Linux release 7.8.2003 (Core)", architecture: "x86_64", version: "Kernel 3.10.0-1127.13.1.el7.x86_64" } }
I REPL     [conn3] replSetInitiate admin command received from client
I REPL     [conn3] replSetInitiate config object with 1 members parses ok
I REPL     [conn3] ******
I REPL     [conn3] creating replication oplog of size: 64MB...
I STORAGE  [conn3] Starting WiredTigerRecordStoreThread local.oplog.rs
I STORAGE  [conn3] The size storer reports that the oplog contains 0 records totaling to 0 bytes
I STORAGE  [conn3] Scanning the oplog to determine where to place markers for truncation
I REPL     [conn3] ******
I INDEX    [conn3] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
I COMMAND  [conn3] setting featureCompatibilityVersion to 3.4
I REPL     [conn3] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:39001", arbiterOnly: false, buildIn
riority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 1000
0, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5f1648a6dccea7342692126d') } }
I REPL     [conn3] This node is localhost:39001 in the config
I REPL     [conn3] transition to STARTUP2
I REPL     [conn3] Starting replication storage threads
I REPL     [conn3] Starting replication fetcher thread
I REPL     [conn3] Starting replication applier thread
I REPL     [conn3] Starting replication reporter thread
I REPL     [rsSync] transition to RECOVERING
I COMMAND  [conn3] command local.replset.minvalid appName: "MongoDB Shell" command: replSetInitiate { v: 2, key: { version: 1 }, ns: "admin.system.version", name: "incompat
elds:0 reslen:22 locks:{ Global: { acquireCount: { r: 14, w: 8, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 70 } }, Database: { acquireCount: { r: 2, w: 2
uireCount: { r: 2, w: 2 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 1807ms
I REPL     [rsSync] transition to SECONDARY
I REPL     [rsSync] conducting a dry run election to see if we could be elected
I REPL     [ReplicationExecutor] dry election run succeeded, running for election
I -        [conn3] end connection 127.0.0.1:46326 (1 connection now open)
I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
I REPL     [ReplicationExecutor] transition to PRIMARY
I REPL     [ReplicationExecutor] Entering primary catch-up mode.
I REPL     [ReplicationExecutor] Exited primary catch-up mode.
I REPL     [rsSync] transition to primary complete; database writes are now permitted

Hi @sangho, apologies for the delay. Can you also send me the output of this command?

cryosparcm status

Important: Please delete the line that begins with export CRYOSPARC_LICENSE_ID= before you post.
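
One way to capture the output with that line already stripped (grep is a standard tool, nothing cryoSPARC-specific):

cryosparcm status | grep -v CRYOSPARC_LICENSE_ID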

Hi @nfrasser, my apologies for the delay. Thanks to your help and to some local colleagues, I was able to resolve the issue. The culprit was that my hostname was not mapped to the machine's assigned IP address: my institution has a policy of placing all computers that serve web content behind a web firewall, so my hostname resolved to a different IP than the one actually assigned to the machine. The security team at my institution provided me with an additional hostname that resolves to the machine's original IP, and then everything worked!
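
In case it helps anyone else with the same symptom, these are roughly the checks that exposed the mismatch on my machine (standard Linux tools; hostnames and addresses will of course differ):

# the fully qualified hostname the system reports
hostname -f
# the IP address that hostname resolves to
getent hosts $(hostname -f)
# the address(es) actually assigned to this machine
hostname -I

If the address returned by getent does not match one of the addresses reported by hostname -I, the hostname is resolving to somewhere else (in my case, to the web firewall).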

Thank you again for your kindness in helping me resolve this issue.

Hi @sangho, no problem at all, I'm glad you got everything working!