Mongo/Webapp error - Webapp continuously restarts upon startup

I have been attempting to install cryoSPARC on a new CentOS 7 system. I have tried a system-wide installation of node, mongo, and conda, as well as the local versions, but every time I run into the following errors upon "cryosparc start". I've followed the "CentOS SELinux configuration for cryoSPARC" thread: SELinux is disabled and port 38000 is open in the firewall. I've also tried disabling the firewall altogether, and I'm not sure what else to try. Thank you for your help.
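
For reference, these are roughly the checks I used to confirm the SELinux and firewall state (this assumes firewalld is managing the firewall, as on a stock CentOS 7 install):

# Should print "Disabled" (or at least "Permissive")
getenforce

# List the ports currently open in the default zone
sudo firewall-cmd --list-ports

# If 38000/tcp is not listed, open it and reload
sudo firewall-cmd --permanent --add-port=38000/tcp
sudo firewall-cmd --reload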

Part of supervisor.log; the message repeats with a different pid each time:

2018-01-11 10:02:09,631 INFO spawned: 'webapp' with pid 42702
2018-01-11 10:02:10,643 INFO success: webapp entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-11 10:02:11,372 INFO exited: webapp (exit status 1; not expected)
2018-01-11 10:02:12,376 INFO spawned: 'webapp' with pid 42713
2018-01-11 10:02:13,378 INFO success: webapp entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-11 10:02:14,132 INFO exited: webapp (exit status 1; not expected)

From meteor.stdout, the following error repeats:

/home/spuser/cryosparc/cryosparc-webapp/bundle/programs/server/node_modules/fibers/future.js:313
throw(ex);
^

MongoError: failed to connect to server [localhost:38001] on first connect
at Object.Future.wait (/home/spuser/cryosparc/cryosparc-webapp/bundle/programs/server/node_modules/fibers/future.js:449:15)
at new MongoConnection (packages/mongo/mongo_driver.js:219:27)
at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:10)
at Object.defaultRemoteCollectionDriver (packages/underscore.js:784:19)
at new Mongo.Collection (packages/mongo/collection.js:103:40)
at AccountsServer.AccountsCommon (packages/accounts-base/accounts_common.js:23:18)
at new AccountsServer (packages/accounts-base/accounts_server.js:18:5)
at meteorInstall.node_modules.meteor.accounts-base.server_main.js (packages/accounts-base/server_main.js:9:12)
at fileEvaluate (packages/modules-runtime.js:188:9)

at [object Object].<anonymous> (/home/spuser/cryosparc/cryosparc-webapp/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/topologies/server.js:325:35)
at emitOne (events.js:77:13)
at [object Object].emit (events.js:169:7)
at [object Object].<anonymous> (/home/spuser/cryosparc/cryosparc-webapp/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:270:12)
at [object Object].g (events.js:260:16)
at emitTwo (events.js:87:13)
at [object Object].emit (events.js:172:7)
at Socket.<anonymous> (/home/spuser/cryosparc/cryosparc-webapp/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:173:49)
at Socket.g (events.js:260:16)
at emitOne (events.js:77:13)
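
(To rule out the port itself, one quick sanity check is to probe mongod directly on 38001. This assumes nmap-ncat is installed and that a mongo shell is on the PATH; adjust the path if you are using the cryoSPARC-local copy:)

# Is anything listening on the database port?
nc -zv localhost 38001

# Can the mongo shell actually talk to it?
mongo --port 38001 --eval 'db.runCommand({ ping: 1 })'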

And from mongod.stdout:

2018-01-11T09:44:08.581-0800 I CONTROL [initandlisten] MongoDB starting : pid=38592 port=38001 dbpath=/home/spuser/cryosparc/run/db 64-bit host=spgpu
2018-01-11T09:44:08.581-0800 I CONTROL [initandlisten] db version v3.2.9
2018-01-11T09:44:08.581-0800 I CONTROL [initandlisten] git version: 22ec9e93b40c85fc7cae7d56e7d6a02fd811088c
2018-01-11T09:44:08.581-0800 I CONTROL [initandlisten] allocator: tcmalloc
2018-01-11T09:44:08.581-0800 I CONTROL [initandlisten] modules: none
2018-01-11T09:44:08.582-0800 I CONTROL [initandlisten] build environment:
2018-01-11T09:44:08.582-0800 I CONTROL [initandlisten] distarch: x86_64
2018-01-11T09:44:08.582-0800 I CONTROL [initandlisten] target_arch: x86_64
2018-01-11T09:44:08.582-0800 I CONTROL [initandlisten] options: { net: { port: 38001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/home/spuser/cryosparc/run/db", journal: { enabled: false } } }
2018-01-11T09:44:08.632-0800 I - [initandlisten] Detected data files in /home/spuser/cryosparc/run/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-01-11T09:44:08.633-0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=74G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),log=(enabled=false),
2018-01-11T09:44:08.660-0800 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2018-01-11T09:44:08.660-0800 I STORAGE [initandlisten] The size storer reports that the oplog contains 17 records totaling to 1590 bytes
2018-01-11T09:44:08.660-0800 I STORAGE [initandlisten] Scanning the oplog to determine where to place markers for truncation
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten]
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten]
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-01-11T09:44:08.677-0800 I CONTROL [initandlisten]
2018-01-11T09:44:08.681-0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/spuser/cryosparc/run/db/diagnostic.data'
2018-01-11T09:44:08.681-0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2018-01-11T09:44:08.682-0800 I REPL [ReplicationExecutor] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:38001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5a56ab8a2ea9287eb050846e') } }
2018-01-11T09:44:08.682-0800 I REPL [ReplicationExecutor] This node is localhost:38001 in the config
2018-01-11T09:44:08.682-0800 I REPL [ReplicationExecutor] transition to STARTUP2
2018-01-11T09:44:08.682-0800 I REPL [ReplicationExecutor] Starting replication applier threads
2018-01-11T09:44:08.683-0800 I NETWORK [initandlisten] waiting for connections on port 38001
2018-01-11T09:44:08.683-0800 I REPL [ReplicationExecutor] transition to RECOVERING
2018-01-11T09:44:08.684-0800 I REPL [ReplicationExecutor] transition to SECONDARY
2018-01-11T09:44:08.684-0800 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected
2018-01-11T09:44:08.684-0800 I REPL [ReplicationExecutor] dry election run succeeded, running for election
2018-01-11T09:44:08.685-0800 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 17
2018-01-11T09:44:08.685-0800 I REPL [ReplicationExecutor] transition to PRIMARY
2018-01-11T09:44:09.703-0800 I REPL [rsSync] transition to primary complete; database writes are now permitted
2018-01-11T09:44:09.857-0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52942 #1 (1 connection now open)
2018-01-11T09:44:09.861-0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52944 #2 (2 connections now open)
2018-01-11T09:44:10.404-0800 I NETWORK [conn1] end connection 127.0.0.1:52942 (1 connection now open)
2018-01-11T09:44:10.404-0800 I NETWORK [conn2] end connection 127.0.0.1:52944 (1 connection now open)
2018-01-11T09:44:10.507-0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52946 #3 (1 connection now open)
2018-01-11T09:44:10.509-0800 I REPL [conn3] replSetInitiate admin command received from client
2018-01-11T09:44:10.512-0800 I NETWORK [conn3] end connection 127.0.0.1:52946 (0 connections now open)
2018-01-11T09:44:10.614-0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52948 #4 (1 connection now open)
2018-01-11T09:44:10.616-0800 I COMMAND [conn4] CMD: drop meteor.cache_lock
2018-01-11T09:44:10.618-0800 I NETWORK [conn4] end connection 127.0.0.1:52948 (0 connections now open)
2018-01-11T09:44:16.082-0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52954 #5 (1 connection now open)
2018-01-11T09:44:16.086-0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52956 #6 (2 connections now open)
2018-01-11T09:44:16.711-0800 I NETWORK [conn6] end connection 127.0.0.1:52956 (1 connection now open)
2018-01-11T09:44:16.711-0800 I NETWORK [conn5] end connection 127.0.0.1:52954 (1 connection now open)
2018-01-11T09:46:14.178-0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-01-11T09:46:14.178-0800 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-01-11T09:46:14.182-0800 I REPL [signalProcessingThread] Stopping replication applier threads
2018-01-11T09:46:15.765-0800 I CONTROL [signalProcessingThread] now exiting
2018-01-11T09:46:15.765-0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2018-01-11T09:46:15.765-0800 I NETWORK [signalProcessingThread] closing listening socket: 5
2018-01-11T09:46:15.765-0800 I NETWORK [signalProcessingThread] closing listening socket: 6
2018-01-11T09:46:15.765-0800 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-38001.sock
2018-01-11T09:46:15.766-0800 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2018-01-11T09:46:15.766-0800 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
2018-01-11T09:46:15.766-0800 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-01-11T09:46:15.774-0800 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2018-01-11T09:46:15.775-0800 I CONTROL [signalProcessingThread] dbexit: rc: 0
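
As an aside, the two transparent_hugepage warnings in the mongod log above are unrelated to the crash, but they can be addressed by doing what the log suggests (run as root; note this does not persist across reboots unless you add it to a boot script or systemd unit):

# Disable transparent huge pages, as mongod recommends
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag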

Solved. The issue was with the /etc/hosts file: somehow it was missing a proper IPv4 entry for localhost, which was likely the cause of the "MongoError: failed to connect to server [localhost:38001] on first connect" error above.
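
For anyone else who hits this: check that /etc/hosts has the standard IPv4 loopback entry. On a stock CentOS 7 system it should look something like this (the hostname aliases may differ on your machine):

# /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

You can verify that localhost resolves to an IPv4 address with:

getent ahostsv4 localhost

Once the entry was restored, the webapp came up and stayed up.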

Best,

Andrey