cryoSPARC not starting?

Hi,

I started cryoSPARC by running cryosparcm start. However, the cryoSPARC GUI does not open in the web browser; startup just gets stuck at the point shown in the attached screenshot. What could be the cause?

[Screenshot from 2019-04-11 22:08:03 attached]

Hi @haaris.safdari,

Can you provide us with a few logs? You can get these by running:

  1. cryosparcm stop
  2. (in another shell) cryosparcm log command_core
  3. (in another shell) cryosparcm log database
  4. cryosparcm start

Please copy and paste the two logs here (and censor any sensitive information, like your LICENSE_ID); a rough capture sketch follows.
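If it helps, here is a minimal capture sketch, assuming cryosparcm log simply tails the log to stdout (as the "in another shell" steps above imply), so each capture needs its own shell and a Ctrl-C once startup has finished:

```
# shell 1: stop cryoSPARC
cryosparcm stop

# shell 2: follow command_core and keep a copy to paste here
cryosparcm log command_core | tee command_core.txt

# shell 3: follow the database log and keep a copy
cryosparcm log database | tee database.txt

# back in shell 1: start cryoSPARC, then Ctrl-C the two tails above
cryosparcm start
```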

**cryosparcm log command_core**

  File "cryosparc2_command/command_core/__init__.py", line 637, in get_scheduler_lanes
    return get_config_var('lanes', 'sched_config', fail_notset=False, default=[])
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 146, in background_worker
    scheduler_run_core() # sets last run time
  File "cryosparc2_command/command_core/__init__.py", line 1160, in scheduler_run_core
    lanes = get_scheduler_lanes()
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 151, in background_worker
    check_heartbeats()
  File "cryosparc2_command/command_core/__init__.py", line 1318, in check_heartbeats
    'heartbeat_at' : {'$lt' : deadline} }, {'project_uid' : 1, 'uid' : 1}))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
    **kwargs)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 888, in _send_message_with_response
    server = topology.select_server(selector)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
****** Scheduler Failed **** 
****** Heartbeat check failed ****
[JSONRPC ERROR  2019-04-12 09:35:21.605822  at  get_config_var ]
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 106, in wrapper
    res = func(*args, **kwargs)
  File "cryosparc2_command/command_core/__init__.py", line 317, in get_config_var
    res = mongo.db[colname].find_one({'name' : name})
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/collection.py", line 1102, in find_one
    for result in cursor.limit(-1):
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
    **kwargs)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 888, in _send_message_with_response
    server = topology.select_server(selector)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
[JSONRPC ERROR  2019-04-12 09:35:21.606170  at  get_scheduler_lanes ]
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 106, in wrapper
    res = func(*args, **kwargs)
  File "cryosparc2_command/command_core/__init__.py", line 637, in get_scheduler_lanes
    return get_config_var('lanes', 'sched_config', fail_notset=False, default=[])
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 146, in background_worker
    scheduler_run_core() # sets last run time
  File "cryosparc2_command/command_core/__init__.py", line 1160, in scheduler_run_core
    lanes = get_scheduler_lanes()
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 151, in background_worker
    check_heartbeats()
  File "cryosparc2_command/command_core/__init__.py", line 1318, in check_heartbeats
    'heartbeat_at' : {'$lt' : deadline} }, {'project_uid' : 1, 'uid' : 1}))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
    **kwargs)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 888, in _send_message_with_response
    server = topology.select_server(selector)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
****** Scheduler Failed **** 
****** Heartbeat check failed ****
[JSONRPC ERROR  2019-04-12 09:36:22.856175  at  get_config_var ]
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 106, in wrapper
    res = func(*args, **kwargs)
  File "cryosparc2_command/command_core/__init__.py", line 317, in get_config_var
    res = mongo.db[colname].find_one({'name' : name})
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/collection.py", line 1102, in find_one
    for result in cursor.limit(-1):
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
    **kwargs)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 888, in _send_message_with_response
    server = topology.select_server(selector)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
[JSONRPC ERROR  2019-04-12 09:36:22.856537  at  get_scheduler_lanes ]
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 106, in wrapper
    res = func(*args, **kwargs)
  File "cryosparc2_command/command_core/__init__.py", line 637, in get_scheduler_lanes
    return get_config_var('lanes', 'sched_config', fail_notset=False, default=[])
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 146, in background_worker
    scheduler_run_core() # sets last run time
  File "cryosparc2_command/command_core/__init__.py", line 1160, in scheduler_run_core
    lanes = get_scheduler_lanes()
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 151, in background_worker
    check_heartbeats()
  File "cryosparc2_command/command_core/__init__.py", line 1318, in check_heartbeats
    'heartbeat_at' : {'$lt' : deadline} }, {'project_uid' : 1, 'uid' : 1}))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
    **kwargs)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 888, in _send_message_with_response
    server = topology.select_server(selector)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
****** Scheduler Failed **** 
****** Heartbeat check failed ****
[JSONRPC ERROR  2019-04-12 09:37:24.108390  at  get_config_var ]
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 106, in wrapper
    res = func(*args, **kwargs)
  File "cryosparc2_command/command_core/__init__.py", line 317, in get_config_var
    res = mongo.db[colname].find_one({'name' : name})
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/collection.py", line 1102, in find_one
    for result in cursor.limit(-1):
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
    **kwargs)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 888, in _send_message_with_response
    server = topology.select_server(selector)
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/data1/cryosparc2_haaris/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
[JSONRPC ERROR  2019-04-12 09:37:24.108775  at  get_scheduler_lanes ]
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 106, in wrapper
    res = func(*args, **kwargs)
  File "cryosparc2_command/command_core/__init__.py", line 637, in get_scheduler_lanes
    return get_config_var('lanes', 'sched_config', fail_notset=False, default=[])
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known
-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 146, in background_worker
    scheduler_run_core() # sets last run time
  File "cryosparc2_command/command_core/__init__.py", line 1160, in scheduler_run_core
    lanes = get_scheduler_lanes()
  File "cryosparc2_command/command_core/__init__.py", line 115, in wrapper
    raise e
ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known

Hi @haaris.safdari,

Is this the first time you’re starting cryoSPARC?
It seems the cryoSPARC command module is trying to connect to the MongoDB database, but it can't find it.
Can you post the output of cryosparcm log database?
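Not an official step, just a thought in the meantime: the tracebacks above all end in `ServerSelectionTimeoutError: duttacryolab01:39001: [Errno -2] Name or service not known`, which usually means that hostname does not resolve on this machine. Assuming standard Linux tools, a quick check would be:

```
# does the hostname from the error resolve locally?
getent hosts duttacryolab01

# if nothing is printed, map it to this machine's IP (or 127.0.0.1 on a
# single-workstation setup) in /etc/hosts, e.g.:
#   127.0.0.1   duttacryolab01
```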

Yes, it is the first time.

**cryosparcm log database**
2019-04-11T21:54:14.405+0530 I REPL     [replExecDBWorker-2] Starting replication applier thread
2019-04-11T21:54:14.406+0530 I REPL     [replExecDBWorker-2] Starting replication reporter thread
2019-04-11T21:54:14.406+0530 I REPL     [rsSync] transition to RECOVERING
2019-04-11T21:54:14.407+0530 I REPL     [rsSync] transition to SECONDARY
2019-04-11T21:54:14.407+0530 I REPL     [rsSync] conducting a dry run election to see if we could be elected
2019-04-11T21:54:14.407+0530 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
2019-04-11T21:54:14.415+0530 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 9
2019-04-11T21:54:14.415+0530 I REPL     [ReplicationExecutor] transition to PRIMARY
2019-04-11T21:54:14.415+0530 I REPL     [ReplicationExecutor] Entering primary catch-up mode.
2019-04-11T21:54:14.415+0530 I REPL     [ReplicationExecutor] Exited primary catch-up mode.
2019-04-11T21:54:15.544+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44690 #1 (1 connection now open)
2019-04-11T21:54:15.544+0530 I NETWORK  [conn1] received client metadata from 127.0.0.1:44690 conn1: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.6.1810 Core", architecture: "x86_64", version: "3.10.0-957.10.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
2019-04-11T21:54:15.547+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44692 #2 (2 connections now open)
2019-04-11T21:54:15.547+0530 I NETWORK  [conn2] received client metadata from 127.0.0.1:44692 conn2: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.6.1810 Core", architecture: "x86_64", version: "3.10.0-957.10.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
2019-04-11T21:54:16.071+0530 I -        [conn2] end connection 127.0.0.1:44692 (2 connections now open)
2019-04-11T21:54:16.071+0530 I -        [conn1] end connection 127.0.0.1:44690 (2 connections now open)
2019-04-11T21:54:16.167+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44694 #3 (1 connection now open)
2019-04-11T21:54:16.167+0530 I NETWORK  [conn3] received client metadata from 127.0.0.1:44694 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.10-4-g67ee356c6b" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.10.1.el7.x86_64" } }
2019-04-11T21:54:16.170+0530 I REPL     [conn3] replSetInitiate admin command received from client
2019-04-11T21:54:16.174+0530 I -        [conn3] end connection 127.0.0.1:44694 (1 connection now open)
2019-04-11T21:54:16.408+0530 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2019-04-11T21:59:31.599+0530 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-04-11T21:59:31.599+0530 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2019-04-11T21:59:31.599+0530 I NETWORK  [signalProcessingThread] closing listening socket: 6
2019-04-11T21:59:31.599+0530 I NETWORK  [signalProcessingThread] closing listening socket: 7
2019-04-11T21:59:31.599+0530 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2019-04-11T21:59:31.599+0530 I NETWORK  [signalProcessingThread] shutdown: going to flush diaglog...
2019-04-11T21:59:31.599+0530 I REPL     [signalProcessingThread] shutting down replication subsystems
2019-04-11T21:59:31.599+0530 I REPL     [signalProcessingThread] Stopping replication reporter thread
2019-04-11T21:59:31.599+0530 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2019-04-11T21:59:31.599+0530 I REPL     [signalProcessingThread] Stopping replication applier thread
2019-04-11T21:59:32.436+0530 I REPL     [signalProcessingThread] Stopping replication storage threads
2019-04-11T21:59:32.437+0530 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-04-11T21:59:32.443+0530 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2019-04-11T21:59:32.511+0530 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2019-04-11T21:59:32.511+0530 I CONTROL  [signalProcessingThread] now exiting
2019-04-11T21:59:32.511+0530 I CONTROL  [signalProcessingThread] shutting down with code:0
2019-04-11T21:59:32.511+0530 I CONTROL  [initandlisten] shutting down with code:0
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] MongoDB starting : pid=32283 port=39001 dbpath=/data1/cryosparc2_haaris/cryosparc2_database 64-bit host=somnath.mbu
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] db version v3.4.10-4-g67ee356c6b
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] git version: 67ee356c6be377cda547d16423daef5beb4e8377
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2p  14 Aug 2018
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] allocator: tcmalloc
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] modules: none
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] build environment:
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten]     distarch: x86_64
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten]     target_arch: x86_64
2019-04-11T21:59:34.575+0530 I CONTROL  [initandlisten] options: { net: { port: 39001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/data1/cryosparc2_haaris/cryosparc2_database", journal: { enabled: false }, wiredTiger: { engineConfig: { cacheSizeGB: 4.0 } } } }
2019-04-11T21:59:34.610+0530 I -        [initandlisten] Detected data files in /data1/cryosparc2_haaris/cryosparc2_database created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-04-11T21:59:34.610+0530 I STORAGE  [initandlisten] 
2019-04-11T21:59:34.610+0530 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-11T21:59:34.610+0530 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-04-11T21:59:34.610+0530 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4096M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
2019-04-11T21:59:34.645+0530 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2019-04-11T21:59:34.646+0530 I STORAGE  [initandlisten] The size storer reports that the oplog contains 164198 records totaling to 16003741 bytes
2019-04-11T21:59:34.646+0530 I STORAGE  [initandlisten] Sampling from the oplog between Mar 14 12:51:30:1 and Apr 11 21:59:26:1 to determine where to place markers for truncation
2019-04-11T21:59:34.646+0530 I STORAGE  [initandlisten] Taking 23 samples and assuming that each section of oplog contains approximately 68854 records totaling to 6710931 bytes
2019-04-11T21:59:34.647+0530 I STORAGE  [initandlisten] Placing a marker at optime Mar 22 15:29:33:1
2019-04-11T21:59:34.647+0530 I STORAGE  [initandlisten] Placing a marker at optime Apr  2 03:15:41:1
2019-04-11T21:59:34.727+0530 I CONTROL  [initandlisten] 
2019-04-11T21:59:34.727+0530 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-04-11T21:59:34.727+0530 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-04-11T21:59:34.727+0530 I CONTROL  [initandlisten] 
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] 
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] 
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] 
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-04-11T21:59:34.728+0530 I CONTROL  [initandlisten] 
2019-04-11T21:59:34.751+0530 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data1/cryosparc2_haaris/cryosparc2_database/diagnostic.data'
2019-04-11T21:59:34.752+0530 I NETWORK  [thread1] waiting for connections on port 39001
2019-04-11T21:59:34.755+0530 I REPL     [replExecDBWorker-0] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:39001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5c8a00fa29d86d2932543bb3') } }
2019-04-11T21:59:34.755+0530 I REPL     [replExecDBWorker-0] This node is localhost:39001 in the config
2019-04-11T21:59:34.755+0530 I REPL     [replExecDBWorker-0] transition to STARTUP2
2019-04-11T21:59:34.755+0530 I REPL     [replExecDBWorker-0] Starting replication storage threads
2019-04-11T21:59:34.756+0530 I REPL     [replExecDBWorker-0] Starting replication fetcher thread
2019-04-11T21:59:34.756+0530 I REPL     [replExecDBWorker-0] Starting replication applier thread
2019-04-11T21:59:34.756+0530 I REPL     [replExecDBWorker-0] Starting replication reporter thread
2019-04-11T21:59:34.756+0530 I REPL     [rsSync] transition to RECOVERING
2019-04-11T21:59:34.757+0530 I REPL     [rsSync] transition to SECONDARY
2019-04-11T21:59:34.757+0530 I REPL     [rsSync] conducting a dry run election to see if we could be elected
2019-04-11T21:59:34.757+0530 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
2019-04-11T21:59:34.778+0530 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 10
2019-04-11T21:59:34.778+0530 I REPL     [ReplicationExecutor] transition to PRIMARY
2019-04-11T21:59:34.778+0530 I REPL     [ReplicationExecutor] Entering primary catch-up mode.
2019-04-11T21:59:34.778+0530 I REPL     [ReplicationExecutor] Exited primary catch-up mode.
2019-04-11T21:59:35.901+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44736 #1 (1 connection now open)
2019-04-11T21:59:35.902+0530 I NETWORK  [conn1] received client metadata from 127.0.0.1:44736 conn1: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.6.1810 Core", architecture: "x86_64", version: "3.10.0-957.10.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
2019-04-11T21:59:35.904+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44738 #2 (2 connections now open)
2019-04-11T21:59:35.904+0530 I NETWORK  [conn2] received client metadata from 127.0.0.1:44738 conn2: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.6.1810 Core", architecture: "x86_64", version: "3.10.0-957.10.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
2019-04-11T21:59:36.429+0530 I -        [conn2] end connection 127.0.0.1:44738 (2 connections now open)
2019-04-11T21:59:36.429+0530 I -        [conn1] end connection 127.0.0.1:44736 (2 connections now open)
2019-04-11T21:59:36.530+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44740 #3 (1 connection now open)
2019-04-11T21:59:36.530+0530 I NETWORK  [conn3] received client metadata from 127.0.0.1:44740 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.10-4-g67ee356c6b" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.10.1.el7.x86_64" } }
2019-04-11T21:59:36.532+0530 I REPL     [conn3] replSetInitiate admin command received from client
2019-04-11T21:59:36.535+0530 I -        [conn3] end connection 127.0.0.1:44740 (1 connection now open)
2019-04-11T21:59:36.758+0530 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2019-04-11T22:07:30.980+0530 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-04-11T22:07:30.980+0530 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2019-04-11T22:07:30.980+0530 I NETWORK  [signalProcessingThread] closing listening socket: 6
2019-04-11T22:07:30.980+0530 I NETWORK  [signalProcessingThread] closing listening socket: 7
2019-04-11T22:07:30.980+0530 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2019-04-11T22:07:30.980+0530 I NETWORK  [signalProcessingThread] shutdown: going to flush diaglog...
2019-04-11T22:07:30.980+0530 I REPL     [signalProcessingThread] shutting down replication subsystems
2019-04-11T22:07:30.980+0530 I REPL     [signalProcessingThread] Stopping replication reporter thread
2019-04-11T22:07:30.980+0530 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2019-04-11T22:07:30.980+0530 I REPL     [signalProcessingThread] Stopping replication applier thread
2019-04-11T22:07:31.802+0530 I REPL     [signalProcessingThread] Stopping replication storage threads
2019-04-11T22:07:31.803+0530 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-04-11T22:07:31.806+0530 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2019-04-11T22:07:31.861+0530 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2019-04-11T22:07:31.861+0530 I CONTROL  [signalProcessingThread] now exiting
2019-04-11T22:07:31.861+0530 I CONTROL  [signalProcessingThread] shutting down with code:0
2019-04-11T22:07:31.862+0530 I CONTROL  [initandlisten] shutting down with code:0
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] MongoDB starting : pid=28710 port=39001 dbpath=/data1/cryosparc2_haaris/cryosparc2_database 64-bit host=somnath.mbu
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] db version v3.4.10-4-g67ee356c6b
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] git version: 67ee356c6be377cda547d16423daef5beb4e8377
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2p  14 Aug 2018
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] allocator: tcmalloc
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] modules: none
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] build environment:
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten]     distarch: x86_64
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten]     target_arch: x86_64
2019-04-11T22:07:37.850+0530 I CONTROL  [initandlisten] options: { net: { port: 39001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/data1/cryosparc2_haaris/cryosparc2_database", journal: { enabled: false }, wiredTiger: { engineConfig: { cacheSizeGB: 4.0 } } } }
2019-04-11T22:07:37.885+0530 I -        [initandlisten] Detected data files in /data1/cryosparc2_haaris/cryosparc2_database created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-04-11T22:07:37.885+0530 I STORAGE  [initandlisten] 
2019-04-11T22:07:37.885+0530 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-11T22:07:37.885+0530 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-04-11T22:07:37.885+0530 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4096M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
2019-04-11T22:07:37.934+0530 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2019-04-11T22:07:37.934+0530 I STORAGE  [initandlisten] The size storer reports that the oplog contains 164246 records totaling to 16008347 bytes
2019-04-11T22:07:37.934+0530 I STORAGE  [initandlisten] Sampling from the oplog between Mar 14 12:51:30:1 and Apr 11 22:07:26:1 to determine where to place markers for truncation
2019-04-11T22:07:37.934+0530 I STORAGE  [initandlisten] Taking 23 samples and assuming that each section of oplog contains approximately 68854 records totaling to 6710901 bytes
2019-04-11T22:07:37.935+0530 I STORAGE  [initandlisten] Placing a marker at optime Mar 22 15:29:33:1
2019-04-11T22:07:37.935+0530 I STORAGE  [initandlisten] Placing a marker at optime Apr  2 03:15:41:1
2019-04-11T22:07:38.032+0530 I CONTROL  [initandlisten] 
2019-04-11T22:07:38.032+0530 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-04-11T22:07:38.032+0530 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-04-11T22:07:38.032+0530 I CONTROL  [initandlisten] 
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] 
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] 
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-04-11T22:07:38.033+0530 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-04-11T22:07:38.034+0530 I CONTROL  [initandlisten] 
2019-04-11T22:07:38.034+0530 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-04-11T22:07:38.034+0530 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-04-11T22:07:38.034+0530 I CONTROL  [initandlisten] 
2019-04-11T22:07:38.057+0530 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data1/cryosparc2_haaris/cryosparc2_database/diagnostic.data'
2019-04-11T22:07:38.058+0530 I NETWORK  [thread1] waiting for connections on port 39001
2019-04-11T22:07:38.060+0530 I REPL     [replExecDBWorker-0] New replica set config in use: { _id: "meteor", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:39001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5c8a00fa29d86d2932543bb3') } }
2019-04-11T22:07:38.060+0530 I REPL     [replExecDBWorker-0] This node is localhost:39001 in the config
2019-04-11T22:07:38.060+0530 I REPL     [replExecDBWorker-0] transition to STARTUP2
2019-04-11T22:07:38.060+0530 I REPL     [replExecDBWorker-0] Starting replication storage threads
2019-04-11T22:07:38.061+0530 I REPL     [replExecDBWorker-0] Starting replication fetcher thread
2019-04-11T22:07:38.061+0530 I REPL     [replExecDBWorker-0] Starting replication applier thread
2019-04-11T22:07:38.061+0530 I REPL     [replExecDBWorker-0] Starting replication reporter thread
2019-04-11T22:07:38.061+0530 I REPL     [rsSync] transition to RECOVERING
2019-04-11T22:07:38.062+0530 I REPL     [rsSync] transition to SECONDARY
2019-04-11T22:07:38.062+0530 I REPL     [rsSync] conducting a dry run election to see if we could be elected
2019-04-11T22:07:38.062+0530 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
2019-04-11T22:07:38.070+0530 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 11
2019-04-11T22:07:38.070+0530 I REPL     [ReplicationExecutor] transition to PRIMARY
2019-04-11T22:07:38.070+0530 I REPL     [ReplicationExecutor] Entering primary catch-up mode.
2019-04-11T22:07:38.070+0530 I REPL     [ReplicationExecutor] Exited primary catch-up mode.
2019-04-11T22:07:39.220+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44766 #1 (1 connection now open)
2019-04-11T22:07:39.221+0530 I NETWORK  [conn1] received client metadata from 127.0.0.1:44766 conn1: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.6.1810 Core", architecture: "x86_64", version: "3.10.0-957.10.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
2019-04-11T22:07:39.222+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44768 #2 (2 connections now open)
2019-04-11T22:07:39.223+0530 I NETWORK  [conn2] received client metadata from 127.0.0.1:44768 conn2: { driver: { name: "PyMongo", version: "3.4.0" }, os: { type: "Linux", name: "CentOS Linux 7.6.1810 Core", architecture: "x86_64", version: "3.10.0-957.10.1.el7.x86_64" }, platform: "CPython 2.7.15.final.0" }
2019-04-11T22:07:39.747+0530 I -        [conn2] end connection 127.0.0.1:44768 (2 connections now open)
2019-04-11T22:07:39.747+0530 I -        [conn1] end connection 127.0.0.1:44766 (2 connections now open)
2019-04-11T22:07:39.851+0530 I NETWORK  [thread1] connection accepted from 127.0.0.1:44770 #3 (1 connection now open)
2019-04-11T22:07:39.851+0530 I NETWORK  [conn3] received client metadata from 127.0.0.1:44770 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.10-4-g67ee356c6b" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.10.1.el7.x86_64" } }
2019-04-11T22:07:39.853+0530 I REPL     [conn3] replSetInitiate admin command received from client
2019-04-11T22:07:39.856+0530 I -        [conn3] end connection 127.0.0.1:44770 (1 connection now open)
2019-04-11T22:07:40.063+0530 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2019-04-12T09:37:42.849+0530 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-04-12T09:37:42.864+0530 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2019-04-12T09:37:42.864+0530 I NETWORK  [signalProcessingThread] closing listening socket: 6
2019-04-12T09:37:42.864+0530 I NETWORK  [signalProcessingThread] closing listening socket: 7
2019-04-12T09:37:42.870+0530 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-39001.sock
2019-04-12T09:37:42.871+0530 I NETWORK  [signalProcessingThread] shutdown: going to flush diaglog...
2019-04-12T09:37:42.871+0530 I REPL     [signalProcessingThread] shutting down replication subsystems
2019-04-12T09:37:42.871+0530 I REPL     [signalProcessingThread] Stopping replication reporter thread
2019-04-12T09:37:42.899+0530 I REPL     [signalProcessingThread] Stopping replication fetcher thread
2019-04-12T09:37:42.899+0530 I REPL     [signalProcessingThread] Stopping replication applier thread
2019-04-12T09:37:42.974+0530 I REPL     [signalProcessingThread] Stopping replication storage threads
2019-04-12T09:37:43.012+0530 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-04-12T09:37:43.034+0530 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2019-04-12T09:37:43.142+0530 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2019-04-12T09:37:43.142+0530 I CONTROL  [signalProcessingThread] now exiting
2019-04-12T09:37:43.142+0530 I CONTROL  [signalProcessingThread] shutting down with code:0
2019

Hi @haaris.safdari,

It seems you’re running CentOS 7.6. Please take a look at this post about opening the ports that cryoSPARC uses: Webapp connection refused on CentOS.
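For reference, a hedged sketch of what that looks like on CentOS 7 with firewalld; the 39000-39010 range is an assumption based on the default cryoSPARC base port of 39000 (your database above is listening on 39001), so adjust it to your actual base port:

```
sudo firewall-cmd --zone=public --permanent --add-port=39000-39010/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # confirm the range is now open
```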