Issues starting cryosparc: 500 Internal Server Error

I’ve tried re-installing multiple times. During startup I see the following:

CryoSPARC is not already running.
database: started
command_core: started
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

command_vis: started
command_proxy: started
webapp: started

cryosparcm status shows the following:

CryoSPARC System master node installed at
/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master
Current cryoSPARC version: v2.15.0
----------------------------------------------------------------------------

cryosparcm process status:

app                              STOPPED   Not started
app_dev                          STOPPED   Not started
command_core                     RUNNING   pid 67916, uptime 0:03:46
command_proxy                    RUNNING   pid 67938, uptime 0:03:42
command_rtp                      STOPPED   Not started
command_vis                      RUNNING   pid 70698, uptime 0:00:04
database                         RUNNING   pid 67856, uptime 0:03:49
watchdog_dev                     STOPPED   Not started
webapp                           RUNNING   pid 70734, uptime 0:00:01
webapp_dev                       STOPPED   Not started

----------------------------------------------------------------------------

global config variables:

export CRYOSPARC_LICENSE_ID="XXX"
export CRYOSPARC_MASTER_HOSTNAME="XXX"
export CRYOSPARC_DB_PATH="XXX"
export CRYOSPARC_BASE_PORT=8601
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_FORCE_HOSTNAME=true
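
For reference, with CRYOSPARC_BASE_PORT=8601 the cryoSPARC services bind a small range of ports starting at the base port, and as far as I can tell the MongoDB database ends up on base port + 1 (8602 here). Assuming lsof is available on the master node, a quick check of what is actually listening in that range would be:

lsof -iTCP -sTCP:LISTEN -P -n | grep :860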

cryosparcm log command_core shows the following:

-----------------------------------------------------
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 200, in background_worker
    concurrent_job_monitor()
  File "cryosparc2_command/command_core/__init__.py", line 1527, in concurrent_job_monitor
    current_concurrent_licenses_deque.append(get_num_active_licenses())
  File "cryosparc2_command/command_core/__init__.py", line 124, in wrapper
    raise e
OperationFailure: node is not in primary or recovering state
Traceback (most recent call last):
  File "cryosparc2_command/command_core/__init__.py", line 205, in background_worker
    heartbeat_manager()
  File "cryosparc2_command/command_core/__init__.py", line 1571, in heartbeat_manager
    active_jobs = get_active_licenses()
  File "cryosparc2_command/command_core/__init__.py", line 1536, in get_active_licenses
    for j in jobs_running:
  File "/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1114, in next
    if len(self.__data) or self._refresh():
  File "/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
    self.__collation))
  File "/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 928, in __send_message
    helpers._check_command_response(doc['data'][0])
  File "/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/helpers.py", line 210, in _check_command_response
    raise OperationFailure(msg % errmsg, code, response)
OperationFailure: node is not in primary or recovering state
****** Concurrent job monitor failed ****
****** Instance heartbeat failed ****
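
The "node is not in primary or recovering state" errors suggest the single-node MongoDB replica set behind cryoSPARC never reached the PRIMARY state. If the database process is up, its replica-set state can be inspected from a mongo shell; assuming the cryosparcm mongo subcommand is available in this version, something like this should show it:

cryosparcm mongo
> rs.status()

A healthy single-node instance should report exactly one member with stateStr "PRIMARY"; anything else points at the database itself rather than the web services.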

cryosparcm log webapp shows the following:

MongoError: not master and slaveOk=false
    at Function.MongoError.create (/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/cryosparc2_webapp/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/error.js:31:11)
    at queryCallback (/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/cryosparc2_webapp/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/cursor.js:212:36)
    at /n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/cryosparc2_webapp/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:469:18
    at _combinedTickCallback (internal/process/next_tick.js:131:7)
    at process._tickDomainCallback (internal/process/next_tick.js:218:9)

/n/www/cryosparc-moazed.rc.hms.harvard.edu/cryosparc2_master/cryosparc2_webapp/bundle/programs/server/node_modules/fibers/future.js:313
                                                throw(ex);

I have tried the following:

  1. Reinstalling from scratch
  2. cryosparcm update
  3. cryosparcm forcedeps
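
One thing that might be worth ruling out after that many reinstalls is a leftover process from a previous attempt still running and holding one of the ports. Assuming standard ps is available, something like this would show it:

ps -ef | grep -E 'cryosparc|mongod|supervisord' | grep -v grep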

I’m working on automating the installation of CryoSPARC on a cluster for our lab. Is it possible that I’ve hit some license restriction? I’ve probably reinstalled the app 10 or 20 times. :slight_smile:

Kind regards
– Mick


I requested a new license in case this is a licensing issue.

Hi @mick-t,
Just confirming that non-profit academic licenses for cryoSPARC do not limit how many times you can install or re-install the software with a single license ID. However, you will need separate license IDs if you wish to run multiple cryoSPARC instances concurrently (a single license ID cannot be used by more than one running instance at a time).


Hi spunjani,

Thanks for your reply; it’s only installed as a single instance.

I see this in database.log (note the port it’s failing to connect to):

./database.log:30:2020-10-13T14:52:55.215-0400 W NETWORK [replExecDBWorker-0] Failed to connect to 127.0.0.1:8622, in(checking socket for error after poll), reason: Connection refused

It should be using 8602:

mongod 87822 moazed  6u IPv4 25807568 0t0 TCP *:8602 (LISTEN)
mongod 87822 moazed 38u IPv4 25807591 0t0 TCP app-cryosparc-prod01:8602->app-cryosparc-prod01:39214 (ESTABLISHED)
mongod 87822 moazed 39u IPv4 25807707 0t0 TCP app-cryosparc-prod01:8602->app-cryosparc-prod01:39272 (ESTABLISHED)
mongod 87822 moazed 40u IPv4 25808989 0t0 TCP localhost:8602->localhost:42278 (ESTABLISHED)

Removing all the files in the cryoSPARC database directory fixed the problem. There must have been old cached files there from when I was testing the instance on a different port range.
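
In case it helps anyone hitting the same thing: presumably the replica-set configuration saved in the old database files still pointed at the previous port range, which is why mongod kept trying 127.0.0.1:8622 and never became PRIMARY. The fix was roughly the following, with CRYOSPARC_DB_PATH standing in for the database directory from the config above (note this discards whatever was in the old database, which was fine here since it was a fresh install):

cryosparcm stop
# move the stale database out of the way (or simply remove its contents, as I did)
mv "$CRYOSPARC_DB_PATH" "${CRYOSPARC_DB_PATH}.bak" && mkdir "$CRYOSPARC_DB_PATH"
cryosparcm start

cryoSPARC should re-initialize an empty database on the next start.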
