CryoSPARC installation stuck at "command_core: started"

Dear CryoSPARC team,

I am having trouble getting CryoSPARC running. I installed the software following the guide for a single workstation.
Unfortunately, the installation process was stuck overnight at “command_core: started”.

The same issue persists if I use cryosparcm restart.

cryosparcm status

CryoSPARC System master node installed at
/home/cemkguest/software/cryosparc/cryosparc_master
Current cryoSPARC version: v4.7.1

CryoSPARC process status:

app STOPPED Not started
app_api STOPPED Not started
app_api_dev STOPPED Not started
command_core RUNNING pid 11969, uptime 0:07:48
command_rtp STOPPED Not started
command_vis STOPPED Not started
database RUNNING pid 11865, uptime 0:07:52


An error ocurred while checking license status
Could not get license verification status. Are all CryoSPARC processes RUNNING?

A few helpful outputs for you:
curl https://get.cryosparc.com/checklicenseexists/$LICENSE_ID
{"success": true}

cryosparcm log database | tail -n 30

2025-07-03T11:07:30.026+0200 I REPL [replexec-0] Starting replication fetcher thread
2025-07-03T11:07:30.026+0200 I REPL [replexec-0] Starting replication applier thread
2025-07-03T11:07:30.026+0200 I REPL [replexec-0] Starting replication reporter thread
2025-07-03T11:07:30.027+0200 I REPL [rsSync] transition to SECONDARY from RECOVERING
2025-07-03T11:07:30.027+0200 I REPL [rsSync] conducting a dry run election to see if we could be elected. current term: 7
2025-07-03T11:07:30.027+0200 I REPL [replexec-0] dry election run succeeded, running for election in term 8
2025-07-03T11:07:30.027+0200 I REPL [replexec-0] election succeeded, assuming primary role in term 8
2025-07-03T11:07:30.027+0200 I REPL [replexec-0] transition to PRIMARY from SECONDARY
2025-07-03T11:07:30.027+0200 I REPL [replexec-0] Resetting sync source to empty, which was :27017
2025-07-03T11:07:30.027+0200 I REPL [replexec-0] Entering primary catch-up mode.
2025-07-03T11:07:30.027+0200 I REPL [replexec-0] Exited primary catch-up mode.
2025-07-03T11:07:31.431+0200 I NETWORK [listener] connection accepted from 129.132.174.233:43816 #1 (1 connection now open)
2025-07-03T11:07:31.431+0200 I NETWORK [conn1] received client metadata from 129.132.174.233:43816 conn1: { driver: { name: “PyMongo”, version: “4.8.0” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “6.8.0-47-generic” }, platform: “CPython 3.10.14.final.0” }
2025-07-03T11:07:31.432+0200 I NETWORK [conn1] end connection 129.132.174.233:43816 (0 connections now open)
2025-07-03T11:07:31.432+0200 I NETWORK [listener] connection accepted from 127.0.0.1:43120 #2 (1 connection now open)
2025-07-03T11:07:31.433+0200 I NETWORK [conn2] received client metadata from 127.0.0.1:43120 conn2: { driver: { name: “PyMongo”, version: “4.8.0” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “6.8.0-47-generic” }, platform: “CPython 3.10.14.final.0” }
2025-07-03T11:07:32.027+0200 I REPL [rsSync] transition to primary complete; database writes are now permitted
2025-07-03T11:07:32.435+0200 I NETWORK [listener] connection accepted from 127.0.0.1:43132 #3 (2 connections now open)
2025-07-03T11:07:32.435+0200 I NETWORK [conn3] received client metadata from 127.0.0.1:43132 conn3: { driver: { name: “PyMongo”, version: “4.8.0” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “6.8.0-47-generic” }, platform: “CPython 3.10.14.final.0” }
2025-07-03T11:07:32.438+0200 I ACCESS [conn3] Successfully authenticated as principal cryosparc_admin on admin from client 127.0.0.1:43132
2025-07-03T11:07:32.439+0200 I NETWORK [conn2] end connection 127.0.0.1:43120 (1 connection now open)
2025-07-03T11:07:32.439+0200 I NETWORK [conn3] end connection 127.0.0.1:43132 (0 connections now open)
2025-07-03T11:07:38.447+0200 I NETWORK [listener] connection accepted from 129.132.174.233:59968 #4 (1 connection now open)
2025-07-03T11:07:38.447+0200 I NETWORK [conn4] received client metadata from 129.132.174.233:59968 conn4: { driver: { name: “PyMongo”, version: “4.8.0” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “6.8.0-47-generic” }, platform: “CPython 3.10.14.final.0” }
2025-07-03T11:07:38.448+0200 I NETWORK [conn4] end connection 129.132.174.233:59968 (0 connections now open)
2025-07-03T11:07:38.448+0200 I NETWORK [listener] connection accepted from 127.0.0.1:49346 #5 (1 connection now open)
2025-07-03T11:07:38.448+0200 I NETWORK [conn5] received client metadata from 127.0.0.1:49346 conn5: { driver: { name: “PyMongo”, version: “4.8.0” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “6.8.0-47-generic” }, platform: “CPython 3.10.14.final.0” }
2025-07-03T11:07:38.448+0200 I NETWORK [listener] connection accepted from 127.0.0.1:49358 #6 (2 connections now open)
2025-07-03T11:07:38.448+0200 I NETWORK [conn6] received client metadata from 127.0.0.1:49358 conn6: { driver: { name: “PyMongo”, version: “4.8.0” }, os: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “6.8.0-47-generic” }, platform: “CPython 3.10.14.final.0” }
2025-07-03T11:07:38.450+0200 I ACCESS [conn6] Successfully authenticated as principal cryosparc_user on admin from client 127.0.0.1:49358

Welcome to the forum, @Tamino.
Various network-related configurations could contribute to this problem.
Please can you

  1. post the output of the command
    /home/cemkguest/software/cryosparc/cryosparc_master/bin/cryosparcm call env | grep -i proxy
    
  2. while command_core is RUNNING, collect and post the output of these commands in a fresh command shell (an optional proxy-bypass check is also sketched below this list)
    eval $(/home/cemkguest/software/cryosparc/cryosparc_master/bin/cryosparcm env)
    curl 127.0.0.1:$CRYOSPARC_COMMAND_CORE_PORT
    curl -v ${CRYOSPARC_MASTER_HOSTNAME}:$CRYOSPARC_COMMAND_CORE_PORT
    # record outputs, then
    exit
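
If these requests are being intercepted by a proxy, comparing against requests that explicitly bypass it can help narrow things down. A minimal sketch, assuming curl's standard --noproxy option:

    # In a fresh shell: load the CryoSPARC environment, then force curl to ignore any configured proxy.
    eval $(/home/cemkguest/software/cryosparc/cryosparc_master/bin/cryosparcm env)
    curl --noproxy '*' 127.0.0.1:$CRYOSPARC_COMMAND_CORE_PORT
    curl --noproxy '*' ${CRYOSPARC_MASTER_HOSTNAME}:$CRYOSPARC_COMMAND_CORE_PORT
    exit

If the bypassed requests reach command_core while the plain ones return a proxy error page, the proxy configuration, rather than command_core itself, is the likely culprit.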
    

Thank you, @wtempel. Yes, it might very well be that our network configuration prevents CryoSPARC from running correctly.

I am glad to provide the output you requested. Please let me know if you need anything else!

cemkguest@phobos:~/software/cryosparc/cryosparc_master$ eval $(/home/cemkguest/software/cryosparc/cryosparc_master/bin/cryosparcm env)
curl 127.0.0.1:$CRYOSPARC_COMMAND_CORE_PORT
curl -v ${CRYOSPARC_MASTER_HOSTNAME}:$CRYOSPARC_COMMAND_CORE_PORT
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head>
<meta type="copyright" content="Copyright (C) 1996-2021 The Squid Software Foundation and contributors">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>ERROR: The requested URL could not be retrieved</title>
<style type="text/css"><!-- /* Squid error-page stylesheet omitted for brevity */ --></style>
</head><body id=ERR_ACCESS_DENIED>
<div id="titles">
<h1>ERROR</h1>
<h2>The requested URL could not be retrieved</h2>
</div>
<hr>

<div id="content">
<p>The following error was encountered while trying to retrieve the URL: <a href="http://127.0.0.1:40002/">http://127.0.0.1:40002/</a></p>

<blockquote id="error">
<p><b>Access Denied.</b></p>
</blockquote>

<p>Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.</p>

<p>Your cache administrator is <a href="mailto:root?subject=CacheErrorInfo%20-%20ERR_ACCESS_DENIED&amp;body=CacheHost%3A%20proxybd.ethz.ch%0D%0AErrPage%3A%20ERR_ACCESS_DENIED%0D%0AErr%3A%20%5Bnone%5D%0D%0ATimeStamp%3A%20Fri,%2004%20Jul%202025%2010%3A51%3A24%20GMT%0D%0A%0D%0AClientIP%3A%20129.132.174.233%0D%0A%0D%0AHTTP%20Request%3A%0D%0AGET%20%2F%20HTTP%2F1.1%0AUser-Agent%3A%20curl%2F8.5.0%0D%0AAccept%3A%20*%2F*%0D%0AProxy-Connection%3A%20Keep-Alive%0D%0AHost%3A%20127.0.0.1%3A40002%0D%0A%0D%0A%0D%0A">root</a>.</p>
<br>
</div>

<hr>
<div id="footer">
<p>Generated Fri, 04 Jul 2025 10:51:24 GMT by proxybd.ethz.ch (squid/4.15)</p>
<!-- ERR_ACCESS_DENIED -->
</div>
</body></html>
* Uses proxy env variable http_proxy == 'http://proxy.ethz.ch:3128/'
* Host proxy.ethz.ch:3128 was resolved.
* IPv6: (none)
* IPv4: 129.132.202.155
*   Trying 129.132.202.155:3128...
* Connected to proxy.ethz.ch (129.132.202.155) port 3128
> GET http://phobos:40002/ HTTP/1.1
> Host: phobos:40002
> User-Agent: curl/8.5.0
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 503 Service Unavailable
< Server: squid/4.15
< Mime-Version: 1.0
< Date: Fri, 04 Jul 2025 10:51:24 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3703
< X-Squid-Error: ERR_DNS_FAIL 0
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from proxybd.ethz.ch
< X-Cache-Lookup: MISS from proxybd.ethz.ch:3128
< Via: 1.1 proxybd.ethz.ch (squid/4.15)
< Connection: close
< 
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head>
<meta type="copyright" content="Copyright (C) 1996-2021 The Squid Software Foundation and contributors">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>ERROR: The requested URL could not be retrieved</title>
<style type="text/css"><!-- /* Squid error-page stylesheet omitted for brevity */ --></style>
</head><body id=ERR_DNS_FAIL>
<div id="titles">
<h1>ERROR</h1>
<h2>The requested URL could not be retrieved</h2>
</div>
<hr>

<div id="content">
<p>The following error was encountered while trying to retrieve the URL: <a href="http://phobos:40002/">http://phobos:40002/</a></p>

<blockquote id="error">
<p><b>Unable to determine IP address from host name <q>phobos</q></b></p>
</blockquote>

<p>The DNS server returned:</p>
<blockquote id="data">
<pre>Name Error: The domain name does not exist.</pre>
</blockquote>

<p>This means that the cache was not able to resolve the hostname presented in the URL. Check if the address is correct.</p>

<p>Your cache administrator is <a href="mailto:root?subject=CacheErrorInfo%20-%20ERR_DNS_FAIL&amp;body=CacheHost%3A%20proxybd.ethz.ch%0D%0AErrPage%3A%20ERR_DNS_FAIL%0D%0AErr%3A%20%5Bnone%5D%0D%0ADNS%20ErrMsg%3A%20Name%20Error%3A%20The%20domain%20name%20does%20not%20exist.%0D%0ATimeStamp%3A%20Fri,%2004%20Jul%202025%2010%3A51%3A24%20GMT%0D%0A%0D%0AClientIP%3A%20129.132.174.233%0D%0A%0D%0AHTTP%20Request%3A%0D%0AGET%20%2F%20HTTP%2F1.1%0AUser-Agent%3A%20curl%2F8.5.0%0D%0AAccept%3A%20*%2F*%0D%0AProxy-Connection%3A%20Keep-Alive%0D%0AHost%3A%20phobos%3A40002%0D%0A%0D%0A%0D%0A">root</a>.</p>
<br>
</div>

<hr>
<div id="footer">
<p>Generated Fri, 04 Jul 2025 10:51:24 GMT by proxybd.ethz.ch (squid/4.15)</p>
<!-- ERR_DNS_FAIL -->
</div>
</body></html>
* Closing connection

Thanks @Tamino for posting the outputs, which suggest a number of possible approaches, depending on how you intend to configure and operate CryoSPARC. Please can you provide additional information:

  1. Do you intend to run CryoSPARC in single workstation, connected workers or cluster mode?
  2. What are the outputs of these commands on the CryoSPARC master node? (An optional extra resolver check is sketched after the list.)
    hostname -f
    host phobos
    grep HOSTNAME /home/cemkguest/software/cryosparc/cryosparc_master/config.sh
    /home/cemkguest/software/cryosparc/cryosparc_master/bin/cryosparcm call env | grep -i proxy
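
One optional extra check, beyond the commands above: the workstation's own name resolution can differ from what the proxy's DNS can resolve, and the commands below show where a local answer for the hostname comes from:

    getent hosts phobos           # what the local resolver (including /etc/hosts) returns
    grep -i phobos /etc/hosts     # whether the name is pinned in /etc/hosts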
    

Dear @wtempel, thanks for the quick reply!

I am planning to use it as a single workstation only. It should run only on the local machine, with no additional workers and no cluster connection (only an SSH connection to the machine for remote work).

I am happy to provide the output of the commands:

cemkguest@phobos:~$ hostname -f
host phobos
grep HOSTNAME /home/cemkguest/software/cryosparc/cryosparc_master/config.sh
/home/cemkguest/software/cryosparc/cryosparc_master/bin/cryosparcm call env | grep -i proxy


phobos
phobos has address 129.132.174.233
phobos has IPv6 address fe80::d0b9:fd47:e64b:d9db
export CRYOSPARC_MASTER_HOSTNAME="phobos"
https_proxy=http://proxy.ethz.ch:3128/
http_proxy=http://proxy.ethz.ch:3128/

Thanks @Tamino. The topic "Cryosparc not starting" looks similar to your situation. Assuming that the --standalone installation was disrupted just after the installation of the cryosparc_master/ package, you may consider

  1. adding the line
    export NO_PROXY="${CRYOSPARC_MASTER_HOSTNAME},localhost,127.0.0.1"
    
    to the file
    /home/cemkguest/software/cryosparc/cryosparc_master/config.sh
    
    somewhere below the definition of CRYOSPARC_MASTER_HOSTNAME
  2. restarting CryoSPARC.
    The following steps would have been performed automatically in case of a smooth
    --standalone installation.
  3. creating the first user
  4. installing the cryosparc_worker/ package
  5. adding the line
    export NO_PROXY="phobos,localhost,127.0.0.1"
    
    to the file
    cryosparc_worker/config.sh
    
    (In case the master hostname changes from phobos in the future, phobos needs to be changed accordingly in the NO_PROXY definition.)
  6. connecting the worker component with a command like
    cryosparc_worker/bin/cryosparcw connect --master phobos --worker phobos --port 99999 --ssdpath /path/to/cache
    
    where the --port parameter needs to be changed from the placeholder 99999 value to the CRYOSPARC_BASE_PORT value inside
    /home/cemkguest/software/cryosparc/cryosparc_master/config.sh
    
    and the --ssdpath parameter points to the actual path to be used for particle caching (or is replaced altogether with --nossd). A filled-in sketch of steps 1 and 6 follows after this list.
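
A filled-in sketch of steps 1 and 6, under two assumptions: that CRYOSPARC_BASE_PORT is 40000 (inferred from the http://127.0.0.1:40002 command_core URL in the proxy error above; please verify against your own config.sh) and that no SSD cache will be used:

    # Step 1: in /home/cemkguest/software/cryosparc/cryosparc_master/config.sh,
    # somewhere below the existing CRYOSPARC_MASTER_HOSTNAME line, add:
    export NO_PROXY="${CRYOSPARC_MASTER_HOSTNAME},localhost,127.0.0.1"

    # Step 6: connect the worker. The 40000 value is an assumption; use your actual
    # CRYOSPARC_BASE_PORT, and swap --nossd for --ssdpath /path/to/cache if a cache SSD is available.
    cryosparc_worker/bin/cryosparcw connect --master phobos --worker phobos --port 40000 --nossd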

Thank you very much for your extensive reply!

Adding the suggested line to cryosparc_master/config.sh solved the issue; I can now open CryoSPARC in the web browser.

Unfortunately, something must still have gone wrong during the installation. I do not have a cryosparc_worker/config.sh file, even after re-running:

tar -xf cryosparc_worker.tar.gz cryosparc_worker

What do you recommend I do?

Thank you very much,
all the best,
Tamino

@Tamino Have you run the
cryosparc_worker/install.sh command with the appropriate options?
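
For reference, the worker installation step is typically invoked along these lines. This is only a sketch: the exact options depend on your CryoSPARC version and GPU setup (older versions, for example, also took a --cudapath option), so please check the installation guide, and the directory below assumes the archive was extracted next to cryosparc_master/:

    cd /home/cemkguest/software/cryosparc/cryosparc_worker
    ./install.sh --license $LICENSE_ID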

Sorry for the late reply, @wtempel.

I managed to get CryoSPARC running! The only remaining issue was that I had to manually edit the license ID in the worker config.sh to match the one in the master config.sh.
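
For anyone who runs into the same mismatch: the license is set by the CRYOSPARC_LICENSE_ID line in each config.sh, and the worker copy has to match the master copy. A rough sketch (the value shown is a placeholder, not a real license ID):

    # In cryosparc_worker/config.sh, make this line match cryosparc_master/config.sh:
    export CRYOSPARC_LICENSE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"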

Thank you so much for all your help!
