Could not get database status in standalone installation

Hey everyone,

Here is the error I am getting; can anyone help? This is a fresh standalone installation:

************ CRYOSPARC MASTER INSTALLATION COMPLETE **************

CryoSPARC System master has now been installed.
Add bin directory to your ~/.bashrc ?

  1. Yes
  2. No
    #? 1
    cryosparc bin directory already in PATH, skipped.

************ CRYOSPARC SYSTEM: STANDALONE INSTALLER **************

The installer will now download, install and connect the worker
package on this machine. The worker package will allow CryoSPARC
to run compute jobs.


Starting CryoSPARC…

Starting cryoSPARC System master process…
CryoSPARC is not already running.
configuring database
Warning: Could not get database status (attempt 1/3)
Warning: Could not get database status (attempt 2/3)
Warning: Could not get database status (attempt 3/3)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/spshared/apps/cryosparc24/cryosparc_master/cryosparc_compute/database_management.py", line 49, in configure_mongo
initialize_replica_set()
File "/spshared/apps/cryosparc24/cryosparc_master/cryosparc_compute/database_management.py", line 88, in initialize_replica_set
admin_db = try_get_pymongo_db(mongo_client)
File "/spshared/apps/cryosparc24/cryosparc_master/cryosparc_compute/database_management.py", line 251, in try_get_pymongo_db
admin_db.command(({'serverStatus': 1}))
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/database.py", line 827, in command
with self.__client._socket_for_reads(read_preference, session) as (sock_info, secondary_ok):
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1478, in _socket_for_reads
server = self._select_server(read_preference, session)
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1436, in _select_server
server = topology.select_server(server_selector)
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 250, in select_server
return random.choice(self.select_servers(selector, server_selection_timeout, address))
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 211, in select_servers
server_descriptions = self._select_servers_loop(selector, server_timeout, address)
File "/spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.8/site-packages/pymongo/topology.py", line 226, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused, Timeout: 20.0s, Topology Description: <TopologyDescription id: 65cce1172cfd750b89b2bc64, topology_type: Single, servers: [<ServerDescription ('localhost', 39001) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:39001: [Errno 111] Connection refused')>]>
[2024-02-14T09:50:53-06:00] Error configuring database. Most recent database log lines:
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] git version: d352e6a4764659e0d0350ce77279de3c1f243e5c
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] allocator: tcmalloc
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] modules: none
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] build environment:
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] distarch: x86_64
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] target_arch: x86_64
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] options: { net: { port: 39001 }, replication: { oplogSizeMB: 64, replSet: "meteor" }, storage: { dbPath: "/spshared/apps/cryosparc24/cryosparc_database" } }
2024-02-14T09:49:40.302-0600 I STORAGE [initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /spshared/apps/cryosparc24/cryosparc_database, terminating
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] now exiting
2024-02-14T09:49:40.302-0600 I CONTROL [initandlisten] shutting down with code:100

Did you ensure that CryoSPARC commands are always run by the same non-privileged Linux user under whose account CryoSPARC was installed? What are the outputs of the commands:

ls -al /spshared/apps/cryosparc24/cryosparc_master/
ls -al /spshared/apps/cryosparc24/cryosparc_database/
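
If the database directory turns out to be owned by root or otherwise unwritable, one possible repair is to hand it back to the install user, along these lines (a sketch; run as root, and adjust the user/group to your install account):

# hand the database directory back to the CryoSPARC install user
chown -R spuser:spuser /spshared/apps/cryosparc24/cryosparc_database
chmod -R u+rwX /spshared/apps/cryosparc24/cryosparc_database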

Hey wtempel,
Thank you for helping. Yes: CryoSPARC was purged from the entire machine, and I am reinstalling it with a new license under a single user account. Here are the outputs; I don't see anything in the database directory:

ls -al /spshared/apps/cryosparc24/cryosparc_master/

drwxrwxr-x. 14 spuser spuser 4096 Feb 14 14:59 .
drwxrwxrwx. 5 root root 160 Feb 14 14:32 ..
drwxr-xr-x. 2 spuser spuser 91 Dec 5 09:24 bin
-rwxr-xr-x. 1 spuser spuser 5458 Dec 5 09:24 check_install_deps.sh
-rw-------. 1 spuser spuser 16384 Feb 14 14:59 collection-0--533474586060662766.wt
-rwxrwxrwx. 1 root root 652 Feb 14 15:00 config.sh
drwxrwxr-x. 5 spuser spuser 71 Feb 12 14:29 cryosparc_app
drwxr-xr-x. 10 spuser spuser 4096 Dec 5 09:41 cryosparc_command
drwxr-xr-x. 14 spuser spuser 4096 Feb 14 14:22 cryosparc_compute
drwxr-xr-x. 4 spuser spuser 4096 Dec 5 09:29 cryosparc_tools
drwxrwxr-x. 4 spuser spuser 38 Feb 12 16:29 deps
drwxr-xr-x. 4 spuser spuser 36 Dec 5 09:42 deps_bundle
drwxr-xr-x. 4 spuser spuser 36 Dec 5 09:43 deps_bundle_hashes
drwxrwxr-x. 4 spuser spuser 36 Feb 12 16:29 deps_hashes
-rw-------. 1 spuser spuser 16384 Feb 14 14:59 index-0--2914106327927119649.wt
-rw-rw-r--. 1 spuser spuser 31 Feb 12 14:59 --initial_email
-rw-r--r--. 1 spuser spuser 3128 Dec 5 09:24 install.py
-rwxr-xr-x. 1 spuser spuser 13544 Dec 5 09:24 install.sh
lrwxrwxrwx. 1 spuser spuser 43 Dec 5 09:24 lib -> deps/anaconda/envs/cryosparc_master_env/lib
-rw-------. 1 spuser spuser 32768 Feb 14 14:59 _mdb_catalog.wt
-rw-------. 1 spuser spuser 0 Feb 14 14:59 mongod.lock
-rw-r--r--. 1 spuser spuser 15164 Dec 5 09:24 patch.py
drwxrwxr-x. 2 spuser spuser 34 Feb 14 12:34 __pycache__
drwxrwxr-x. 2 spuser spuser 72 Feb 14 14:35 run
-rw-r--r--. 1 spuser spuser 4072 Dec 5 09:24 setup_client_ipython.py
-rw-------. 1 spuser spuser 32768 Feb 14 14:59 sizeStorer.wt
-rw-rw-r--. 1 spuser spuser 42 Feb 12 15:01 --ssdpath
-rw-------. 1 spuser spuser 114 Feb 14 14:58 storage.bson
-rw-r--r--. 1 spuser spuser 4149 Dec 5 09:24 supervisord.conf
drwxr-xr-x. 2 spuser spuser 101 Dec 5 09:24 systemd
-rw-r--r--. 1 spuser spuser 7 Dec 5 09:43 version
-rw-------. 1 spuser spuser 45 Feb 14 14:58 WiredTiger
-rw-------. 1 spuser spuser 4096 Feb 14 14:59 WiredTigerLAS.wt
-rw-------. 1 spuser spuser 21 Feb 14 14:58 WiredTiger.lock
-rw-------. 1 spuser spuser 1118 Feb 14 14:59 WiredTiger.turtle
-rw-------. 1 spuser spuser 45056 Feb 14 14:59 WiredTiger.wt

ls -al /spshared/apps/cryosparc24/cryosparc_database/

total 0
drwxr-xr-x. 2 root root 6 Feb 12 16:18 .
drwxrwxrwx. 5 root root 160 Feb 14 14:32 ..

I solved the issue by creating a new database. Thank you @wtempel!
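
For anyone hitting the same wall: the fix amounted to pointing CryoSPARC at a fresh, writable database directory. A sketch of the steps (paths from this install; the _v2 name is made up, and this assumes no existing data needs preserving):

cryosparcm stop
mkdir /spshared/apps/cryosparc24/cryosparc_database_v2
# edit CRYOSPARC_DB_PATH in cryosparc_master/config.sh to point at the new directory
cryosparcm start   # CryoSPARC initializes an empty database on first start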


Hey @wtempel,

I ran into another problem with the standalone master/worker combo: connecting the worker to the master fails. Could you please help me navigate this error?

CRYOSPARC CONNECT --------------------------------------------

Attempting to register worker localhost to command localhost:39002
Connecting as unix user spuser
Will register using ssh string: spuser@localhost
If this is incorrect, you should re-run this command with the flag --sshstr

/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py:134: UserWarning: *** CommandClient: (http://localhost:39002/api) URL Error [Errno 111] Connection refused, attempt 1 of 3. Retrying in 30 seconds
system = self._get_callable("system.describe")()
/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py:134: UserWarning: *** CommandClient: (http://localhost:39002/api) URL Error [Errno 111] Connection refused, attempt 2 of 3. Retrying in 30 seconds
system = self._get_callable("system.describe")()
Traceback (most recent call last):
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 104, in func
with make_json_request(self, "/api", data=data, _stacklevel=4) as request:
File "/spshared/apps/cryosparc24/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 225, in make_request
raise CommandError(error_reason, url=url, code=code, data=resdata)
cryosparc_tools.cryosparc.errors.CommandError: *** (http://localhost:39002/api, code 500) URL Error [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "bin/connect.py", line 78, in <module>
cli = client.CommandClient(host=master_hostname, port=command_core_port, service="command_core")
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/client.py", line 38, in __init__
super().__init__(service, host, port, url, timeout, headers, cls=NumpyEncoder)
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 96, in __init__
self._reload() # attempt connection immediately to gather methods
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 134, in _reload
system = self._get_callable("system.describe")()
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 107, in func
raise CommandError(
cryosparc_tools.cryosparc.errors.CommandError: *** (http://localhost:39002, code 500) Encounted error from JSONRPC function "system.describe" with params ()
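
Connection refused on port 39002 means nothing was answering on the command_core port (base port 39000 + 2), which usually indicates the master processes are not up. A quick check before retrying the connect (a sketch, assuming the default base port):

cryosparcm status | grep command_core   # should report RUNNING before cryosparcw connect is run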

Hey @wtempel,
Thank you for the help in advance!
I am having a similar issue that I have not been able to fix myself. After a standalone installation, I get the following error in the CryoSPARC web interface when I try to import movies:

Unable to queue P1 J4: Scheduler error: no schedule lanes found. Please connect a worker using 'cryosparcw connect'.

So I tried the following command:

./../cryosparc_worker/bin/cryosparcw connect --master localhost --port 39000 --worker localhost --nossd

I get the following error; I'm not sure where my problem diverged from Samhitha's:

CRYOSPARC CONNECT --------------------------------------------

Attempting to register worker localhost to command localhost:39002
Connecting as unix user spuser
Will register using ssh string: spuser@localhost
If this is incorrect, you should re-run this command with the flag --sshstr

/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py:134: UserWarning: *** CommandClient: (http://localhost:39002/api) URL Error [Errno 111] Connection refused, attempt 1 of 3. Retrying in 30 seconds
system = self._get_callable("system.describe")()
/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py:134: UserWarning: *** CommandClient: (http://localhost:39002/api) URL Error [Errno 111] Connection refused, attempt 2 of 3. Retrying in 30 seconds
system = self._get_callable("system.describe")()
Traceback (most recent call last):
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 104, in func
with make_json_request(self, "/api", data=data, _stacklevel=4) as request:
File "/spshared/apps/cryosparc24/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 225, in make_request
raise CommandError(error_reason, url=url, code=code, data=resdata)
cryosparc_tools.cryosparc.errors.CommandError: *** (http://localhost:39002/api, code 500) URL Error [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "bin/connect.py", line 78, in <module>
cli = client.CommandClient(host=master_hostname, port=command_core_port, service="command_core")
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/client.py", line 38, in __init__
super().__init__(service, host, port, url, timeout, headers, cls=NumpyEncoder)
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 96, in __init__
self._reload() # attempt connection immediately to gather methods
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 134, in _reload
system = self._get_callable("system.describe")()
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 107, in func
raise CommandError(
cryosparc_tools.cryosparc.errors.CommandError: *** (http://localhost:39002, code 500) Encounted error from JSONRPC function "system.describe" with params ()

When I run

cryosparcm status

I get the following:
Current cryoSPARC version: v4.4.1

CryoSPARC is not running.


global config variables:
export CRYOSPARC_LICENSE_ID="redacted"
export CRYOSPARC_MASTER_HOSTNAME="spgpu"
export CRYOSPARC_DB_PATH="/spshared/apps/cryosparc24/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true

I also checked whether the relevant ports were closed for some reason, and got this:

nc -zv spgpu 39000-39010

Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connection refused.

netstat with grep didn't return anything either:
netstat -tuplen | grep :39000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
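
An equivalent check with ss, which also names the owning process for sockets owned by the current user:

ss -ltnp | grep 39000    # -p names the owning process for your own sockets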

Thank you for your help.

Are you trying

  • to connect a separate, additional worker to an existing CryoSPARC combined master/worker "workstation"
  • or complete an aborted --standalone installation by connecting the worker component to the master component on the same host?

What are the outputs of these commands:

cryosparcm status | grep -v LICENSE
hostname -f
host $(hostname -f)

@rakroy Is this post essentially a duplicate of Could not get database status in standalone installation - #5 by rakroy? Shall we continue the discussion under that other topic?

Based on information you posted under other topics, the resolution may be:

  1. Ensure CryoSPARC is running
    cryosparcm start
  2. Connect the worker component with this command:
    /spshared/apps/cryosparc24/cryosparc_worker/bin/cryosparcw connect \
    --master 127.0.0.1 --worker spgpu --port 39000 --nossd
    

Does this help?
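
If the connect succeeds, the new worker lane should then be visible in the scheduler targets; a quick check via the cryosparcm CLI (sketch):

cryosparcm cli "get_scheduler_targets()"   # registered workers appear in the output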

Hey @wtempel,

I tried

cryosparcm start

and it starts fine.

Then I tried

./../cryosparc_worker/bin/cryosparcw connect --master 127.0.0.1 --worker spgpu --port 39000 --nossd

I got the following error:

Attempting to register worker spgpu to command 127.0.0.1:39002
Connecting as unix user spuser
Will register using ssh string: spuser@spgpu
If this is incorrect, you should re-run this command with the flag --sshstr

Traceback (most recent call last):
File "bin/connect.py", line 79, in <module>
assert cli.test_connection(), "Could not connect to cryoSPARC master at %s:%d" % (master_hostname, command_core_port)
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 121, in func
raise CommandError(
cryosparc_tools.cryosparc.errors.CommandError: *** (http://127.0.0.1:39002, code 403) Encountered ServerError from JSONRPC function "test_connection" with params ():
ServerError: Authentication failed - License-ID mismatch.
Please ensure cryosparc_master/config.sh and cryosparc_worker/config.sh have the same CRYOSPARC_LICENSE_ID entry
or CRYOSPARC_LICENSE_ID is set correctly in the current environment.
See CryoSPARC Architecture and System Requirements - CryoSPARC Guide for more details.

Along with the previous error, I also checked that my ~/.bashrc has the right license ID.
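
To compare the master and worker entries directly, something like this should expose any mismatch (paths from this install):

grep CRYOSPARC_LICENSE_ID /spshared/apps/cryosparc24/cryosparc_master/config.sh
grep CRYOSPARC_LICENSE_ID /spshared/apps/cryosparc24/cryosparc_worker/config.sh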

to connect a separate, additional worker to an existing CryoSPARC combined master/worker "workstation"

- No. This is the standalone machine, where the default worker lane isn't coming up.

or complete an aborted --standalone installation by connecting the worker component to the master component on the same host?

- Yes, it is the same host. The installation seemingly finished correctly, and then when I tried to import movies, I saw the error.

The outputs of the commands are given below:

[spuser@spgpu cryosparc_master]$ cryosparcm status | grep -v LICENSE
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/spshared/apps/cryosparc24/cryosparc_master
Current cryoSPARC version: v4.4.1
----------------------------------------------------------------------------

CryoSPARC process status:

app                              RUNNING   pid 69121, uptime 0:16:09
app_api                          RUNNING   pid 69215, uptime 0:16:08
app_api_dev                      STOPPED   Not started
command_core                     RUNNING   pid 69045, uptime 0:16:20
command_rtp                      RUNNING   pid 69079, uptime 0:16:13
command_vis                      RUNNING   pid 69074, uptime 0:16:14
database                         RUNNING   pid 68925, uptime 0:16:23

----------------------------------------------------------------------------
License is valid
----------------------------------------------------------------------------

global config variables:
export CRYOSPARC_MASTER_HOSTNAME="spgpu"
export CRYOSPARC_DB_PATH="/spshared/apps/cryosparc24/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true

[spuser@spgpu cryosparc_master]$ hostname -f
spgpu

[spuser@spgpu cryosparc_master]$ host $(hostname -f)
Host spgpu not found: 3(NXDOMAIN)

Hey @wtempel,

Did you have any luck finding an answer for how I can make this work? Please let me know.

This is the current error that I have:

CRYOSPARC CONNECT --------------------------------------------

Attempting to register worker spgpu to command 127.0.0.1:39002
Connecting as unix user spuser
Will register using ssh string: spuser@spgpu
If this is incorrect, you should re-run this command with the flag --sshstr

Connected to master.

Current connected workers:

Worker will be registered with 64 CPUs.
Traceback (most recent call last):
File "bin/connect.py", line 232, in <module>
if os.environ['CRYOSPARC_USE_GPU'] == "true" and not args.nogpu:
File "/spshared/apps/cryosparc24/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/os.py", line 675, in __getitem__
raise KeyError(key) from None
KeyError: 'CRYOSPARC_USE_GPU'

Please post:

  • the full command that triggered KeyError: 'CRYOSPARC_USE_GPU'
  • output of the commands
    grep -v LICENSE /spshared/apps/cryosparc24/cryosparc_worker/config.sh
    nvidia-smi
    

This name resolution failure may cause problems later on. Will this CryoSPARC instance remain standalone, or are there plans to add additional workers in the future?

Hey @wtempel ,

Here are the answers you asked for. Thank you so much again.

This name resolution failure may cause problems later on. Will this CryoSPARC instance remain standalone, or are there plans to add additional workers in the future?

  • it will stay standalone.

the full command that triggered KeyError: 'CRYOSPARC_USE_GPU'

- ./../cryosparc_worker/bin/cryosparcw connect --master 127.0.0.1 --worker spgpu --port 39000 --nossd

Output of

grep -v LICENSE /spshared/apps/cryosparc24/cryosparc_worker/config.sh
# Instance Configuration
export CRYOSPARC_MASTER_HOSTNAME="spgpu"
export CRYOSPARC_DB_PATH="/spshared/apps/cryosparc24/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000

# Security
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true

# Cluster Integration
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000

# Project Configuration
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'

# Development
export CRYOSPARC_DEVELOP=false

# Other
export CRYOSPARC_CLICK_WRAP=true

Output of

nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.89.02    Driver Version: 525.89.02    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:31:00.0  On |                  N/A |
| 30%   46C    P8    34W / 350W |    970MiB / 24576MiB |     32%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:4B:00.0 Off |                  N/A |
| 30%   44C    P8    22W / 350W |     10MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  Off  | 00000000:B1:00.0 Off |                  N/A |
| 30%   33C    P8    21W / 350W |     10MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  Off  | 00000000:CA:00.0 Off |                  N/A |
| 30%   36C    P8    23W / 350W |     10MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      7408      G   /usr/libexec/Xorg                  54MiB |
|    0   N/A  N/A      7730      G   /usr/bin/gnome-shell              319MiB |
|    0   N/A  N/A      8018      G   ...mviewer/tv_bin/TeamViewer       14MiB |
|    0   N/A  N/A     29878      G   /usr/libexec/Xorg                 262MiB |
|    0   N/A  N/A     30200      G   /usr/bin/gnome-shell               61MiB |
|    0   N/A  N/A     30229      G   ...mviewer/tv_bin/TeamViewer       15MiB |
|    0   N/A  N/A     31080      G   /usr/lib64/firefox/firefox        182MiB |
|    0   N/A  N/A     51720      G   ...332048342457996392,131072       54MiB |
|    1   N/A  N/A      7408      G   /usr/libexec/Xorg                   4MiB |
|    1   N/A  N/A     29878      G   /usr/libexec/Xorg                   4MiB |
|    2   N/A  N/A      7408      G   /usr/libexec/Xorg                   4MiB |
|    2   N/A  N/A     29878      G   /usr/libexec/Xorg                   4MiB |
|    3   N/A  N/A      7408      G   /usr/libexec/Xorg                   4MiB |
|    3   N/A  N/A     29878      G   /usr/libexec/Xorg                   4MiB |
+-----------------------------------------------------------------------------+

This output resembles the config.sh file inside a cryosparc_master/ directory.
cryosparc_worker/config.sh should look similar to:

export CRYOSPARC_LICENSE_ID="your-secret-license-id"
export CRYOSPARC_USE_GPU=true

Some error may have occurred during the cryosparc_worker/ installation whose consequences reach beyond an incorrect cryosparc_worker/config.sh file. It is possible to redo the cryosparc_worker/ installation using a freshly unpacked cryosparc_worker/ directory, without having to repeat the cryosparc_master/ installation.
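
A sketch of that re-install, assuming the v4.4.1 worker tarball is still on hand (the tarball name here is assumed) and the same license ID as the master:

cd /spshared/apps/cryosparc24
mv cryosparc_worker cryosparc_worker_old    # set the broken tree aside
tar -xf cryosparc_worker.tar.gz             # fresh unpack
cd cryosparc_worker
./install.sh --license $LICENSE_ID          # same CRYOSPARC_LICENSE_ID as the master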

You may be able to resolve the name resolution failure by adding a line

127.2.2.2 spgpu

to the /etc/hosts file of the computer.
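
Afterwards, the mapping can be verified with getent, which consults /etc/hosts:

getent hosts spgpu    # expect: 127.2.2.2   spgpu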


Thanks @wtempel

I did a fresh unpack, and then it did connect; I am able to start the CryoSPARC browser interface, etc.

I tried to import movies and am getting hit by this error. Is this error from the installation side?
It feels to me like this is because of some duplicate .tiff files. Please let me know what you think.

Traceback (most recent call last):
File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/imports/run.py", line 858, in run_import_movies_or_micrographs
assert len(set(abs_paths_to_import)) == len(abs_paths_to_import), "All import paths are not unique - does your wildcard expression include the gain reference .mrc by accident?"
AssertionError: All import paths are not unique - does your wildcard expression include the gain reference .mrc by accident?

Thank you for your advice again.

Never mind. I figured out it was a problem with the gain_reference.tiff; I had to move it to a different folder.
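
In other words, the import wildcard was also matching the gain reference stored alongside the movies. A hypothetical example of the workaround (the paths are made up):

mv /data/movies/gain_reference.tiff /data/gain/   # keep the gain out of the wildcard's reach
ls /data/movies/*.tiff | head                     # confirm only movie files match now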

For the config.sh under master, I have had some issues with the hostname.

It is most stable if you use an IP address, which you can find by opening a terminal and running ifconfig.

There should be an IP address (e.g. 192.10.51.123) that you can put in as the hostname. When everything starts, you can point a web browser at 192.10.51.123:39000 (assuming you set port 39000) or localhost:39000.

In ~/install-dir/cryosparc_master/config.sh:
export CRYOSPARC_MASTER_HOSTNAME="192.10.51.123"

The troublesome cases were when the workstation was on a network that kept changing its IP address, until we got a static IP assigned to the NIC's MAC address. We found https://guide.cryosparc.com/setup-configuration-and-management/management-and-monitoring/cryosparcm (about CRYOSPARC_FORCE_HOSTNAME) helpful on some workstations.

In your case, the 127.0.0.1 suggests it is on Wi-Fi or something similar.
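
For reference, a sketch of the relevant cryosparc_master/config.sh lines under that approach (the IP is an example; per the guide, CRYOSPARC_FORCE_HOSTNAME skips the hostname check at startup):

export CRYOSPARC_MASTER_HOSTNAME="192.10.51.123"
export CRYOSPARC_FORCE_HOSTNAME=true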