Cannot connect worker to master

Hi, I used the single-workstation installation command to install the software on our school’s pod cluster. When it tried to connect the worker node to the master node, it failed with the error listed below:

Connected to master.

Current connected workers:

Worker will be registered with 32 CPUs.
Autodetecting available GPUs…
Traceback (most recent call last):
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/ffi.py", line 180, in <module>
    lib = ctypes.CDLL(str(_lib_handle.__enter__()))
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /opt/ohpc/pub/compiler/gcc/5.4.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "bin/connect.py", line 233, in <module>
    gpu_devidxs = check_gpus()
  File "bin/connect.py", line 96, in check_gpus
    correct_driver_version = get_correct_min_driver_version(driver_version)
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/cryosparc_compute/get_gpu_info.py", line 40, in get_correct_min_driver_version
    from cryosparc_compute.gpu.driver import CUDA_TOOLKIT_MIN_DRIVER_VERSIONS, get_toolkit_version
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/cryosparc_compute/gpu/driver.py", line 13, in <module>
    import numba.cuda
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/__init__.py", line 69, in <module>
    from numba.core import config
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/core/config.py", line 16, in <module>
    import llvmlite.binding as ll
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/__init__.py", line 4, in <module>
    from .dylib import *
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/dylib.py", line 3, in <module>
    from llvmlite.binding import ffi
  File "/home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/ffi.py", line 187, in <module>
    raise OSError(msg)
OSError: Could not find/load shared object file: libllvmlite.so
Error was: /opt/ohpc/pub/compiler/gcc/5.4.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so)

When I tried to run the cryosparcw connect command again, the same error appeared.
I suspect this problem has to do with the pod cluster. Can somebody offer a solution that I could pass on to our IT people?
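
For reference, here is one way to check which libstdc++ the failing library actually resolves to under this environment. The long path is copied verbatim from the traceback above; only the ldd call and grep filter are added:

ldd /home/frankz8695/cryosparcuser/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so | grep libstdc++
# expected to show libstdc++.so.6 resolving to /opt/ohpc/pub/compiler/gcc/5.4.0/lib64/libstdc++.so.6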

Welcome to the forum @FrankZ8695.

What are the outputs of these commands on the computer/host where you are trying to install CryoSPARC?

uname -a
nvidia-smi
env | grep PATH

Was the installation command run as part of a Slurm job?

Hi,
Here is the output I got:

[frankz8695@pod-gpu bin]$ env | grep PATH

MANPATH=/opt/ohpc/pub/compiler/gcc/5.4.0/share/man:/opt/ohpc/pub/utils/autotools/share/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man/en:/usr/share/man

LD_LIBRARY_PATH=/opt/ohpc/pub/compiler/gcc/5.4.0/lib64

PATH=/home/jay/Softwares/gcc-12.1.0/bin:/home/jay/Softwares/openACC/Linux_x86_64/23.5/compilers/bin:/opt/ohpc/pub/compiler/gcc/5.4.0/bin:/opt/ohpc/pub/utils/prun/1.2:/opt/ohpc/pub/utils/autotools/bin:/opt/ohpc/pub/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/frankz8695/.local/bin:/home/frankz8695/bin

MODULEPATH=/opt/ohpc/pub/moduledeps/gnu:/sw/modulefiles:/sw/intel/oneapi/modulefiles:/opt/ohpc/pub/modulefiles

__LMOD_REF_COUNT_PATH=/opt/ohpc/pub/compiler/gcc/5.4.0/bin:1;/opt/ohpc/pub/utils/prun/1.2:1;/opt/ohpc/pub/utils/autotools/bin:1;/opt/ohpc/pub/bin:1;/usr/local/bin:1;/usr/bin:1;/usr/local/sbin:1;/usr/sbin:1

__LMOD_REF_COUNT_LD_LIBRARY_PATH=/opt/ohpc/pub/compiler/gcc/5.4.0/lib64:1

__LMOD_REF_COUNT_MANPATH=/opt/ohpc/pub/compiler/gcc/5.4.0/share/man:1;/opt/ohpc/pub/utils/autotools/share/man:1;/usr/local/share/man:1;/usr/share/man/overrides:1;/usr/share/man/en:1;/usr/share/man:1


Thanks for posting this info, @FrankZ8695.
The current version of CryoSPARC requires an NVIDIA driver version ≥ 520.
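
If you are unsure which driver version is installed, nvidia-smi can report it directly:

nvidia-smi --query-gpu=driver_version --format=csv,noheader
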
CryoSPARC manages its own PATH and LD_LIBRARY_PATH. I do not think any of the custom components in your environment are needed, and they may well interfere with CryoSPARC, as the GLIBCXX error above suggests.
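
You can confirm the mismatch directly: GLIBCXX_3.4.30 first shipped with GCC 12, so the GCC 5.4.0 libstdc++ that your LD_LIBRARY_PATH points at cannot provide it:

strings /opt/ohpc/pub/compiler/gcc/5.4.0/lib64/libstdc++.so.6 | grep GLIBCXX_3.4.30
# no output expected, confirming this library is too old for libLLVM-14.so
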
May I suggest installing and running CryoSPARC in an environment

  • with an initially minimal PATH, like
    PATH=/usr/bin:/usr/sbin
    
  • with an initially empty LD_LIBRARY_PATH
  • without “modules” (as in MODULEPATH) loaded
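
One way to enter such an environment is sketched below (a minimal example; adjust the preserved variables to your site's needs). Note that bash --noprofile --norc also skips the startup files that would normally load Lmod modules:

env -i HOME="$HOME" USER="$USER" TERM="$TERM" PATH=/usr/bin:/usr/sbin bash --noprofile --norc
echo "$LD_LIBRARY_PATH"   # should print an empty line
env | grep PATH           # should show only the minimal PATH

Running the installation and cryosparcw connect commands from a shell like this keeps the GCC 5.4.0 libraries out of the way.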