CryoSPARC installation problem

Dear community, this is my first post; sorry if this has been asked already — I didn't find it.

My name is Davide Fuzzati, I'm 56 years old and I'm writing from Italy. Glad to meet you all!

I'm trying to install CryoSPARC on a Supermicro machine with two Intel Xeon Gold 6426Y CPUs and an NVIDIA A2 Tensor Core GPU.

The home directory is an NFS mount (/home), and the software will be installed into another NFS mount (/opt/share/sw).

Installation runs just fine, but at one point I get this error:

---------------------------------------------------------------
  CRYOSPARC CONNECT --------------------------------------------
---------------------------------------------------------------
  Attempting to register worker acnode03.e4red to command acnode03.e4red:61002
  Connecting as unix user odebei
  Will register using ssh string: odebei@acnode03.e4red
  If this is incorrect, you should re-run this command with the flag --sshstr <ssh string>
---------------------------------------------------------------
  Connected to master.
---------------------------------------------------------------
  Current connected workers:
---------------------------------------------------------------
  Worker will be registered with 128 CPUs.
  Autodetecting available GPUs...
Traceback (most recent call last):
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/ffi.py", line 180, in <module>
    lib = ctypes.CDLL(str(_lib_handle.__enter__()))
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "bin/connect.py", line 233, in <module>
    gpu_devidxs = check_gpus()
  File "bin/connect.py", line 96, in check_gpus
    correct_driver_version = get_correct_min_driver_version(driver_version)
  File "/opt/share/users/odebei/cryosparc_worker/cryosparc_compute/get_gpu_info.py", line 40, in get_correct_min_driver_version
    from cryosparc_compute.gpu.driver import CUDA_TOOLKIT_MIN_DRIVER_VERSIONS, get_toolkit_version
  File "/opt/share/users/odebei/cryosparc_worker/cryosparc_compute/gpu/driver.py", line 13, in <module>
    import numba.cuda
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/__init__.py", line 69, in <module>
    from numba.core import config
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/core/config.py", line 16, in <module>
    import llvmlite.binding as ll
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/__init__.py", line 4, in <module>
    from .dylib import *
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/dylib.py", line 3, in <module>
    from llvmlite.binding import ffi
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/ffi.py", line 187, in <module>
    raise OSError(msg)
OSError: Could not find/load shared object file: libllvmlite.so
Error was: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so)
[odebei@acnode03 cryosparc_master]$
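The key part is the ``GLIBCXX_3.4.30' not found` error: the system's /usr/lib64/libstdc++.so.6 is older than what the bundled libLLVM-14.so was built against. As a hedged sketch, one way to check which GLIBCXX symbol versions a given libstdc++ exports is below (the `has_glibcxx` helper name is illustrative, not part of CryoSPARC; `tr` is used as a portable stand-in for the `strings` utility):

```shell
# has_glibcxx LIB TAG
# Succeeds when the file LIB contains the printable string TAG
# (e.g. GLIBCXX_3.4.30) on its own NUL/newline-delimited run.
# Extracting printable runs with tr is a portable stand-in for `strings`.
has_glibcxx() {
    [ -e "$1" ] && tr -c '[:print:]' '\n' < "$1" | grep -qx "$2"
}

# Usage on a real system (path taken from the traceback above):
#   has_glibcxx /usr/lib64/libstdc++.so.6 GLIBCXX_3.4.30 && echo present
```

If the check fails on the system library, the symbol version really is missing there, which matches the OSError in the traceback.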

Can someone please help me sort this out?

Thank You so very much in advance for your great courtesy and responsiveness.

Regards,
Davide Fuzzati

Welcome to the forum @fuztec25.

Please can you post the outputs of these commands on the machine acnode03:

hostname -f
nvidia-smi
uname -a
cat /etc/*release
env | grep PATH

Hello Wolfram, sorry for bothering you. I'm closing this ticket, as I've been told that the machine described above was decommissioned last week.
When installing on a new node, I will open a new ticket if required.


Good morning Wolfram and everyone at support, I'll continue in this ticket…

OK, I switched to a brand new dedicated machine (this one is entirely mine!): a twin-square one.

Same specs as before, two Intel Xeon Gold 6426Y CPUs and an NVIDIA A2; the only differences are the number of CPUs (64) and the RAM (512 GB).

Same error as before:

Worker will be registered with 64 CPUs.
Autodetecting available GPUs...
Traceback (most recent call last):
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/ffi.py", line 180, in <module>
    lib = ctypes.CDLL(str(_lib_handle.__enter__()))
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "bin/connect.py", line 233, in <module>
    gpu_devidxs = check_gpus()
  File "bin/connect.py", line 96, in check_gpus
    correct_driver_version = get_correct_min_driver_version(driver_version)
  File "/opt/share/users/odebei/cryosparc_worker/cryosparc_compute/get_gpu_info.py", line 40, in get_correct_min_driver_version
    from cryosparc_compute.gpu.driver import CUDA_TOOLKIT_MIN_DRIVER_VERSIONS, get_toolkit_version
  File "/opt/share/users/odebei/cryosparc_worker/cryosparc_compute/gpu/driver.py", line 13, in <module>
    import numba.cuda
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/__init__.py", line 69, in <module>
    from numba.core import config
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/core/config.py", line 16, in <module>
    import llvmlite.binding as ll
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/__init__.py", line 4, in <module>
    from .dylib import *
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/dylib.py", line 3, in <module>
    from llvmlite.binding import ffi
  File "/opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/ffi.py", line 187, in <module>
    raise OSError(msg)
OSError: Could not find/load shared object file: libllvmlite.so
Error was: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /opt/share/users/odebei/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/llvmlite/binding/../../../../libLLVM-14.so)

And these are the answers to your requests:
hostname -f = icnode04.e4red
nvidia-smi = NVIDIA-SMI 515.105.01 Driver Version: 515.105.01 CUDA Version: 11.7
uname -a = Linux icnode04.e4red 4.18.0-477.10.1.el8_8.x86_64 #1 SMP Wed Apr 5 13:35:01 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux

cat /etc/*release = NAME="Red Hat Enterprise Linux"
VERSION="8.8 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"
Red Hat Enterprise Linux release 8.8 (Ootpa)
Red Hat Enterprise Linux release 8.8 (Ootpa)
env | grep PATH = LD_LIBRARY_PATH=/opt/share/sw/gcc-8.5.0/slurm-22.05.2/lib:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/lib:/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/lib:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/lib:/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/lib:/opt/rocm-5.2.1/lib:/opt/rocm-5.2.1/lib64:/usr/lib:/usr/lib64:/opt/share/libs/nvidia/cuda-11.7/lib64:/opt/share/libs/nvidia/cuda-11.7/lib64/stubs
MODULES_RUN_QUARANTINE=LD_LIBRARY_PATH LD_PRELOAD
PATH_modshare=/usr/bin:1:/opt/rocm-5.2.1/bin:1:/opt/rocm-5.2.1/opencl/bin:1:/usr/share/Modules/bin:1:/usr/local/bin:1:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/bin:1:/opt/rocm-5.2.1/llvm/bin:1:/home/odebei/.local/bin:1:/home/odebei/bin:1:/usr/sbin:1:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/bin:1:/opt/share/sw/gcc-8.5.0/slurm-22.05.2/bin:1:/opt/share/libs/nvidia/cuda-11.7//bin:1:/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/bin:1:/usr/local/sbin:1:/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/bin:1
ROCM_PATH=/opt/rocm-5.2.1
LD_LIBRARY_PATH_modshare=/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/lib:1:/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/lib:1:/opt/rocm-5.2.1/lib:1:/usr/lib64:1:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/lib:1:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/lib:1:/opt/share/sw/gcc-8.5.0/slurm-22.05.2/lib:1:/opt/rocm-5.2.1/lib64:1:/usr/lib:1:/opt/share/libs/nvidia/cuda-11.7/lib64:1:/opt/share/libs/nvidia/cuda-11.7/lib64/stubs:1
CMAKE_PREFIX_PATH=/opt/rocm-5.2.1
CMAKE_PREFIX_PATH_modshare=/opt/rocm-5.2.1:1
PMIX_DSTORE_ESH_BASE_PATH=/var/spool/slurm/pmix.9134.0//pmix_dstor_ds12_3517
MANPATH_modshare=/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/share/man:1:/opt/rocm-5.2.1/share/man:1:/opt/share/libs/nvidia/cuda-11.7/share/man:1::1:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/share/man:1:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/share/man:1:/opt/share/sw/gcc-8.5.0/slurm-22.05.2/share/man:1:/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/share/man:1
HIP_PATH=/opt/rocm-5.2.1/hip
MANPATH=/opt/share/sw/gcc-8.5.0/slurm-22.05.2/share/man:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/share/man:/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/share/man:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/share/man:/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/share/man:/opt/rocm-5.2.1/share/man:/opt/share/libs/nvidia/cuda-11.7/share/man::
MODULEPATH=/etc/scl/modulefiles:/usr/share/Modules/modulefiles:/etc/modulefiles:/usr/share/modulefiles:/opt/share/modulefiles/comps:/opt/share/modulefiles/libs:/opt/share/modulefiles/sw
PMIX_DSTORE_21_BASE_PATH=/var/spool/slurm/pmix.9134.0//pmix_dstor_ds21_3517
MODULEPATH_modshare=/opt/share/modulefiles/comps:1:/opt/share/modulefiles/libs:1:/opt/share/modulefiles/sw:1:/usr/share/Modules/modulefiles:2:/etc/modulefiles:2:/usr/share/modulefiles:2
PATH=/opt/share/sw/gcc-8.5.0/slurm-22.05.2/bin:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/bin:/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/bin:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/bin:/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/bin:/opt/rocm-5.2.1/bin:/opt/rocm-5.2.1/llvm/bin:/opt/rocm-5.2.1/opencl/bin:/opt/share/libs/nvidia/cuda-11.7//bin:/home/odebei/.local/bin:/home/odebei/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin

Wolfram, thank You SO VERY MUCH IN ADVANCE for your great courtesy and responsiveness, regards,

Davide Fuzzati

You may need to upgrade the NVIDIA driver to version 520.61.05 or newer and reboot the machine.

The error may also be a consequence of customizations in the environment variables. After the NVIDIA driver upgrade and reboot, please can you run the command
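If in doubt whether an installed driver meets such a minimum, a dotted-version comparison can be sketched with `sort -V` (the `driver_ok` function name is illustrative, not a CryoSPARC command):

```shell
# driver_ok INSTALLED MINIMUM
# Succeeds when INSTALLED >= MINIMUM, comparing dotted version
# strings with GNU sort's version ordering (-V).
driver_ok() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example: the 515.105.01 reported by nvidia-smi in this thread is
# older than the suggested 520.61.05 minimum.
driver_ok 515.105.01 520.61.05 || echo "driver too old, upgrade needed"
```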

/opt/share/users/odebei/cryosparc_worker/bin/cryosparcw env | grep PATH

and post the output.
For example, it is possible that

LD_LIBRARY_PATH=/opt/share/sw/gcc-8.5.0/slurm-22.05.2/lib:/opt/share/libs/gcc-8.5.0/ucx-1.13.0/lib:/opt/share/libs/gcc-8.5.0/xpmem-2022.03.08/lib:/opt/share/libs/gcc-8.5.0/pmix-4.1.2/lib:/opt/share/libs/gcc-8.5.0/hwloc-2.8.0/lib:/opt/rocm-5.2.1/lib:/opt/rocm-5.2.1/lib64:/usr/lib:/usr/lib64:/opt/share/libs/nvidia/cuda-11.7/lib64:/opt/share/libs/nvidia/cuda-11.7/lib64/stubs

interferes with CryoSPARC by pointing it to an older, incompatible version of libstdc++.so.6 on your system. You can try adding the line

unset LD_LIBRARY_PATH

to the file

/opt/share/users/odebei/cryosparc_worker/config.sh

Does this help?
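As a quick way to test that hypothesis before editing config.sh, one could run the failing command with the variable stripped via `env -u` (a sketch; the cryosparcw path in the comment is the one from this thread, and the python3 line below is just a minimal stand-in showing that the variable really is removed from the child environment):

```shell
# Run a process with LD_LIBRARY_PATH removed from its environment.
# In the real case this would wrap the failing worker command, e.g.
#   env -u LD_LIBRARY_PATH /opt/share/users/odebei/cryosparc_worker/bin/cryosparcw connect ...
# Minimal demonstration that the variable is gone in the child process:
env -u LD_LIBRARY_PATH python3 -c \
    "import os; print('LD_LIBRARY_PATH' in os.environ)"   # prints: False
```

If the worker connects cleanly under `env -u LD_LIBRARY_PATH`, that would confirm the variable is the culprit and the `unset` line in config.sh is the right fix.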
[Edited, information added]

Dear Wolfram, thanks for your prompt reply.
Just a quick question: when you write "update the NVIDIA driver to v520.61.05", do you mean the NVIDIA driver bundled inside the CUDA toolkit, or the video card driver itself?

Sorry for the extremely dumb question… my first on these systems…

Thank You so very much in advance for your courtesy and responsiveness.

Regards,
Davide.

Video card driver. 🙂

Sorry Wolfram, forgive me for not being precise: the machine I own has an NVIDIA A2 Tensor Core GPU. Is the driver you mentioned right for the A2 board?

This should be the correct driver (the most recent one, anyway…).

Thank You so very much, will test it immediately.

Good day Wolfram,
since I cannot modify the cluster infrastructure, I was wondering which CryoSPARC version supports the v515.x NVIDIA drivers. I asked the students and they are OK with installing an older release, rather than the latest one, if it supports that driver.

May I ask you to point me to a link for a CryoSPARC version that runs with the driver above?

Thank you so very much in advance for your great courtesy and responsiveness.

With NVIDIA driver version v515, you may want to

but users will not be able to run 3DFlex jobs.