Worker connect does not work during installation?

When installing standalone, I get a log like this and cannot complete the installation.

Autodetecting available GPUs...
Traceback (most recent call last):
  File "bin/connect.py", line 221, in <module>
    gpu_devidxs = check_gpus()
  File "bin/connect.py", line 91, in check_gpus
    num_devs = print_gpu_list()
  File "bin/connect.py", line 23, in print_gpu_list
    import pycuda.driver as cudrv
  File "/home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/driver.py", line 65, in <module>
    from pycuda._driver import *  # noqa
ImportError: /home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/_driver.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZSt28__throw_bad_array_new_lengthv

Welcome to the forum @aiueo1234.
What’s the output of the following set of commands (run on the worker)?
lscpu && free -g && uname -a && nvidia-smi

I am facing the same issue. Please advise on how to proceed.

My output for the commands you asked for is attached.

Hi @Adwaith99, @aiueo1234,

Can you also report what version of cryoSPARC you have installed? You can find it by running cryosparcm status.

This is the output of cryosparcm status: v3.3.1

@Adwaith99 @aiueo1234
You may try the following:

  • install cuda-11.2; ensure installation completes without error
  • repeat the worker installation, pointing --cudapath to the version 11.2 installation (a minimal sketch of this command follows below)
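
For reference, a minimal sketch of what that re-installation could look like (the worker path, CUDA location and license variable below are placeholders; adjust them to your setup):

cd /home/user/cryosparc/cryosparc_worker
# re-run the worker installer, pointing --cudapath at the CUDA 11.2 toolkit
./install.sh --license $LICENSE_ID --cudapath /usr/local/cuda-11.2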

Do you still get the same error?

I am still getting the same error.

@Adwaith99 Given my lack of experience with Arch Linux (inferred from your uname output), I can only suggest looking out for missing prerequisites, prerequisites at the wrong version (as a web search for _ZSt28__throw_bad_array_new_lengthv suggests), or errors during the installation of prerequisites such as cuda.
Ubuntu-LTS may provide a smoother cryoSPARC installation experience, in case a switch of operating systems is feasible for you.
A word of caution: the reported VRAM size is below posted minimum requirements; a typically-sized cryoEM problem may not be supported by this configuration.
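
If it helps to double-check the VRAM figure, nvidia-smi can report it directly:

nvidia-smi --query-gpu=name,memory.total --format=csv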


Thank you for your suggestions. I’ll look into those options. Also, I have access to a workstation where I was able to successfully install cryoSPARC. This was just a try on my laptop. So, no issues. Thanks again.

I’ve been using cryoSPARC on Arch Linux for a while and never had this issue until installing it on a new box (running Manjaro, which is Arch based) today. I realise this topic is a few weeks old now, but as I stumbled onto this issue installing cryoSPARC, and have a workaround, I thought I’d sign up and post it for future reference.

In my so far limited testing this does not appear to break anything in cryoSPARC, although I have not yet tested exhaustively…

The issue, as the Python traceback indicates, is with PyCUDA, but the root cause appears to be that the latest CUDA available on Arch expects a newer libstdc++ than the Anaconda environment bundled with cryoSPARC provides.
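
If you want to confirm this on your own machine, you can compare what the bundled libstdc++ exports against the symbol the traceback complains about (paths are illustrative; $cryosparc_worker is your worker install directory, and the system library may live in /usr/lib or /usr/lib64):

# GLIBCXX versions exported by the libstdc++ bundled with the worker environment
strings $cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/libstdc++.so.6 | grep GLIBCXX
# the missing symbol appears in libstdc++ from GCC 11 onwards (GLIBCXX_3.4.29)
nm -D /usr/lib/libstdc++.so.6 | grep _ZSt28__throw_bad_array_new_lengthv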

The workaround was fairly simple. I do not know if it will “stick” between cryoSPARC updates, but I have just quickly run through import, patch correction, CTF estimation, picking and 2D classification of a small in-house dataset to check it wasn’t going to immediately panic. I’ll be setting up a longer run overnight.

Anyway.

Install cryoSPARC until the error occurs during worker connection. Open a new terminal and navigate to:

$cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib

Where $cryosparc_worker is wherever you installed the cryoSPARC worker, then run:

mv libstdc++.so.6.0.28 libstdc++.so.6.0.28.backup

ln -s /usr/lib64/libstdc++.so.6.0.29 libstdc++.so.6.0.28
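
As an optional sanity check at this point (assuming the paths above), confirm the symlink resolves and that the replacement library actually exports the symbol PyCUDA was missing:

ls -l libstdc++.so.6.0.28
nm -D libstdc++.so.6.0.28 | grep _ZSt28__throw_bad_array_new_lengthv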

Open another terminal and navigate to:

$cryosparc_worker

And run:

./bin/cryosparcw gpulist

This should output a list of GPUs; if the libstdc++ symlink is bad (or missing) it will throw the same PyCUDA error. After confirming it’s OK, run:

./bin/cryosparcw connect --worker [workerHostname] --master [masterHostname] --[anyOtherFlagsYouNeed]

Check the cryoSPARC install page for what other flags can be used. This should output the final configuration table, listing hostname, GPUs, resource slots, etc., and cryoSPARC should run jobs successfully.
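
For what it's worth, a filled-in version of that connect command might look something like this (hostnames, port and SSD cache path are placeholders, and the flags your version accepts may differ, so check the docs):

./bin/cryosparcw connect --worker worker1.example.org --master master1.example.org --port 39000 --ssdpath /scratch/cryosparc_cache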

I’m not a fan of playing with symlinking of system libraries (at least at the system level), but as the cryoSPARC Anaconda environment is self-contained (and isn’t called by anything except cryoSPARC) it shouldn’t affect anything else on the system. It will probably break if you get a system libstdc++ update, or it will need reapplying with an updated symlink. If a system libstdc++ update does break it, it will probably manifest as cryoSPARC refusing to get past the “License is valid.” notification at the start of any run.
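
If that happens, the fix should just be repointing the symlink at whatever libstdc++ the update installed (version numbers below are examples only):

cd $cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib
# a dangling symlink here means the system library version changed
ls -l libstdc++.so.6.0.28
ln -sf /usr/lib64/libstdc++.so.6.0.30 libstdc++.so.6.0.28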