Worker connect does not work during installation?

When installing standalone, I get a log like this and cannot complete the installation.

Autodetecting available GPUs...
Traceback (most recent call last):
  File "bin/", line 221, in <module>
    gpu_devidxs = check_gpus()
  File "bin/", line 91, in check_gpus
    num_devs = print_gpu_list()
  File "bin/", line 23, in print_gpu_list
    import pycuda.driver as cudrv
  File "/home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/", line 65, in <module>
    from pycuda._driver import *  # noqa
ImportError: /home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/ undefined symbol: _ZSt28__throw_bad_array_new_lengthv

Welcome to the forum @aiueo1234.
What’s the output of the following set of commands (run on the worker)?
lscpu && free -g && uname -a && nvidia-smi

I am facing the same issue. Please advise on how to proceed.

My output for the commands you asked for is attached.

Hi @Adwaith99, @aiueo1234,

Can you also report what version of cryoSPARC you have installed? You can find it by running cryosparcm status.

This is the output of cryosparcm status: v3.3.1

@Adwaith99 @aiueo1234
You may try the following:

  • install cuda-11.2; ensure installation completes without error
  • repeat the worker installation, pointing --cudapath to the version 11.2 installation

Do you still get the same error?

I am still getting the same error.

@Adwaith99 Given my lack of experience with Arch Linux (inferred from your uname output), I can only suggest looking out for missing prerequisites, prerequisites at the wrong version (as a web search for _ZSt28__throw_bad_array_new_lengthv suggests), or errors during the installation of prerequisites such as CUDA.
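As an aside, that mangled symbol can be decoded with c++filt (part of binutils) to see what the linker is actually looking for; this is just a diagnostic sketch, not anything cryoSPARC itself requires:

```shell
# Demangle the symbol from the ImportError:
c++filt _ZSt28__throw_bad_array_new_lengthv
# Prints: std::__throw_bad_array_new_length()
```

That helper was added in relatively recent libstdc++ releases, which fits the picture of a newer system-built pycuda expecting a symbol that an older bundled libstdc++ does not provide.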
Ubuntu-LTS may provide a smoother cryoSPARC installation experience, in case a switch of operating systems is feasible for you.
A word of caution: the reported VRAM size is below posted minimum requirements; a typically-sized cryoEM problem may not be supported by this configuration.


Thank you for your suggestions. I’ll look into those options. Also, I have access to a workstation where I was able to successfully install cryoSPARC. This was just a try on my laptop. So, no issues. Thanks again.

I’ve been using cryoSPARC on Arch Linux for a while and never had this issue until installing it on a new box (running Manjaro, which is Arch based) today. I realise this topic is a few weeks old now, but as I stumbled onto this issue installing cryoSPARC, and have a workaround, I thought I’d sign up and post it for future reference.

In my so far limited testing this does not appear to break anything in cryoSPARC, although I have not yet tested exhaustively…

The issue, as the Python traceback indicates, is with PyCUDA, although it actually appears to be because the latest CUDA available on Arch is expecting a newer libstdc++ than the Anaconda install for cryoSPARC provides.
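One way to check that theory (the paths below are examples; substitute your own system library location and worker install prefix) is to compare the highest GLIBCXX symbol version each copy of libstdc++ exports:

```shell
# Highest GLIBCXX version provided by the system libstdc++
# (example path; locate yours with `ldconfig -p | grep stdc++`):
strings /usr/lib/libstdc++.so.6 | grep '^GLIBCXX_' | sort -V | tail -n 1

# Same check against the copy bundled in the worker's conda env
# (install prefix is an example):
strings /home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/libstdc++.so.6 \
    | grep '^GLIBCXX_' | sort -V | tail -n 1
```

If the system copy reports a strictly higher version than the bundled one, anything compiled against the system toolchain (such as pycuda's _driver extension) can reference symbols the bundled library does not have.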

The workaround was fairly simple. I do not know if it will “stick” between cryoSPARC updates, but I have just quickly run through import, patch correction, CTF estimation, picking and 2D classification of a small in-house dataset to check it wasn’t going to immediately panic. I’ll be setting up a longer run overnight.


Install cryoSPARC until the error occurs during worker connection. Open a new terminal and navigate to:


Where $cryosparc_worker is wherever you installed the cryoSPARC worker, then run:


ln -s /usr/lib64/

Open another terminal, navigate to:


And run:

./bin/cryosparcw gpulist

This should output a list of GPUs; if the libstdc++ symlink is bad (or missing) it will throw the same PyCUDA error. After confirming it’s OK, run:

./bin/cryosparcw connect --worker [workerHostname] --master [masterHostname] --[anyOtherFlagsYouNeed]

Check the cryoSPARC install page for what other flags can be used. This should output the final configuration table, listing hostname, GPUs, resource slots, etc., and cryoSPARC should run jobs successfully.

I’m not a fan of playing with symlinking of system libraries (at least at the system level) but as the cryoSPARC Anaconda environment is self-contained (and doesn’t get called except by cryoSPARC) it shouldn’t affect anything else on the system. It will probably break if you get a system libstdc++ update, or need reapplying with an updated symlink. If a system libstdc++ update does break it, it will probably manifest as cryoSPARC refusing to get past the “License is valid.” notification at the start of any run.
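For checking whether the workaround is (still) in place after an update, something like the following should do; all paths are examples based on the traceback earlier in this thread, so adjust them to your install:

```shell
# Example worker env path; adjust to your installation:
ENV_LIB=/home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib

# Is the bundled libstdc++ a symlink, and where does it point?
ls -l "$ENV_LIB"/libstdc++.so.6

# Which libstdc++ does pycuda's compiled extension actually resolve?
ldd "$ENV_LIB"/python3.7/site-packages/pycuda/_driver*.so | grep 'libstdc++'
```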


@rbs_sci Thank you for this solution and for taking the time to document both the issue and the fix on your end; it appears to have worked for us as well!

I wanted to add some information about our system, as it is quite different from @Adwaith99's system, and hopefully this may help troubleshooting in the future. We are running a Linux box with 2 RTX 3090s on a Sage 299x board with an Intel processor, and a fresh install of Ubuntu 22.04 LTS after moving away from CentOS 7 (with the 3.10 kernel; we are hoping the move will fix other issues as well). At the moment we only have cryoSPARC and CUDA 11.2 on our system, and this was the first (and now, I think, only) major error we got during installation.

Notably, we are not running an Arch-based system, though perhaps this is tied to our use of 22.04 LTS instead of 20.04 LTS. If anyone has further input or theories on why we are also seeing this error, any help you can provide would be appreciated.

Thanks for the solution. We just ran into the same problem with a new Ubuntu MATE 22.04 install.

For Ubuntu 22.04 it seems the operating system’s default libstdc++ is in /lib/x86_64-linux-gnu:

ln -s /lib/x86_64-linux-gnu/ 

For any linux system, the location of the OS’s default libstdc++ lib file can be found by:

ldconfig -p | grep stdc++

In addition, if the bundled libstdc++ library files in the cryosparc_worker_env/lib directory are simply deleted or renamed, the programs will automatically fall back to the system’s libstdc++, which will work just fine:
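To sketch that rename-based fallback (the directory and file names below are examples based on the paths in the traceback above; check with ls first):

```shell
# Rename the bundled libstdc++ copies so the dynamic loader falls back
# to the system libstdc++; path and names are examples:
cd /home/user/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib
for f in libstdc++.so*; do
    # guard against a non-matching glob, and catch dangling symlinks too
    if [ -e "$f" ] || [ -L "$f" ]; then
        mv -- "$f" "$f.bak"
    fi
done
```

Renaming rather than deleting makes it trivial to revert if a later cryoSPARC update expects the bundled copies back.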