Problems with 3.1 upgrade

Dear cryoSPARC team,
I tried upgrading to the newest version of cryoSPARC on a GPU machine where the former version, v2.15.0, ran without issues. Below is the output of the update command. Since then, nothing seems to work; running cryosparcm always results in the error “Warning: conda environment not found; this indicates that a cryoSPARC installation is either incomplete or in progress”.

Here is the full output:

cryosparcm update
CryoSPARC current version v2.15.0
          update starting on Tue 09 Feb 2021 19:17:02 CET

No version specified - updating to latest version.

=============================
Updating to version v3.1.0.
=============================
CryoSPARC is running.
Stopping cryosparc
command_proxy: stopped
command_vis: stopped
webapp: stopped
command_core: stopped
database: stopped
Shut down
  Downloading master update...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  779M  100  779M    0     0  12.1M      0  0:01:03  0:01:03 --:--:-- 9995k
  Downloading worker update...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 2914M  100 2914M    0     0  11.2M      0  0:04:18  0:04:18 --:--:-- 13.5M
  Done.

 Update will now be applied to the master installation,
 followed by worker installations on other nodes.

  Deleting old files...
  Extracting...
  Done.
  Updating dependencies...
Could not find conda environment: cryosparc_master_env
You can list all discoverable environments with `conda info --envs`.

Warning: Could not activate conda environment; this indicates that a cryoSPARC installation is either incomplete or in progress
===================================================
Warning: cryoSPARC v2 dependencies and cryosparc2_worker.tar.gz detected.
Performing full v3 worker update in 10 seconds (to cancel, press Ctrl + C)
...
 
===================================================
Installing latest master dependencies.
===================================================
 
  Checking dependencies...
  Dependencies for python have changed - reinstalling...
  ------------------------------------------------------------------------
  Installing anaconda python...
  ------------------------------------------------------------------------
PREFIX=/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/deps/anaconda
Unpacking payload ...
Traceback (most recent call last):
  File "entry_point.py", line 59, in <module>
  File "concurrent/futures/process.py", line 555, in __init__
  File "concurrent/futures/process.py", line 165, in __init__
  File "multiprocessing/queues.py", line 42, in __init__
  File "multiprocessing/context.py", line 68, in Lock
  File "multiprocessing/synchronize.py", line 162, in __init__
  File "multiprocessing/synchronize.py", line 57, in __init__
PermissionError: [Errno 13] Permission denied
[890968] Failed to execute script entry_point
  ------------------------------------------------------------------------
    Done.
    anaconda python installation successful.
  ------------------------------------------------------------------------
  Extracting all conda packages...
  ------------------------------------------------------------------------
........................................................................../data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/deps_bundle_hashes/python/install_python.sh: line 18: /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/deps/anaconda/etc/profile.d/conda.sh: No such file or directory
/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/deps_bundle_hashes/python/install_python.sh: line 19: conda: command not found
/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/deps_bundle_hashes/python/install_python.sh: line 20: conda: command not found
/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/deps_bundle_hashes/python/install_python.sh: line 21: conda: command not found

  ------------------------------------------------------------------------
    Done.
    conda packages installation successful.
  ------------------------------------------------------------------------
  Main dependency installation completed. Continuing...
  ------------------------------------------------------------------------
  Completed.
  Currently checking hash for mongodb
  Dependencies for mongodb have changed - reinstalling...
  ------------------------------------------------------------------------
  mongodb 3.4.10 installation successful.
  ------------------------------------------------------------------------
  Completed.
  Completed dependency check.
Warning: conda environment not found; this indicates that a cryoSPARC installation is either incomplete or in progress
 
===================================================
Successfully updated master to version v3.1.0.
===================================================
 
Warning: conda environment not found; this indicates that a cryoSPARC installation is either incomplete or in progress
Starting cryoSPARC System master process..
/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/bin/cryosparcm: line 662: supervisorctl: command not found
CryoSPARC is already running.
If you would like to restart, use cryosparcm restart
Warning: conda environment not found; this indicates that a cryoSPARC installation is either incomplete or in progress
/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/bin/cryosparcm: line 153: supervisorctl: command not found
CryoSPARC is running.
Stopping cryoSPARC
/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_master/bin/cryosparcm: line 786: supervisorctl: command not found

If someone has any hint on how to proceed, how to redo the upgrade/installation, or any other advice, that would be great. Thank you in advance for your help,
Ambroise

Hi @adesfosses, it looks like the dependencies failed to install due to a permission issue. Please try completing the update with the following instructions (for each step, substitute /path/to/ with the file path of the cryoSPARC installation on your system):

  1. Via command line, navigate to the cryosparc2_master directory, e.g.

    cd /path/to/cryosparc2_master
    
  2. Force-reinstall the dependencies:

    bin/cryosparcm forcedeps
    

    If you see any errors, send me the resulting output.

  3. Copy or upload the cryosparc2_worker.tar.gz worker update into the cryoSPARC worker module directory, e.g.,

    cp cryosparc2_worker.tar.gz /path/to/cryosparc2_worker
    
  4. Navigate to the cryosparc2_worker directory via command line, e.g.,

    cd /path/to/cryosparc2_worker
    
  5. Complete the worker update:

    bin/cryosparcw update
    
  6. Start cryoSPARC as you normally would:

    cryosparcm start
    

Send me any errors you see. Also check out this post for some advice on updates from v2 to v3: cryoSPARC v2 → v3 Update Issues

If you still encounter issues, I suggest backing up your database, deleting your cryosparc2_master and cryosparc2_worker folders, and doing a fresh installation via instructions here: https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc (use the same --dbpath flag as when you first installed cryoSPARC, e.g., $HOME/cryosparc/cryosparc2_database)
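Before deleting anything, a raw filesystem copy of the database directory is a safe first step. A minimal sketch; both paths below are assumptions based on the install location mentioned in this thread, so substitute your actual --dbpath:

```shell
# Stop cryoSPARC so mongod releases its lock on the database files
cryosparcm stop

# Plain copy of the database directory, preserving permissions and
# timestamps; do this before touching the installation folders
cp -a /data/4TB/cryosparc/cryosparc2_database /data/4TB/cryosparc/cryosparc2_database.bak
```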


Hi, thank you very much for your reply. I followed your steps, except that for step 2 I had to use sudo (is that normal?). However, in step 4 I got the following error:

 Skipping wheel build for pycuda, due to binaries being disabled for it.
Installing collected packages: scikit-cuda, pycuda
    Running setup.py install for pycuda ... error
    ERROR: Command errored out with exit status 1:
     command: /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-dpjhievo/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-dpjhievo/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-9wpdhviq/install-record.txt --single-version-externally-managed --compile --install-headers /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/include/python3.7m/pycuda
         cwd: /tmp/pip-req-build-dpjhievo/
    Complete output (106 lines):
    ***************************************************************
    *** WARNING: nvcc not in path.
    *** May need to set CUDA_INC_DIR for installation to succeed.
    ***************************************************************
    *************************************************************
    *** I have detected that you have not run configure.py.
    *************************************************************
    *** Additionally, no global config files were found.
    *** I will go ahead with the default configuration.
    *** In all likelihood, this will not work out.
    ***
    *** See README_SETUP.txt for more information.
    ***
    *** If the build does fail, just re-run configure.py with the
    *** correct arguments, and then retry. Good luck!
    *************************************************************
    *** HIT Ctrl-C NOW IF THIS IS NOT WHAT YOU WANT
    *************************************************************
    Continuing in 1 seconds...    
    /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/distutils/dist.py:274: UserWarning: Unknown distribution option: 'test_requires'
      warnings.warn(msg)
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.7
    creating build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/compiler.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/curandom.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/tools.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/cumath.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/__init__.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/characterize.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/_mymako.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/elementwise.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/debug.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/scan.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/gpuarray.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/reduction.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/_cluda.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/autoinit.py -> build/lib.linux-x86_64-3.7/pycuda
    copying pycuda/driver.py -> build/lib.linux-x86_64-3.7/pycuda
    creating build/lib.linux-x86_64-3.7/pycuda/gl
    copying pycuda/gl/__init__.py -> build/lib.linux-x86_64-3.7/pycuda/gl
    copying pycuda/gl/autoinit.py -> build/lib.linux-x86_64-3.7/pycuda/gl
    creating build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/cg.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/operator.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/packeted.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/inner.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/coordinate.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/__init__.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    copying pycuda/sparse/pkt_build.py -> build/lib.linux-x86_64-3.7/pycuda/sparse
    creating build/lib.linux-x86_64-3.7/pycuda/compyte
    copying pycuda/compyte/__init__.py -> build/lib.linux-x86_64-3.7/pycuda/compyte
    copying pycuda/compyte/array.py -> build/lib.linux-x86_64-3.7/pycuda/compyte
    copying pycuda/compyte/dtypes.py -> build/lib.linux-x86_64-3.7/pycuda/compyte
    running egg_info
    writing pycuda.egg-info/PKG-INFO
    writing dependency_links to pycuda.egg-info/dependency_links.txt
    writing requirements to pycuda.egg-info/requires.txt
    writing top-level names to pycuda.egg-info/top_level.txt
    reading manifest file 'pycuda.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    warning: no files found matching 'doc/source/_static/*.css'
    warning: no files found matching 'doc/source/_templates/*.html'
    warning: no files found matching '*.cpp' under directory 'bpl-subset/bpl_subset/boost'
    warning: no files found matching '*.html' under directory 'bpl-subset/bpl_subset/boost'
    warning: no files found matching '*.inl' under directory 'bpl-subset/bpl_subset/boost'
    warning: no files found matching '*.txt' under directory 'bpl-subset/bpl_subset/boost'
    warning: no files found matching '*.h' under directory 'bpl-subset/bpl_subset/libs'
    warning: no files found matching '*.ipp' under directory 'bpl-subset/bpl_subset/libs'
    warning: no files found matching '*.pl' under directory 'bpl-subset/bpl_subset/libs'
    writing manifest file 'pycuda.egg-info/SOURCES.txt'
    creating build/lib.linux-x86_64-3.7/pycuda/cuda
    copying pycuda/cuda/pycuda-complex-impl.hpp -> build/lib.linux-x86_64-3.7/pycuda/cuda
    copying pycuda/cuda/pycuda-complex.hpp -> build/lib.linux-x86_64-3.7/pycuda/cuda
    copying pycuda/cuda/pycuda-helpers.hpp -> build/lib.linux-x86_64-3.7/pycuda/cuda
    copying pycuda/sparse/pkt_build_cython.pyx -> build/lib.linux-x86_64-3.7/pycuda/sparse
    running build_ext
    building '_driver' extension
    creating build/temp.linux-x86_64-3.7
    creating build/temp.linux-x86_64-3.7/src
    creating build/temp.linux-x86_64-3.7/src/cpp
    creating build/temp.linux-x86_64-3.7/src/wrapper
    creating build/temp.linux-x86_64-3.7/bpl-subset
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/python
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/python/src
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/python/src/converter
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/python/src/object
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/system
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/system/src
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/smart_ptr
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/smart_ptr/src
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/thread
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/thread/src
    creating build/temp.linux-x86_64-3.7/bpl-subset/bpl_subset/libs/thread/src/pthread
    gcc -pthread -B /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -fwrapv -Wall -O3 -DNDEBUG -fPIC -DBOOST_ALL_NO_LIB=1 -DBOOST_THREAD_BUILD_DLL=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_PYTHON_SOURCE=1 -Dboost=pycudaboost -DBOOST_THREAD_DONT_USE_CHRONO=1 -DPYGPU_PACKAGE=pycuda -DPYGPU_PYCUDA=1 -DHAVE_CURAND=1 -Isrc/cpp -Ibpl-subset/bpl_subset -I/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/numpy/core/include -I/data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/include/python3.7m -c src/cpp/cuda.cpp -o build/temp.linux-x86_64-3.7/src/cpp/cuda.o
    In file included from src/cpp/cuda.cpp:4:
    src/cpp/cuda.hpp:14:10: fatal error: cuda.h: No such file or directory
       14 | #include <cuda.h>
          |          ^~~~~~~~
    compilation terminated.
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-dpjhievo/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-dpjhievo/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-9wpdhviq/install-record.txt --single-version-externally-managed --compile --install-headers /data/4TB/cryosparc/CryoSPARC/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/include/python3.7m/pycuda Check the logs for full command output.
  ------------------------------------------------------------------------
    Done.
    pip packages installation successful.

Given the error, we thought this might be CUDA-related, and we tried to fix that problem. However, when I wanted to retry the worker update, I got this message:

 bin/cryosparcw update
 cryosparc_worker.tar.gz file is missing. 

That seems to be because this file is removed during the first update. Any hint on how to continue?
Thanks much
Ambroise

Look at this guide to help you with the update:

https://guide.cryosparc.com/setup-configuration-and-management/software-updates#manual-cluster-updates


@adesfosses,

If you’re updating cryoSPARC with the same UNIX user account that runs cryoSPARC and was used to install it, then you should not need to use sudo. Permission issues like these sometimes arise from a changed or incorrect umask setting; I recommend setting it to 022 in the cryoSPARC user’s ~/.bashrc.
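For example (a sketch; which rc file applies depends on the cryoSPARC user’s login shell):

```shell
# Show the current umask; restrictive values like 0077 or 0027 can
# leave files extracted during the update unreadable later
umask

# Set it for the current session
umask 022

# Persist it for future logins of the cryoSPARC user
echo 'umask 022' >> ~/.bashrc
```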

Also make sure cryoSPARC isn’t running when you run forcedeps.

Here are a few commands to try to fix the permission issues (run them from the installation directory where the cryosparc2_master and/or cryosparc2_worker folders are located):

chown -R $(whoami) cryosparc2_master
chmod -R 755 cryosparc2_master

chown -R $(whoami) cryosparc2_worker
chmod -R 755 cryosparc2_worker
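To spot-check that the ownership and mode changes took effect (assuming GNU coreutils stat is available):

```shell
# Owner and mode of the top-level folders should now be the
# cryoSPARC user and 755
stat -c '%U %a' cryosparc2_master
stat -c '%U %a' cryosparc2_worker
```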

Finally, to resolve the CUDA issues, make sure the CUDA libraries are included in compile path environment variables like so (substituting the correct CUDA path for your system):

export CUDA_PATH="/usr/local/cuda"
export CUDA_INC_DIR="$CUDA_PATH/include"
export C_INCLUDE_PATH="$CUDA_INC_DIR"
export CPLUS_INCLUDE_PATH="$CUDA_INC_DIR"
export PATH="$CUDA_PATH/bin:$PATH"
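A quick sanity check before retrying the dependency install (a sketch, assuming the exports above are in effect in the current shell):

```shell
# nvcc must be resolvable for pycuda's build...
command -v nvcc || echo "nvcc NOT on PATH" >&2

# ...and cuda.h must exist under $CUDA_PATH/include; its absence is
# exactly what produced "fatal error: cuda.h: No such file or
# directory" in the log above
if [ -f "$CUDA_PATH/include/cuda.h" ]; then
    echo "cuda.h found"
else
    echo "cuda.h NOT found under $CUDA_PATH/include" >&2
fi
```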

Then you can try installing the dependencies again with bin/cryosparcw forcedeps.

You’re getting the “file is missing” error because the worker update itself already completed for v3; it’s just the dependencies that may not have finished installing.

If forcedeps didn’t work and you want to force the full bin/cryosparcw update again: look for a cryosparc2_worker.tar.gz file in that directory, rename it to cryosparc_worker.tar.gz (remove the 2), and then try the update again.
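In shell terms, the forced re-update would look something like this (the installation path is a placeholder):

```shell
cd /path/to/cryosparc2_worker

# The v3 updater looks for cryosparc_worker.tar.gz (without the "2");
# renaming the leftover v2-named archive lets it be picked up again
mv cryosparc2_worker.tar.gz cryosparc_worker.tar.gz

bin/cryosparcw update
```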

Hope that helps.


@nfrasser
Thank you for your reply. Even though I can now start cryoSPARC and access the login page, I now get the error “User not found”, even though I am using the usual username and password. We had to update the system recently; maybe that is causing all the problems. Maybe the easiest thing would be to do a full new installation, using a new user account (the current one belongs to a student who left)? We just hope that the databases and projects will be safe and accessible. What procedure would you recommend?
Thanks again for your help
Ambroise

You may be getting the “User not found” error because the database path changed somehow.

Look inside the cryosparc_master/config.sh file for the line export CRYOSPARC_DB_PATH="/path/to/cryosparc_database". Edit this line to make sure “/path/to/cryosparc_database” points to the same path on disk that you passed with the --dbpath argument when you first installed cryoSPARC.
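To check this without opening an editor (the installation path is a placeholder):

```shell
# Print the line that sets the database path
grep CRYOSPARC_DB_PATH /path/to/cryosparc_master/config.sh

# Then confirm the directory it points at still holds MongoDB data
# files (names like local.0 or collection-*.wt)
ls -la /path/to/cryosparc_database
```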

Finally restart cryoSPARC.

If this doesn’t work, check the database to make sure the projects you expect are there.

Enter this command:

cryosparcm mongo

In the resulting prompt, enter these two commands:

db.users.find()
db.projects.find()

Then exit the prompt with Ctrl + C.

If neither command outputs anything, this database does not contain your project and user data, so you’ll have to restore it from a backup. Here are instructions on how to do that: https://guide.cryosparc.com/setup-configuration-and-management/management-and-monitoring/cryosparcm#cryosparcm-backup

If you choose to reinstall cryoSPARC from scratch, stop the current running version with cryosparcm stop, navigate to a new location on disk, and follow the usual installation instructions: https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc

Thank you for the explanation. The CRYOSPARC_DB_PATH pointed to the right place. I should mention that there is now a /home/cryosparc folder that is a symlink to /data/4TB/cryosparc/, which is where we initially did the whole installation. I don’t know if this could have messed things up.
When we changed our Linux distribution and reinstalled it from scratch, we re-created the /data/4TB/cryosparc folder and copied into it the previous contents, which we had backed up elsewhere. After that, cryoSPARC ran perfectly fine, and the problems only appeared when I tried to upgrade to 3.1 following the instructions.
About the output of cryosparcm mongo: neither command outputs anything, so it looks like the databases are somehow empty.
I do not have a backup, though, and I have several ongoing projects, some of them urgent.

If we were to reinstall from scratch, is there a chance of retrieving the projects without a backup file (e.g., by pointing to the folders where the P?? project directories are)?

I am really sorry to spam you with questions; I must admit that my knowledge of computers is much more limited than my knowledge of image processing :slight_smile:

All the best
Ambroise

Just adding that I have a copy of the entire cryoSPARC folders (including the databases, I believe) from an earlier time point; if I could go back to that, it would already be great. Does that sound possible?
Thanks, Ambroise