Help! I seem to have broken CryoSPARC by moving the cryosparc_user home directory to a new location and then moving it back again!

We ran out of space in the partition the cryosparc_user home directory is located in, so I moved the home dirs to a new partition, not realizing that on install Anaconda hardwires absolute paths into various executables (why they’re not using relative paths like everyone else is beyond me). So that the users could continue to work while I sort this out, I moved just the cryosparc_user folder back to the original location, but it still doesn’t work! It seems to be having trouble reconnecting to its own database:

cryosparc_user@kraken:~/software/cryosparc2_master/bin$ ./cryosparcm stop
CryoSPARC is running.
Stopping cryosparc
Shut down
cryosparc_user@kraken:~/software/cryosparc2_master/bin$ ./cryosparcm start
Starting cryoSPARC System master process..
CryoSPARC is not already running.
database: ERROR (spawn error)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 1149, in database_names
    "listDatabases")["databases"]]
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/database.py", line 491, in command
    with client._socket_for_reads(read_preference) as (sock_info, slave_ok):
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 859, in _socket_for_reads
    with self._get_socket(read_preference) as sock_info:
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/mongo_client.py", line 823, in _get_socket
    server = self._get_topology().select_server(selector)
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
    address))
  File "/local/home/cryosparc_user/software/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
    self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: localhost:39001: [Errno 111] Connection refused

Any help with this would be appreciated. I can reinstall CryoSPARC from scratch, but we don’t want to lose our database.

Hi @pgoetz, I definitely feel your pain regarding moving around the Anaconda dependencies! Regarding the database issue, it looks like there may be another database process that is still running, or one that stopped without being properly cleaned up.

Here’s what I suggest:

  1. Check for existing mongod processes. You can do this with the following terminal command:

    ps ax -U $(whoami) | grep mongod
    

    If you see any lines that contain mongod --dbpath ..., kill each process with kill <pid>, replacing <pid> with the process ID number at the beginning of each line. Re-run the ps command to confirm that all processes have successfully stopped.

  2. Navigate to the file-system location that contains the cryoSPARC database. You can find this location in the file cryosparc2_master/config.sh; look for the line that starts with export CRYOSPARC_DB_PATH=

  3. If you see a file called mongod.lock in this directory, delete it.

  4. Try starting cryoSPARC with cryosparcm start.
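The four steps above can be sketched as one shell session. This is only a sketch: the install path ~/software/cryosparc2_master is assumed from the prompt in the original post, and the destructive commands are left commented out so nothing is removed until the paths are confirmed on your system:

```shell
# 1. Look for leftover mongod processes owned by this user.
#    The [m]ongod pattern keeps grep from matching its own process line.
ps ax -U "$(whoami)" | grep '[m]ongod' || echo "no mongod processes running"

# 2. For each matching line: kill <pid>   (the PID is the first column)

# 3. Find the database directory from the master config
#    (install path assumed; adjust to your layout)
grep '^export CRYOSPARC_DB_PATH' ~/software/cryosparc2_master/config.sh 2>/dev/null \
  || echo "config.sh not found at the assumed path"

# 4. Delete the stale lock file and restart; uncomment once the DB path is confirmed
# rm /path/to/cryosparc_database/mongod.lock
# ~/software/cryosparc2_master/bin/cryosparcm start
```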

Let me know how that goes,

Nick

PS: You can re-install the Python dependencies after you move the installation folder on disk: Navigate to the cryosparc2_master directory and run cryosparcm forcedeps. If you moved your cryosparc2_worker directory as well, navigate to it and run cryosparcw forcedeps.
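In shell form, assuming both install directories sit under ~/software as in the paths above (the || guards just report a missing directory instead of aborting a pasted session):

```shell
# Rebuild the bundled Python dependencies after the install folder has moved
( cd ~/software/cryosparc2_master 2>/dev/null && ./bin/cryosparcm forcedeps ) \
  || echo "cryosparc2_master not found at the assumed path"

# Only needed if the worker directory was moved as well
( cd ~/software/cryosparc2_worker 2>/dev/null && ./bin/cryosparcw forcedeps ) \
  || echo "cryosparc2_worker not found at the assumed path"
```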

Thanks for the helpful suggestions. I had already made sure there were no mongod processes running, and had also removed the lock file, with no luck. Next, I tried this:

cd ~/software/cryosparc2_master
eval $(bin/cryosparcm env)
cd ../../cryosparc_database
mongod --dbpath ./ --repair

This resulted in this error message:

cryosparc_user@kraken:~/cryosparc_database$ mongod --dbpath ./ --repair
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] MongoDB starting : pid=29830 port=27017 dbpath=./ 64-bit host=kraken
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] db version v3.4.10-4-g67ee356c6b
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] git version: 67ee356c6be377cda547d16423daef5beb4e8377
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2p  14 Aug 2018
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] allocator: tcmalloc
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] modules: none
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] build environment:
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten]     distarch: x86_64
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten]     target_arch: x86_64
2020-11-11T13:31:13.760-0600 I CONTROL  [initandlisten] options: { repair: true, storage: { dbPath: "./" } }
2020-11-11T13:31:13.779-0600 I -        [initandlisten] Detected data files in ./ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2020-11-11T13:31:13.779-0600 I STORAGE  [initandlisten]
2020-11-11T13:31:13.779-0600 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-11-11T13:31:13.779-0600 I STORAGE  [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2020-11-11T13:31:13.779-0600 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=128410M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
2020-11-11T13:31:13.790-0600 E STORAGE  [initandlisten] WiredTiger error (-31802) [1605123073:790636][29830:0x7f0b3655bd40], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error
2020-11-11T13:31:13.790-0600 E STORAGE  [initandlisten] WiredTiger error (0) [1605123073:790689][29830:0x7f0b3655bd40], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata
2020-11-11T13:31:13.790-0600 E STORAGE  [initandlisten] WiredTiger error (0) [1605123073:790696][29830:0x7f0b3655bd40], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk
2020-11-11T13:31:13.790-0600 E STORAGE  [initandlisten] WiredTiger error (0) [1605123073:790703][29830:0x7f0b3655bd40], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options
2020-11-11T13:31:13.790-0600 I -        [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 276
2020-11-11T13:31:13.791-0600 I STORAGE  [initandlisten] exception in initAndListen: 28595 -31802: WT_ERROR: non-specific WiredTiger error, terminating
2020-11-11T13:31:13.791-0600 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
2020-11-11T13:31:13.791-0600 I NETWORK  [initandlisten] removing socket file: /tmp/mongodb-27017.sock
2020-11-11T13:31:13.791-0600 I NETWORK  [initandlisten] shutdown: going to flush diaglog...
2020-11-11T13:31:13.791-0600 I CONTROL  [initandlisten] now exiting
2020-11-11T13:31:13.791-0600 I CONTROL  [initandlisten] shutting down with code:100

I ended up restoring a system backup copy of the mongo DB folder. This also gave database corruption errors when I tried to run cryosparcm start, but in this case the mongod --dbpath ./ --repair command above worked. We lost a week’s worth of project data, but that’s much better than having to recreate every user and re-import every project!
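For anyone landing here later, the restore-and-repair sequence can be sketched roughly as below. This is a hypothetical reconstruction, not the exact commands run: the backup location is illustrative, and the real database path should come from CRYOSPARc_DB_PATH in config.sh. The live commands are commented out so the sketch is safe to paste:

```shell
# Paths are illustrative; adjust to your installation
MASTER="$HOME/software/cryosparc2_master"
DB="$HOME/cryosparc_database"
BACKUP=/backups/cryosparc_database   # wherever your system backup lives

# Stop cryoSPARC so nothing holds the database open
# "$MASTER/bin/cryosparcm" stop

# Restore the backed-up database directory over the corrupted one
# rsync -a "$BACKUP/" "$DB/"

# Put cryoSPARC's bundled mongod on PATH, then repair the restored files
# eval $("$MASTER/bin/cryosparcm" env)
# (cd "$DB" && mongod --dbpath ./ --repair)

# Start cryoSPARC again
# "$MASTER/bin/cryosparcm" start
```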