Problem using icli to change project directory

Hello,
I would like to change the location of a project directory and I am following the instructions listed here: https://discuss.cryosparc.com/t/change-the-output-directory-of-a-job-workspace/2381
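
For reference, my understanding of those instructions is that the change is made from inside the icli session, through the database handle it opens; roughly something like this (the collection/field names are just my reading of that post and may differ by version, and P1 / the path are placeholders):

cryosparcm icli
# inside the session (it sets up cli, rtp and a pymongo db connection), then roughly:
#   db.projects.update_one({'uid': 'P1'}, {'$set': {'project_dir': '/new/location/P1'}})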

However, when I run the cryosparcm icli command, I get the following errors:

> Python 2.7.15 |Anaconda, Inc.| (default, Oct 10 2018, 21:32:13)
> Type "copyright", "credits" or "license" for more information.
> 
> IPython 5.1.0 -- An enhanced Interactive Python.
> ?         -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help      -> Python's own help system.
> object?   -> Details about 'object', use 'object??' for extra details.
> *** client.py: command (http:// my.server.edu:39005/api) did not reply within timeout of 300 seconds, attempt 1 of
> *** client.py: command (http:// my.server.edu:39005/api) did not reply within timeout of 300 seconds, attempt 2 of
> *** client.py: command (http:// my.server.edu:39005/api) did not reply within timeout of 300 seconds, attempt 3 of
> ---------------------------------------------------------------------------
> ConnectionError                           Traceback (most recent call last)
> /opt/cryosparc/cryosparc2_master/setup_client_ipython.py in <module>()
>      18 rtp_port=int(os.environ['CRYOSPARC_COMMAND_RTP_PORT'])
>      19 cli = client.CommandClient(host=host, port=port)
> ---> 20 rtp = client.CommandClient(host=host, port=rtp_port)
>      21
>      22 from pymongo import MongoClient
> 
> /opt/cryosparc/cryosparc2_master/cryosparc2_compute/client.pyc in __init__(self, host, port, url, timeout)
>      31         self.url = "http://%s:%s%s" % (host, str(port), url)
>      32         self.timeout = timeout
> ---> 33         self._reload()
>      34     def _get_callable(self, key):
>      35         def func(*args, **kwargs):
> 
> /opt/cryosparc/cryosparc2_master/cryosparc2_compute/client.pyc in _reload(self)
>      59         return func
>      60     def _reload(self):
> ---> 61         system = self._get_callable('system.describe')()
>      62         self.endpoints = [p['name'] for p in system['procs']]
>      63         for key in self.endpoints:
> 
> /opt/cryosparc/cryosparc2_master/cryosparc2_compute/client.pyc in func(*args, **kwargs)
>      47             while not done:
>      48                 try:
> ---> 49                     r = requests.post(self.url, data = json.dumps(data, cls=NumpyEncoder), headers = header,
>      50                     res = r.json()
>      51                     done = True
> 
> /opt/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/api.pyc in post(url, data, json,
>     114     """
>     115
> --> 116     return request('post', url, data=data, json=json, **kwargs)
>     117
>     118
> 
> /opt/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/api.pyc in request(method, url, *
>      58     # cases, and look like a memory leak in others.
>      59     with sessions.Session() as session:
> ---> 60         return session.request(method=method, url=url, **kwargs)
>      61
>      62
> 
> /opt/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/sessions.pyc in request(self, meth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
>     531         }
>     532         send_kwargs.update(settings)
> --> 533         resp = self.send(prep, **send_kwargs)
>     534
>     535         return resp
> 
> /opt/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/sessions.pyc in send(self, reques
>     644
>     645         # Send the request
> --> 646         r = adapter.send(request, **kwargs)
>     647
>     648         # Total elapsed time of the request (approximately)
> 
> /opt/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/requests/adapters.pyc in send(self, reques
>     514                 raise SSLError(e, request=request)
>     515
> --> 516             raise ConnectionError(e, request=request)
>     517
>     518         except ClosedPoolError as e:
> 
> ConnectionError: HTTPConnectionPool(host='my.server.edu', port=39005): Max retries exceeded with url: /api (Caused by […]Connection object at 0x7feb55985050>: Failed to establish a new connection: [Errno 111] Connection refused',))

If I run cryosparcm status, I get the following:

> ----------------------------------------------------------------------------
> CryoSPARC System master node installed at
> /opt/cryosparc/cryosparc2_master
> Current cryoSPARC version: v2.11.0
> ----------------------------------------------------------------------------
> 
> cryosparcm process status:
> 
> app                              STOPPED   Not started
> app_dev                          STOPPED   Not started
> command_core                     RUNNING   pid 32752, uptime 0:25:44
> command_proxy                    RUNNING   pid 32781, uptime 0:25:41
> command_rtp                      STOPPED   Not started
> command_vis                      STARTING
> database                         RUNNING   pid 32676, uptime 0:25:46
> watchdog_dev                     STOPPED   Not started
> webapp                           RUNNING   pid 32788, uptime 0:25:40
> webapp_dev                       STOPPED   Not started
> 
> ----------------------------------------------------------------------------
> 
> global config variables:
> 
> export CRYOSPARC_LICENSE_ID="xxxxxxxx"
> export CRYOSPARC_MASTER_HOSTNAME="my.server.edu"
> export CRYOSPARC_DB_PATH="/opt/cryosparc/cryosparc2_database"
> export CRYOSPARC_BASE_PORT=39000
> export CRYOSPARC_DEVELOP=false
> export CRYOSPARC_INSECURE=false

Any help would be appreciated. Thanks!

Hi @MattW,

This seems like an issue with your actual instance. Did you try restarting?

cryosparcm stop
# check if there are any zombie processes running
ps -ax | grep "supervisord" # (kill only the process that is running from your cryosparc2 install)
ps -ax | grep "cryosparc2_command" # (kill all the matching processes related to your cryosparc2 instance)
ps -ax | grep "mongod" # (kill only the process running your cryosparc2 database)
# e.g.:
kill 82681
cryosparcm start
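
Once it comes back up, cryosparcm status should show command_rtp as RUNNING; if it stays STOPPED, its log may say why:

cryosparcm status | grep command_rtp
cryosparcm log command_rtp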

Thank you for the suggestions. I had already tried restarting and that didn’t help. I also checked if any zombie processes were running and there weren’t any.

Hi @MattW

Are you running CentOS by any chance? Do you know if the ports are accessible to the master instance?

Yes, I am using CentOS (7.6.1810). I am not sure if all the necessary ports are accessible. I did not have to do anything special to get the other parts of cryoSPARC to work, except that I did have to open port 39000 so I could use the web interface from a remote computer (the web interface worked fine from the local computer even before doing this). I did try to open port 39005 in the same way, using firewall-cmd --permanent --add-port=39005/tcp, but I still get the same error in icli and the command_rtp process is still shown as stopped.

Hi @MattW,

CryoSPARC uses ports 39000 to 39005. Can you open these ports as well?
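
With firewalld that can be done in one go; note that a --permanent rule only applies to the running firewall after a reload:

firewall-cmd --permanent --add-port=39000-39005/tcp
firewall-cmd --reload
firewall-cmd --list-ports # should now include 39000-39005/tcp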

I just opened them using the same firewall-cmd command, and it did not have any effect. The reason I did not try that before is that when cryoSPARC is running and I run netstat to check open ports, it shows ports 39000-39004 as open and in use.
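
For reference, this is roughly how I am checking which ports the master is listening on while cryoSPARC is running:

netstat -tlnp | grep 390
# or equivalently
ss -tlnp | grep 390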

SUSE 13.2 (yes, we reinstalled the obsolete OS), cryoSPARC 2.11, CUDA 9.1

The _vis process keeps restarting; it looks like we are missing the library file libcurand.so.10.0, which I find very confusing, as we compiled against CUDA 9.1. Version 10 is not even on the system, and I'd have thought the linker would reject the build. I searched the entire disk structure and we have no libcurand.so.10.0.

I have cryosparc_path, cryosparc_ld_library_path, and cryosparc_cuda_path defined, as well as ld_library_path and ld_preload, all seemingly valid; I don't know why we are looking for a 10.0 file.

Would it be safe and effective to link a v9 file to .so.10.0? Or to install the v10 library and extend the ld_library path?

thanks for your help.
Brian

File "/local-home/repository/cryosparcv211/cryosparc2_worker/deps/anaconda/lib/python2.7/importlib/init.py", line 37, in import_module import(name) File quot;cryosparc2_worker/cryosparc2_compute/jobs/refine/run.py", line 15, in init cryosparc2_compute.jobs.refine.run

File "cryosparc2_compute/engine/init.py", line 8, in <module> from engine import * File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 4, in init cryosparc2_compute.engine.engine File "/local-home/repository/cryosparcv211/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pycuda/driver.py", line 5, in <module> from pycuda._driver import * # noqa ImportError: libcurand.so.10.0: cannot open shared object file: No such file or directory

The path is showing the old installation directory for cryoSPARC, not the new one. Does the job itself retain path information for the software, and not just the database and data? I completely removed the old path and copied/replicated the new software install to the old location, and am waiting for a user to try a newly constructed job, but in the meantime the command_vis daemon continues to restart.
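
For anyone following along, the restart loop (and the path it is importing from) can be watched with the log command:

cryosparcm log command_vis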

Hope your issue is resolved. For some reason our install was referencing files from a prior attempt in a different partition; we added a few soft links, but I don't understand the root issue. I'm just glad it seems to be running (small sample size, but no new complaints).