Homogeneous refinement error after heterogeneous refinement

Dear Colleagues,

We are processing particles that were classified in RELION. After motion correction, CTF estimation, particle picking, and cleaning in CryoSPARC, we exported the particles to RELION for 3D classification. The particles, in separate STAR files corresponding to the different RELION 3D classes, were imported back into CryoSPARC and then subjected to ab initio model generation and heterogeneous refinement. There were no problems up to this stage. However, when we attempt to perform homogeneous refinement of the largest heterogeneous class, we run into the error below. Any advice will be greatly appreciated. Thank you.


[CPU: 334.7 MB]

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 125, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
  File "/ibbr/cryosparc/cryosparc_worker/cryosparc_compute/particles.py", line 114, in read_blobs
    u_blob_paths = cache.download_and_return_cache_paths(u_rel_paths)
  File "/ibbr/cryosparc/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 112, in download_and_return_cache_paths
    compressed_keys = get_compressed_keys(worker_hostname, rel_paths)
  File "/ibbr/cryosparc/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 285, in get_compressed_keys
    compressed_keys = rc.cli.cache_request_check(worker_hostname, rc._project_uid, rc._job_uid, com.compress_paths(rel_paths))
  File "/ibbr/cryosparc/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 104, in func
    with make_json_request(self, "/api", data=data) as request:
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/ibbr/cryosparc/cryosparc_worker/cryosparc_tools/cryosparc/command.py", line 165, in make_request
    with urlopen(request, timeout=client._timeout) as response:
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/urllib/request.py", line 542, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/urllib/request.py", line 502, in _call_chain
    result = func(*args)
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/urllib/request.py", line 1383, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/urllib/request.py", line 1358, in do_open
    r = h.getresponse()
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/http/client.py", line 277, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/ibbr/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

What version and patch of CryoSPARC produced this error?
If they were not at least v4.2.1 and 230427, respectively, could you please:

  1. update,
  2. apply the patch (patches are version-specific), and
  3. try the job again?
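
For reference, the steps above map onto the cryosparcm admin utility roughly as follows (a sketch only; the exact patch procedure can vary with install type, so please consult the CryoSPARC guide for your setup):

```shell
# Show the running version and patch level (check the "version" line).
cryosparcm status

# Update the master (and managed workers) to the latest release.
cryosparcm update

# Apply the latest patch for the installed version; patches are
# version-specific, so update first if you are below v4.2.1.
cryosparcm patch
```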

Hello,

The error persists even after the patch, but we are running an internal test with a different dataset. I will post an update here soon. Thanks.