Live Picking overwrite

In CryoSPARC Live 4.2.1 and earlier, I would export particles to CryoSPARC and perform 2D and 3D operations on them. Some time later, I might choose to template pick with new templates in Live. Since this happens in the same session, does it overwrite all of the particle information (rather than adding it to a new directory)? In other words, does restarting Live and adjusting the picker immediately remove all prior work in the workspace? Or is there a way to recover?

A specific error might look like this: The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

Please can you provide additional context:

  1. the job type
  2. the lines that preceded the point where you observed the error

Below is the whole error, minus the identifier.

Basically, I'm wondering what the expected behavior is. This is an NU-refinement of particles that had previously been used in roughly 70 jobs. Later, I "redid" particle picking in Live and exported those new particles. Now I no longer have access to the original ones. I have seen this several times before; you'd think I'd have learned my lesson. I'm fixing the problem now with a fresh extraction of these particles. I'm just seeking clarity on whether the intended behavior is to have only one current set of particles per Live session. What happens to the old particles when we switch to a new picker and reprocess? If it's an overwrite, should we create a new session whenever we want to try a new picking strategy? And is there a way to avoid redoing CTF estimation, motion correction, etc.?

Thanks!

Traceback (most recent call last):
  File "/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2061, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_master/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 1028, in cryosparc_compute.engine.engine.process.work
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 87, in cryosparc_compute.engine.engine.EngineThread.load_image_data_gpu
  File "/cryosparc/cryosparc_worker/cryosparc_compute/particles.py", line 33, in get_original_real_data
    return self.blob.view().copy()
  File "/cryosparc/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 127, in view
    return self.get()
  File "/cryosparc/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 122, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 68, in cryosparc_compute.blobio.prefetch.synchronous_native_read
RuntimeError: Error ocurred (Invalid argument) at line 562 in mrc_readmic (1)

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

IO request details:
filename: [redacted]
filetype: 0
header_only: 0
idx_start: 1083
idx_limit: 1084
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer: (nil)
nx, ny, nz: 0 0 0
dtype: 0
total_time: -1.000000
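
For anyone hitting the same message: since a .cs file is just a numpy structured array, the "mismatch between the file and its associated metadata" can be checked directly by comparing the blob/idx values the metadata expects against the number of slices actually present in each referenced .mrc stack. Below is a minimal sketch of such a check, not an official tool; the file and directory names are placeholders, and it assumes blob/path entries are relative to the project directory.

```python
# Minimal sketch: cross-check exported particle metadata (.cs) against the
# particle stacks it references. For every referenced .mrc stack, compare the
# largest blob/idx the metadata expects with the number of slices (NZ) actually
# present in the file on disk. Paths below are placeholders.
from pathlib import Path
import numpy as np

project_dir = Path("/path/to/project")        # blob/path entries are relative to this
particles = np.load("exported_particles.cs")  # .cs files are numpy structured arrays

for blob_path in np.unique(particles["blob/path"]):
    rel = blob_path.decode() if isinstance(blob_path, bytes) else str(blob_path)
    rows = particles[particles["blob/path"] == blob_path]
    max_idx = int(rows["blob/idx"].max())
    path = project_dir / rel
    if not path.exists():
        print(f"MISSING    {rel} (metadata expects up to slice {max_idx})")
        continue
    # MRC header words 1-3 are NX, NY, NZ; NZ is the number of particles in the stack
    nx, ny, nz = np.fromfile(path, dtype=np.int32, count=3)
    status = "OK" if max_idx < nz else "TOO SHORT"
    print(f"{status}    {rel}: {nz} slices on disk, metadata expects index {max_idx}")
```

Any stack reported MISSING or TOO SHORT would trigger exactly the kind of read failure shown in the traceback above.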

Here is what I understand after asking the team about your observation: the failure to access particles is the consequence of a design choice (made to conserve data storage) not to copy particle blobs out of the Live session.
If the Live session's particle blobs still existed, a Restack Particles job would create new blobs that should "survive" renewed picking in the upstream Live session and could be connected to downstream jobs.
If the Live session's particle blobs have already been removed or overwritten, you might still have the option of connecting the "stale" exported particles to an Extract from Micrographs job, but I haven't tried this.
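
If it helps with deciding which of the two cases applies, the question reduces to whether the blob files referenced by the exported particle output are still present under the project directory. Here is a rough sketch using cryosparc-tools, not a supported recipe; all UIDs, credentials, paths, and the output name "particles" are placeholders you would need to adapt to your instance.

```python
# Minimal sketch: decide between Restack Particles (blobs still on disk) and a
# fresh Extract from Micrographs (blobs removed/overwritten) by checking the
# blob files referenced by the exported particle output.
# All UIDs, credentials, and paths below are placeholders.
from pathlib import Path
import numpy as np
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(
    license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    host="localhost",
    base_port=39000,
    email="user@example.com",
    password="password",
)

project = cs.find_project("P1")           # project that holds the Live session
job = project.find_job("J100")            # the particle export job from Live
particles = job.load_output("particles")  # output name may differ; check the job card

project_dir = Path("/path/to/project/P1")  # blob/path entries are relative to this
missing = [
    p for p in np.unique(particles["blob/path"])
    if not (project_dir / (p.decode() if isinstance(p, bytes) else str(p))).exists()
]

if missing:
    print(f"{len(missing)} referenced blob file(s) are gone -> re-extract from micrographs")
else:
    print("All referenced blob files are present -> Restack Particles should work")
```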

Thank you for the info. The latter worked, whereas the former failed with the same error. Good to know that, in the future, I should simply Template Pick in the standard CryoSPARC environment rather than in the Live session if I want to keep access to the existing particles in the workspace.