Strange error after extraction in 4.5.1

Hi,

I encountered a very strange error that never happened before the update to 4.5.1. Extraction finishes successfully, but when I then run 2D classification, after a varying number of iterations I get:

 [CPU:   4.16 GB  Avail: 244.36 GB]

Traceback (most recent call last):
  File "/home/cryosparc_user/cryosparc_gpu1/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2294, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 134, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 135, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/jobs/class2D/newrun.py", line 619, in cryosparc_master.cryosparc_compute.jobs.class2D.newrun.class2D_engine_run.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 550, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.read_image_data
  File "/home/cryosparc_user/cryosparc_gpu1/cryosparc_worker/cryosparc_compute/particles.py", line 34, in get_original_real_data
    data = self.blob.view()
  File "/home/cryosparc_user/cryosparc_gpu1/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
    return self.get()
  File "/home/cryosparc_user/cryosparc_gpu1/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 82, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
OSError: 

IO request details:
Error ocurred (Invalid argument) at line 680 in mrc_readmic (1) 

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

filename:    /mnt/scratch/cryosparc/instance_10.0.90.83:39001/links/P4-J1327-1715432654/f18c26f6c0c700292a4ee93b729ae367572998e0.mrc
filetype:    0
header_only: 0
idx_start:   83
idx_limit:   84
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer:      (nil)
buffer_sz:   0
nx, ny, nz:  0 0 0
dtype:       0
total_time:  -1.000000
io_time:     0.000000

I also ran the Check For Corrupt Particles job, but it did not return any errors. I have used this dataset before, and everything went well. A short investigation suggests that the problem lies with extraction rather than with the SSD copy, as the non-SSD run also fails. I would appreciate some ideas on how to fix it.

PS: The file itself is not empty:

-rw-rw-r-- 1 cryosparc_user domain_users 2.6M May 11 00:41 /mnt/scratch/cryosparc/instance_10.0.90.83:39001//store-v2/f1/f18c26f6c0c700292a4ee93b729ae367572998e0

and seems to have correct MRC data in it:

>>> mrcfile.open('/mnt/scratch/cryosparc/instance_10.0.90.83:39001//store-v2/f1/f18c26f6c0c700292a4ee93b729ae367572998e0').data
array([[[-0.545   ,  0.6787  , -0.906   , ...,  0.8677  ,  0.2632  ,
         -0.171   ],
        [ 0.00933 , -0.1229  , -0.429   , ..., -0.6714  , -0.2866  ,
         -0.03214 ],
        [ 0.373   ,  0.2352  ,  0.4573  , ...,  0.9834  , -0.656   ,
          0.2996  ],
        ...,
        [-0.1368  , -0.7676  ,  0.0905  , ...,  0.2947  ,  0.1769  ,
          0.2515  ],
        [-1.277   ,  0.01749 , -0.1985  , ..., -0.1833  ,  0.02    ,
         -0.1722  ],
        [-0.568   , -0.7456  , -0.0936  , ...,  0.1059  ,  0.4585  ,
          0.3828  ]],

       [[-0.1036  , -0.00622 ,  1.105   , ...,  1.425   ,  0.4885  ,
         -0.437   ],
        [ 0.2231  ,  0.5845  , -0.8555  , ..., -0.5776  , -0.6157  ,
          0.06976 ],
        [-0.5815  ,  0.1168  , -0.2161  , ..., -0.02008 ,  0.3801  ,
         -0.1897  ],
        ...,
        [-0.569   , -0.08795 , -0.4204  , ..., -0.4531  , -0.03506 ,
         -0.01866 ],
        [ 0.784   , -0.682   , -0.0538  , ..., -0.433   ,  0.1461  ,
         -0.2615  ],
        [-0.7603  ,  0.11676 , -0.10974 , ...,  0.3801  , -0.1593  ,
          0.4434  ]],

       [[ 0.5557  , -1.033   ,  0.5127  , ...,  0.3188  ,  0.115   ,
         -0.2216  ],
        [-0.5664  , -0.001858,  0.2634  , ...,  0.1373  , -0.1454  ,
          0.7183  ],
        [ 0.0503  ,  0.2732  , -1.192   , ..., -0.3132  , -0.1333  ,
          0.5625  ],
        ...,
        [ 1.128   ,  0.0885  , -0.1423  , ...,  0.3215  , -0.5073  ,
          0.539   ],
        [-0.264   , -0.653   ,  0.3455  , ...,  0.3254  ,  0.869   ,
          0.2727  ],
        [-0.412   ,  0.3765  , -0.10205 , ...,  0.6997  ,  0.0564  ,
          0.77    ]],

       ...,

       [[ 0.0524  ,  0.01288 ,  0.714   , ..., -0.1938  , -0.9307  ,
          0.633   ],
        [ 0.3123  ,  0.2301  , -0.2915  , ...,  0.405   , -0.2133  ,
          0.4438  ],
        [ 0.3967  ,  0.5156  , -0.3208  , ...,  0.1609  ,  0.516   ,
         -0.0394  ],
        ...,
        [ 0.03418 ,  0.245   , -0.2365  , ...,  0.4922  , -0.343   ,
          0.014336],
        [-0.9146  , -0.02989 , -0.01746 , ..., -0.4229  ,  0.1776  ,
          0.1112  ],
        [ 0.361   ,  0.7344  , -0.4753  , ...,  0.462   , -0.3662  ,
          0.5327  ]],

       [[ 1.13    , -0.02214 ,  1.055   , ..., -0.5215  ,  0.442   ,
          0.3564  ],
        [-0.527   ,  0.587   , -0.3025  , ..., -0.5864  , -0.1426  ,
         -0.2998  ],
        [ 0.08997 ,  0.3374  , -0.3252  , ...,  0.407   ,  0.2761  ,
         -0.003357],
        ...,
        [-0.007282, -0.3916  ,  0.2644  , ..., -0.12396 ,  0.2969  ,
          0.3967  ],
        [-0.05515 ,  0.4001  ,  0.1183  , ...,  0.1626  ,  0.321   ,
          0.8013  ],
        [-0.292   ,  0.6875  , -0.00401 , ...,  0.2888  ,  0.16    ,
          0.02405 ]],

       [[ 0.318   , -0.6143  , -0.4092  , ..., -0.307   ,  1.173   ,
          0.1466  ],
        [ 0.01765 ,  0.381   ,  1.221   , ..., -0.1182  , -0.03516 ,
          0.4124  ],
        [-0.0908  , -0.1321  , -0.2008  , ...,  0.3044  ,  0.651   ,
          0.293   ],
        ...,
        [ 0.4333  ,  0.2445  , -0.3245  , ..., -0.4429  , -0.2135  ,
          0.1926  ],
        [-0.4136  ,  0.2413  ,  0.5254  , ...,  0.2388  ,  0.8677  ,
          0.00493 ],
        [-0.4097  ,  0.563   , -0.2517  , ...,  0.345   , -0.5933  ,
          0.05396 ]]], dtype=float16)
>>> mrcfile.open('/mnt/scratch/cryosparc/instance_10.0.90.83:39001//store-v2/f1/f18c26f6c0c700292a4ee93b729ae367572998e0').data.shape
(83, 128, 128)
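
Note the mismatch: the stack header reports 83 particles, while the failing read requested idx_start 83, i.e. the 84th particle (the index is zero-based), which the file does not contain. Here is a rough sketch of how one could scan a whole dataset for such mismatches; it assumes the .cs file loads as a plain NumPy structured array (CryoSPARC .cs files are stored in NPY format) with 'blob/path' and 'blob/idx' fields holding project-relative paths, and the paths below are hypothetical:

# Sketch: compare the highest particle index referenced by a .cs file
# against the particle count (nz) in each stack's MRC header.
# Assumptions: the .cs loads as a NumPy structured array and 'blob/path'
# is relative to the project directory. Paths are hypothetical.
import os
import numpy as np
import mrcfile

project_dir = "/path/to/project"                       # hypothetical
cs = np.load(os.path.join(project_dir, "J1327/extracted_particles.cs"))

max_idx = {}                                           # highest zero-based index per stack
for path, idx in zip(cs["blob/path"], cs["blob/idx"]):
    path = path.decode() if isinstance(path, bytes) else str(path)
    max_idx[path] = max(max_idx.get(path, -1), int(idx))

for path, idx in sorted(max_idx.items()):
    with mrcfile.open(os.path.join(project_dir, path),
                      header_only=True, permissive=True) as m:
        nz = int(m.header.nz)                          # particles in this stack
    if idx >= nz:
        print(f"{path}: metadata expects particle {idx} but the stack holds {nz}")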

PS2: Removing the references to the faulty particle .mrc file from the .cs files fixed the problem. Interestingly, the same error happened 3 times (2× with restarted CPU extraction jobs, 1× with GPU extraction). Previously, the same micrographs never showed an error after extraction.
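
For reference, this is roughly how the references can be dropped from the .cs file; a minimal sketch, again assuming the .cs file loads as a plain NumPy structured array, with a hypothetical stack name, and keeping a backup of the original .cs:

# Sketch: drop every particle row that references a faulty stack from a
# .cs file. Assumes .cs files load as plain NumPy structured arrays (NPY
# format). The stack name and filenames are hypothetical; back up the
# original .cs first.
import numpy as np

bad = "J1327/extract/faulty_stack_particles.mrc"       # hypothetical faulty stack
cs = np.load("extracted_particles.cs")

paths = [p.decode() if isinstance(p, bytes) else str(p) for p in cs["blob/path"]]
keep = np.array([p != bad for p in paths])
print(f"dropping {np.count_nonzero(~keep)} of {len(cs)} particles")

# Saving through an open file handle avoids np.save appending '.npy'.
with open("extracted_particles_clean.cs", "wb") as f:
    np.save(f, cs[keep])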

Best,
Dawid

Thanks @dzyla for reporting this problem.
Would you be willing to share some information with the CryoSPARC developers:

  1. the faulty particle file
  2. the *.cs file from particle extraction (before removing the reference to the faulty mrc file)
  3. available job reports of the particle extraction jobs that produced the faulty mrc file
    Please let us know by email if you can share these files with us using your own (or own institution’s) file sharing platform or whether we should make arrangements for you to upload the files to our server.

I have the particle file and the .cs files; however, the job report is missing (its content is essentially what I have already attached). Could you please provide me with an email address I could use to share the files? Thank you!

@dzyla I sent you a private message two days ago. Did you receive it?

Hello @wtempel,
I was wondering if you were able to resolve this issue. We ran into it today in two separate extraction jobs (same 5,000 micrographs but different Template Particle Picking jobs).

It appears each job produces more than one faulty particle, as we get a distinct “The requested frame/particle cannot be accessed.” error each time we run 2D Classification.
Previous extraction/2D classification jobs with a subset of the micrographs (50) and manually/template picked particles worked fine.

Thanks and kind regards,
Guillermo

Welcome to the forum @gcaballero.
Please can you confirm

  1. Does the issue with defective particle(s) become apparent only when you run 2D classification, not before?
  2. Did you enable Cache particle images on SSD for 2D classification?
  3. Does the error occur if you run 2D classification when you disable Cache particle images on SSD?
  4. Did you check the particle stacks for corruption with the Check for NaN values option enabled?

Thank you for the prompt response @wtempel.

  1. Correct, we did not have any issues until after re-extracting and running 2D Classifications.
  2. 2D Classifications were run with “Cache particle images on SSD” enabled.
  3. 2D Classifications were also run (same parameters and particle stack) with “Cache particle images on SSD” disabled, but this produced the same error.
  4. I also checked for corrupt particles (with “Check for NaN values” enabled), but this did not detect any issues.

I also re-ran older 2D classification Jobs and there does not seem to be a problem with that particle stack.

Please can you post the exact error messages for both runs (with and without SSD caching), including any particle file paths that may be shown.

Hi everyone,

Did you solve this problem? I encountered this error as well: after extracting the particles, 2D classification failed with:

[2024-05-30 20:47:52.00]
License is valid.

[2024-05-30 20:47:52.00]
Launching job on lane default target G09 ...

[2024-05-30 20:47:52.02]
Running job on master node hostname G09

[2024-05-30 20:47:55.62]
[CPU:  215.7 MB  Avail:  93.90 GB]
Job J61 Started

[2024-05-30 20:47:55.63]
[CPU:  215.7 MB  Avail:  93.90 GB]
Master running v4.4.1, worker running v4.4.1

[2024-05-30 20:47:55.63]
[CPU:  216.1 MB  Avail:  93.90 GB]
Working in directory: /data10/kzhang/cryosparc/CS-/J61

[2024-05-30 20:47:55.64]
[CPU:  216.1 MB  Avail:  93.90 GB]
Running on lane default

[2024-05-30 20:47:55.64]
[CPU:  216.1 MB  Avail:  93.90 GB]
Resources allocated: 

[2024-05-30 20:47:55.64]
[CPU:  216.1 MB  Avail:  93.90 GB]
  Worker:  G09

[2024-05-30 20:47:55.64]
[CPU:  216.1 MB  Avail:  93.90 GB]
  CPU   :  [4, 5]

[2024-05-30 20:47:55.65]
[CPU:  216.1 MB  Avail:  93.90 GB]
  GPU   :  [2, 3]

[2024-05-30 20:47:55.65]
[CPU:  216.1 MB  Avail:  93.90 GB]
  RAM   :  [1, 7, 8]

[2024-05-30 20:47:55.65]
[CPU:  216.1 MB  Avail:  93.89 GB]
  SSD   :  True

[2024-05-30 20:47:55.65]
[CPU:  216.1 MB  Avail:  93.89 GB]
--------------------------------------------------------------

[2024-05-30 20:47:55.66]
[CPU:  216.1 MB  Avail:  93.89 GB]
Importing job module for job type class_2D_new...

[2024-05-30 20:47:56.07]
[CPU:  248.5 MB  Avail:  93.90 GB]
Job ready to run

[2024-05-30 20:47:56.07]
[CPU:  248.5 MB  Avail:  93.90 GB]
***************************************************************

[2024-05-30 20:48:03.38]
[CPU:   1.10 GB  Avail:  92.71 GB]
Using random seed of 421618501

[2024-05-30 20:48:03.38]
[CPU:   1.10 GB  Avail:  92.71 GB]
Loading a ParticleStack with 1281812 items...

[2024-05-30 20:48:03.51]
[CPU:   1.10 GB  Avail:  92.94 GB]
 SSD cache : using override environment variable for cache: $CRYOSPARC_SSD_PATH=/ssd/cryosparc_cache

[2024-05-30 20:48:03.59]
[CPU:   1.10 GB  Avail:  92.86 GB]
 SSD cache : cache successfully synced in_use

[2024-05-30 20:48:08.38]
[CPU:   1.10 GB  Avail:  90.98 GB]
 SSD cache : cache successfully synced, found 977,479.95 MB of files on SSD.

[2024-05-30 20:48:11.10]
[CPU:   1.10 GB  Avail:  90.86 GB]
 SSD cache : cache successfully requested to check 8097 files.

[2024-05-30 20:48:11.11]
[CPU:   1.10 GB  Avail:  90.86 GB]
Detected file change due to modification time.

[2024-05-30 20:48:14.04]
[CPU:   1.10 GB  Avail:  90.67 GB]
 SSD cache : cache requires 241,465 MB more on the SSD for files to be downloaded.

[2024-05-30 20:48:14.13]
[CPU:   1.10 GB  Avail:  90.52 GB]
 SSD cache : cache has enough available space.

[2024-05-30 20:48:14.13]
[CPU:   1.10 GB  Avail:  90.52 GB]
  Needed        |   241,464.91 MB
  Available     | 6,304,944.25 MB
  Disk size     | 7,176,117.00 MB
  Usable space  | 7,166,117.00 MB  (reserve 10,000 MB)

[2024-05-30 20:48:14.13]
[CPU:   1.10 GB  Avail:  90.52 GB]
Transferring across 2 threads: FoilHole_5974161_Data_2681309_2681311_20240506_014245_fractions_patch_aligned_doseweighted_particles.mrc (8097/8097)
  Progress      |      241,465 MB (100.00%)
  Total         |      241,465 MB
  Average speed |       355.64 MB/s
  ETA           |      0h  0m  0s

[2024-05-30 20:59:33.97]
[CPU:   1.11 GB  Avail:  89.82 GB]
 SSD cache : complete, all requested files are available on SSD.

[2024-05-30 20:59:47.96]
[CPU:   1.64 GB  Avail:  88.21 GB]
  Done.

[2024-05-30 20:59:47.97]
[CPU:   1.64 GB  Avail:  88.21 GB]
Windowing particles

[2024-05-30 20:59:47.97]
[CPU:   1.64 GB  Avail:  88.21 GB]
  Done.

[2024-05-30 20:59:48.03]
[CPU:   1.73 GB  Avail:  88.11 GB]
Using 200 classes.

[2024-05-30 20:59:48.04]
[CPU:   1.73 GB  Avail:  88.11 GB]
Computing 2D class averages: 

[2024-05-30 20:59:48.04]
[CPU:   1.73 GB  Avail:  88.11 GB]
  Volume Size: 256 (voxel size 1.93A)

[2024-05-30 20:59:48.04]
[CPU:   1.73 GB  Avail:  88.11 GB]
  Zeropadded Volume Size: 512

[2024-05-30 20:59:48.04]
[CPU:   1.73 GB  Avail:  88.11 GB]
  Data Size: 256 (pixel size 1.93A)

[2024-05-30 20:59:48.05]
[CPU:   1.73 GB  Avail:  88.11 GB]
  Using Reconstruction Resolution: 6.00A (82.0 radius)

[2024-05-30 20:59:48.05]
[CPU:   1.73 GB  Avail:  88.11 GB]
  Using Alignment Resolution: 6.00A (82.0 radius)

[2024-05-30 20:59:48.05]
[CPU:   1.73 GB  Avail:  88.10 GB]
  Windowing only corners of 2D classes at each iteration.

[2024-05-30 20:59:48.06]
[CPU:   1.73 GB  Avail:  88.10 GB]
Using random seed for initialization of 421618501

[2024-05-30 20:59:48.75]
[CPU:   1.81 GB  Avail:  87.95 GB]
  Done in 0.691s.

[2024-05-30 20:59:51.14]
[CPU:   2.32 GB  Avail:  87.78 GB]
Start of Iteration 0

[2024-05-30 20:59:51.15]
[CPU:   2.32 GB  Avail:  87.78 GB]


[2024-05-30 20:59:53.11]
[CPU:   3.19 GB  Avail:  86.99 GB]
Traceback (most recent call last):
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2192, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 134, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 135, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/jobs/class2D/newrun.py", line 604, in cryosparc_master.cryosparc_compute.jobs.class2D.newrun.class2D_engine_run.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 550, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.read_image_data
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/particles.py", line 34, in get_original_real_data
    data = self.blob.view()
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
    return self.get()
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 70, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
OSError: 

IO request details:
Error ocurred (Invalid argument) at line 681 in mrc_readmic (1) 

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

filename:    /ssd/cryosparc_cache/instance_G09:39001/projects/P6/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
filetype:    0
header_only: 0
idx_start:   33
idx_limit:   34
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer:      (nil)
buffer_sz:   0
nx, ny, nz:  0 0 0
dtype:       0
total_time:  -1.000000
io_time:     0.000000

I re-ran the extraction from micrographs, but the error is still the same.

I also tried to restack the particles, and a similar error occurred:

[2024-05-30 21:08:15.59]
License is valid.

[2024-05-30 21:08:15.59]
Launching job on lane default target G09 ...

[2024-05-30 21:08:15.61]
Running job on master node hostname G09

[2024-05-30 21:08:19.21]
[CPU:  216.7 MB  Avail:  98.30 GB]
Job J64 Started

[2024-05-30 21:08:19.22]
[CPU:  216.7 MB  Avail:  98.30 GB]
Master running v4.4.1, worker running v4.4.1

[2024-05-30 21:08:19.23]
[CPU:  217.0 MB  Avail:  98.30 GB]
Working in directory: /data10/kzhang/cryosparc/CS-/J64

[2024-05-30 21:08:19.23]
[CPU:  217.0 MB  Avail:  98.30 GB]
Running on lane default

[2024-05-30 21:08:19.23]
[CPU:  217.0 MB  Avail:  98.30 GB]
Resources allocated: 

[2024-05-30 21:08:19.24]
[CPU:  217.0 MB  Avail:  98.30 GB]
  Worker:  G09

[2024-05-30 21:08:19.24]
[CPU:  217.0 MB  Avail:  98.30 GB]
  CPU   :  [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

[2024-05-30 21:08:19.24]
[CPU:  217.0 MB  Avail:  98.30 GB]
  GPU   :  []

[2024-05-30 21:08:19.25]
[CPU:  217.0 MB  Avail:  98.30 GB]
  RAM   :  [1, 7]

[2024-05-30 21:08:19.25]
[CPU:  217.0 MB  Avail:  98.30 GB]
  SSD   :  False

[2024-05-30 21:08:19.25]
[CPU:  217.0 MB  Avail:  98.30 GB]
--------------------------------------------------------------

[2024-05-30 21:08:19.25]
[CPU:  217.0 MB  Avail:  98.30 GB]
Importing job module for job type restack_particles...

[2024-05-30 21:08:21.43]
[CPU:  244.6 MB  Avail:  96.16 GB]
Job ready to run

[2024-05-30 21:08:21.43]
[CPU:  244.8 MB  Avail:  96.15 GB]
***************************************************************

[2024-05-30 21:08:48.76]
[CPU:   9.04 GB  Avail:  76.88 GB]
Done batch 1 of 129 in 11.57s

[2024-05-30 21:10:20.72]
[CPU:  29.73 GB  Avail:  57.78 GB]
Done batch 2 of 129 in 103.53s

[2024-05-30 21:10:38.24]
[CPU:  30.91 GB  Avail:  59.64 GB]
Done batch 3 of 129 in 121.06s

[2024-05-30 21:11:07.71]
[CPU:  30.37 GB  Avail:  59.04 GB]
Done batch 4 of 129 in 150.53s

[2024-05-30 21:11:11.75]
[CPU:  28.19 GB  Avail:  61.34 GB]
Done batch 5 of 129 in 154.56s

[2024-05-30 21:11:11.92]
[CPU:  25.71 GB  Avail:  63.82 GB]
Done batch 6 of 129 in 154.74s

[2024-05-30 21:11:12.23]
[CPU:  23.24 GB  Avail:  66.57 GB]
Done batch 7 of 129 in 155.04s

[2024-05-30 21:11:12.84]
[CPU:  20.84 GB  Avail:  68.86 GB]
Done batch 8 of 129 in 155.65s

[2024-05-30 21:11:14.92]
[CPU:  18.73 GB  Avail:  71.05 GB]
Done batch 9 of 129 in 146.14s

[2024-05-30 21:11:15.56]
[CPU:  16.36 GB  Avail:  73.12 GB]
Done batch 10 of 129 in 158.37s

[2024-05-30 21:11:15.67]
[CPU:  13.88 GB  Avail:  75.96 GB]
Done batch 11 of 129 in 158.48s

[2024-05-30 21:11:18.05]
[CPU:  12.06 GB  Avail:  77.59 GB]
Done batch 12 of 129 in 160.86s

[2024-05-30 21:11:20.21]
[CPU:  10.20 GB  Avail:  78.92 GB]
Done batch 13 of 129 in 107.43s

[2024-05-30 21:12:09.25]
[CPU:  17.90 GB  Avail:  70.31 GB]
Done batch 14 of 129 in 108.27s

[2024-05-30 21:12:35.53]
[CPU:  20.32 GB  Avail:  68.24 GB]
Done batch 15 of 129 in 117.12s

[2024-05-30 21:12:53.97]
[CPU:  22.54 GB  Avail:  68.77 GB]
Done batch 16 of 129 in 93.67s

[2024-05-30 21:12:54.96]
[CPU:  20.24 GB  Avail:  70.89 GB]
Done batch 17 of 129 in 103.12s

[2024-05-30 21:13:32.43]
[CPU:  25.33 GB  Avail:  65.16 GB]
Done batch 18 of 129 in 140.43s

[2024-05-30 21:13:41.67]
[CPU:  25.80 GB  Avail:  65.18 GB]
Done batch 19 of 129 in 149.37s

[2024-05-30 21:14:02.10]
[CPU:  25.89 GB  Avail:  67.21 GB]
Done batch 20 of 129 in 174.32s

[2024-05-30 21:14:08.62]
[CPU:  22.81 GB  Avail:  72.03 GB]
Done batch 21 of 129 in 172.85s

[2024-05-30 21:14:08.73]
[CPU:  20.32 GB  Avail:  75.00 GB]
Done batch 22 of 129 in 175.80s

[2024-05-30 21:14:08.84]
[CPU:  17.84 GB  Avail:  77.22 GB]
Done batch 23 of 129 in 173.85s

[2024-05-30 21:14:09.53]
[CPU:  15.56 GB  Avail:  77.79 GB]
Done batch 24 of 129 in 173.89s

[2024-05-30 21:14:18.95]
[CPU:  16.30 GB  Avail:  74.60 GB]
Done batch 25 of 129 in 180.82s

[2024-05-30 21:14:36.54]
[CPU:  18.98 GB  Avail:  74.19 GB]
Done batch 26 of 129 in 146.42s

[2024-05-30 21:14:44.00]
[CPU:  18.76 GB  Avail:  74.19 GB]
Done batch 27 of 129 in 108.97s

[2024-05-30 21:14:53.42]
[CPU:  18.93 GB  Avail:  73.92 GB]
Done batch 28 of 129 in 137.28s

[2024-05-30 21:16:02.68]
[CPU:  20.81 GB  Avail:  72.85 GB]
Done batch 29 of 129 in 119.15s

[2024-05-30 21:16:09.84]
[CPU:  20.93 GB  Avail:  72.40 GB]
Done batch 30 of 129 in 157.04s

[2024-05-30 21:17:33.91]
[CPU:  26.11 GB  Avail:  65.50 GB]
Done batch 31 of 129 in 205.21s

[2024-05-30 21:17:45.41]
[CPU:  26.28 GB  Avail:  64.92 GB]
Done batch 32 of 129 in 243.61s

[2024-05-30 21:17:56.32]
[CPU:  25.88 GB  Avail:  65.65 GB]
Done batch 33 of 129 in 199.11s

[2024-05-30 21:17:59.08]
[CPU:  24.16 GB  Avail:  67.16 GB]
Done batch 34 of 129 in 195.03s

[2024-05-30 21:18:05.81]
[CPU:  23.85 GB  Avail:  67.53 GB]
Done batch 35 of 129 in 237.01s

[2024-05-30 21:18:12.62]
[CPU:  21.40 GB  Avail:  70.03 GB]
Done batch 36 of 129 in 249.22s

[2024-05-30 21:18:13.60]
[CPU:  19.24 GB  Avail:  72.63 GB]
Done batch 37 of 129 in 234.58s

[2024-05-30 21:18:23.21]
[CPU:  19.15 GB  Avail:  72.33 GB]
Done batch 38 of 129 in 208.75s

[2024-05-30 21:18:28.36]
[CPU:  17.87 GB  Avail:  73.58 GB]
Done batch 39 of 129 in 161.68s

[2024-05-30 21:18:34.87]
[CPU:  18.55 GB  Avail:  72.91 GB]
Done batch 40 of 129 in 150.11s

[2024-05-30 21:18:35.73]
[CPU:  16.67 GB  Avail:  74.80 GB]
Done batch 41 of 129 in 266.14s

[2024-05-30 21:18:46.80]
[CPU:  19.60 GB  Avail:  72.01 GB]
Done batch 42 of 129 in 72.78s

[2024-05-30 21:18:55.61]
[CPU:  19.56 GB  Avail:  72.00 GB]
Done batch 43 of 129 in 59.23s

[2024-05-30 21:19:04.73]
[CPU:  19.84 GB  Avail:  71.38 GB]
Done batch 44 of 129 in 74.17s

[2024-05-30 21:19:22.80]
[CPU:  20.56 GB  Avail:  70.77 GB]
Done batch 45 of 129 in 74.84s

[2024-05-30 21:20:07.44]
[CPU:  21.65 GB  Avail:  70.09 GB]
Done batch 46 of 129 in 114.77s

[2024-05-30 21:20:13.88]
[CPU:  21.54 GB  Avail:  69.80 GB]
Done batch 47 of 129 in 120.24s

[2024-05-30 21:20:25.38]
[CPU:  21.66 GB  Avail:  69.84 GB]
Done batch 48 of 129 in 122.10s

[2024-05-30 21:20:30.17]
[CPU:  19.67 GB  Avail:  70.73 GB]
Done batch 49 of 129 in 121.77s

[2024-05-30 21:20:41.62]
[CPU:  21.36 GB  Avail:  70.45 GB]
Done batch 50 of 129 in 138.23s

[2024-05-30 21:20:46.16]
[CPU:  19.24 GB  Avail:  72.16 GB]
Done batch 51 of 129 in 143.27s

[2024-05-30 21:20:48.45]
[CPU:  18.25 GB  Avail:  73.16 GB]
Done batch 52 of 129 in 133.54s

[2024-05-30 21:21:00.67]
[CPU:  20.90 GB  Avail:  70.71 GB]
Done batch 53 of 129 in 144.89s

[2024-05-30 21:21:09.28]
[CPU:  19.88 GB  Avail:  72.10 GB]
Done batch 54 of 129 in 139.94s

[2024-05-30 21:21:13.47]
[CPU:  19.30 GB  Avail:  72.44 GB]
Done batch 55 of 129 in 137.49s

[2024-05-30 21:21:20.59]
[CPU:  20.20 GB  Avail:  71.25 GB]
Done batch 56 of 129 in 133.39s

[2024-05-30 21:21:26.89]
[CPU:  18.82 GB  Avail:  72.86 GB]
Done batch 57 of 129 in 78.79s

[2024-05-30 21:21:38.02]
[CPU:  20.27 GB  Avail:  71.42 GB]
Done batch 58 of 129 in 68.51s

[2024-05-30 21:21:48.79]
[CPU:  20.21 GB  Avail:  71.55 GB]
Done batch 59 of 129 in 78.59s

[2024-05-30 21:21:57.40]
[CPU:  20.19 GB  Avail:  71.44 GB]
Done batch 60 of 129 in 101.30s

[2024-05-30 21:22:02.67]
[CPU:  18.79 GB  Avail:  72.84 GB]
Done batch 61 of 129 in 89.74s

[2024-05-30 21:22:13.22]
[CPU:  20.30 GB  Avail:  71.29 GB]
Done batch 62 of 129 in 84.75s

[2024-05-30 21:22:17.97]
[CPU:  18.62 GB  Avail:  73.25 GB]
Done batch 63 of 129 in 92.24s

[2024-05-30 21:22:32.03]
[CPU:  19.34 GB  Avail:  72.26 GB]
Done batch 64 of 129 in 87.58s

[2024-05-30 21:22:42.27]
[CPU:  20.13 GB  Avail:  71.75 GB]
Done batch 65 of 129 in 92.96s

[2024-05-30 21:22:48.61]
[CPU:  19.48 GB  Avail:  72.13 GB]
Done batch 66 of 129 in 84.00s

[2024-05-30 21:23:01.37]
[CPU:  19.64 GB  Avail:  71.97 GB]
Done batch 67 of 129 in 115.13s

[2024-05-30 21:23:13.81]
[CPU:  20.10 GB  Avail:  71.59 GB]
Done batch 68 of 129 in 120.32s

[2024-05-30 21:23:17.88]
[CPU:  17.70 GB  Avail:  72.16 GB]
Done batch 69 of 129 in 75.19s

[2024-05-30 21:23:21.25]
[CPU:  16.51 GB  Avail:  75.08 GB]
Done batch 70 of 129 in 63.26s

[2024-05-30 21:23:36.74]
[CPU:  18.39 GB  Avail:  73.50 GB]
Done batch 71 of 129 in 117.46s

[2024-05-30 21:23:51.87]
[CPU:  20.13 GB  Avail:  71.26 GB]
Done batch 72 of 129 in 122.76s

[2024-05-30 21:25:21.46]
[CPU:  22.56 GB  Avail:  69.34 GB]
Done batch 73 of 129 in 123.57s

[2024-05-30 21:25:26.84]
[CPU:  21.07 GB  Avail:  71.39 GB]
Done batch 74 of 129 in 184.47s

[2024-05-30 21:25:37.52]
[CPU:  21.76 GB  Avail:  70.08 GB]
Done batch 75 of 129 in 193.21s

[2024-05-30 21:25:53.48]
[CPU:  21.93 GB  Avail:  70.00 GB]
Done batch 76 of 129 in 189.23s

[2024-05-30 21:25:59.65]
[CPU:  20.79 GB  Avail:  71.34 GB]
Done batch 77 of 129 in 188.20s

[2024-05-30 21:26:02.02]
[CPU:  18.82 GB  Avail:  73.12 GB]
Done batch 78 of 129 in 180.64s

[2024-05-30 21:26:11.83]
[CPU:  18.78 GB  Avail:  73.41 GB]
Done batch 79 of 129 in 203.21s

[2024-05-30 21:26:28.59]
[CPU:  20.11 GB  Avail:  72.04 GB]
Done batch 80 of 129 in 190.95s

[2024-05-30 21:26:36.17]
[CPU:  19.39 GB  Avail:  72.51 GB]
Done batch 81 of 129 in 194.91s

[2024-05-30 21:26:37.13]
[CPU:  17.14 GB  Avail:  74.79 GB]
Done batch 82 of 129 in 193.58s

[2024-05-30 21:26:45.64]
[CPU:  16.97 GB  Avail:  74.87 GB]
Done batch 83 of 129 in 188.88s

[2024-05-30 21:27:00.69]
[CPU:  19.70 GB  Avail:  72.17 GB]
Done batch 84 of 129 in 186.32s

[2024-05-30 21:27:17.12]
[CPU:  20.86 GB  Avail:  69.04 GB]
Done batch 85 of 129 in 110.26s

[2024-05-30 21:27:24.33]
[CPU:  19.46 GB  Avail:  72.72 GB]
Done batch 86 of 129 in 106.78s

[2024-05-30 21:27:42.28]
[CPU:  20.88 GB  Avail:  70.92 GB]
Done batch 87 of 129 in 100.24s

[2024-05-30 21:27:43.83]
[CPU:  18.51 GB  Avail:  73.52 GB]
Done batch 88 of 129 in 109.72s

[2024-05-30 21:27:49.39]
[CPU:  17.65 GB  Avail:  74.35 GB]
Done batch 89 of 129 in 109.72s

[2024-05-30 21:28:01.99]
[CPU:  18.77 GB  Avail:  73.44 GB]
Done batch 90 of 129 in 110.14s

[2024-05-30 21:28:25.94]
[CPU:  20.84 GB  Avail:  71.02 GB]
Done batch 91 of 129 in 108.79s

[2024-05-30 21:28:29.45]
[CPU:  18.45 GB  Avail:  72.42 GB]
Done batch 92 of 129 in 120.83s

[2024-05-30 21:28:29.57]
[CPU:  15.98 GB  Avail:  74.70 GB]
Done batch 93 of 129 in 113.38s

[2024-05-30 21:28:36.57]
[CPU:  15.90 GB  Avail:  76.52 GB]
Done batch 94 of 129 in 110.91s

[2024-05-30 21:28:39.84]
[CPU:  14.60 GB  Avail:  77.66 GB]
Done batch 95 of 129 in 10.80s

[2024-05-30 21:28:58.64]
[CPU:  19.00 GB  Avail:  73.21 GB]
Done batch 96 of 129 in 94.30s

[2024-05-30 21:29:09.72]
[CPU:  17.35 GB  Avail:  74.62 GB]
Done batch 97 of 129 in 129.01s

[2024-05-30 21:29:12.92]
[CPU:  15.00 GB  Avail:  77.05 GB]
Done batch 98 of 129 in 110.96s

[2024-05-30 21:29:20.82]
[CPU:  12.94 GB  Avail:  79.00 GB]
Done batch 99 of 129 in 96.89s

[2024-05-30 21:29:20.83]
[CPU:  10.44 GB  Avail:  81.43 GB]
Done batch 100 of 129 in 91.43s

[2024-05-30 21:29:23.00]
[CPU:   7.94 GB  Avail:  84.19 GB]
Done batch 101 of 129 in 81.00s

[2024-05-30 21:29:27.07]
[CPU:   5.44 GB  Avail:  86.50 GB]
Done batch 102 of 129 in 57.48s

[2024-05-30 21:29:27.09]
[CPU:   2.94 GB  Avail:  88.99 GB]
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/jobs/extract/run.py", line 1083, in run_restack_particles
    pool.map(restack_batch, enumerate(batches))
  File "/spshared/apps/cryosparc4/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/spshared/apps/cryosparc4/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
  File "/spshared/apps/cryosparc4/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/spshared/apps/cryosparc4/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/jobs/extract/run.py", line 1069, in restack_batch
    data[idx] = particle.rawdata.view() # reads data
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
    return self.get()
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 70, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
OSError: 

IO request details:
Error ocurred (Invalid argument) at line 681 in mrc_readmic (1) 

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

filename:    /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1562347_Data_2681297_2681299_20240504_211926_fractions_patch_aligned_doseweighted_particles.mrc
filetype:    0
header_only: 0
idx_start:   133
idx_limit:   134
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer:      (nil)
buffer_sz:   0
nx, ny, nz:  0 0 0
dtype:       0
total_time:  -1.000000
io_time:     0.000000

Any insight into this?

Best
Jinlun

@MiaoXiaoPu1121 Please can you post the outputs of these commands:

ls -l /ssd/cryosparc_cache/instance_G09\:39001/projects/P6/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
ls -l /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
ls -l /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1562347_Data_2681297_2681299_20240504_211926_fractions_patch_aligned_doseweighted_particles.mrc
ls -l /data10/kzhang/cryosparc/CS-/J58/extract/*_fractions_patch_aligned_doseweighted_particles.mrc | tail -n 10

You may also confirm the four points listed in the reply to @gcaballero above.

Hi @wtempel,

This is the output:

[spuser@G09] 604% ls /ssd/cryosparc_cache/instance_G09:39001/projects/P6/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
/ssd/cryosparc_cache/instance_G09:39001/projects/P6/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
[spuser@G09] 605% ls -l /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 7078912 May 31 05:59 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
[spuser@G09] 606% ls -l /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1562347_Data_2681297_2681299_20240504_211926_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 34866176 May 31 05:50 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1562347_Data_2681297_2681299_20240504_211926_fractions_patch_aligned_doseweighted_particles.mrc
[spuser@G09] 607% ls -l /data10/kzhang/cryosparc/CS-/J58/extract/*_fractions_patch_aligned_doseweighted_particles.mrc | tail -n 10
-rw-rw-r--. 1 spuser spuser 31458304 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974060_Data_2681309_2681311_20240506_013605_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 33293312 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974074_Data_2681258_2681260_20240506_013736_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 26477568 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974153_Data_2681258_2681260_20240506_013925_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 34079744 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974153_Data_2681289_2681291_20240506_013915_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 27001856 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974153_Data_2681297_2681299_20240506_013934_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 31196160 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974157_Data_2681258_2681260_20240506_014100_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 35914752 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974157_Data_2681289_2681291_20240506_014050_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 30934016 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974161_Data_2681289_2681291_20240506_014217_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 30147584 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974161_Data_2681297_2681299_20240506_014236_fractions_patch_aligned_doseweighted_particles.mrc
-rw-rw-r--. 1 spuser spuser 32244736 May 31 06:18 /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_5974161_Data_2681309_2681311_20240506_014245_fractions_patch_aligned_doseweighted_particles.mrc

And for the “You may also” part:

  1. Yes, I did the motion correction and CTF estimation in live mode and exported the exposures for particle picking, inspection, and extraction; everything was good until I began 2D classification, when the error showed up.

  2. Yes, I enabled “Cache particle images on SSD” for 2D classification.

  3. Yes, the error still occurs:

    [2024-05-31 12:46:12.75]
    License is valid.
    
    [2024-05-31 12:46:12.75]
    Launching job on lane default target G09 ...
    
    [2024-05-31 12:46:12.76]
    Running job on master node hostname G09
    
    [2024-05-31 12:46:16.11]
    [CPU:  217.9 MB  Avail: 105.87 GB]
    Job J67 Started
    
    [2024-05-31 12:46:16.12]
    [CPU:  217.9 MB  Avail: 105.87 GB]
    Master running v4.4.1, worker running v4.4.1
    
    [2024-05-31 12:46:16.13]
    [CPU:  218.0 MB  Avail: 105.87 GB]
    Working in directory: /data10/kzhang/cryosparc/CS-/J67
    
    [2024-05-31 12:46:16.13]
    [CPU:  218.0 MB  Avail: 105.87 GB]
    Running on lane default
    
    [2024-05-31 12:46:16.13]
    [CPU:  218.0 MB  Avail: 105.87 GB]
    Resources allocated: 
    
    [2024-05-31 12:46:16.14]
    [CPU:  218.0 MB  Avail: 105.87 GB]
      Worker:  G09
    
    [2024-05-31 12:46:16.14]
    [CPU:  218.0 MB  Avail: 105.87 GB]
      CPU   :  [0, 1]
    
    [2024-05-31 12:46:16.14]
    [CPU:  218.0 MB  Avail: 105.87 GB]
      GPU   :  [0, 1]
    
    [2024-05-31 12:46:16.14]
    [CPU:  218.0 MB  Avail: 105.87 GB]
      RAM   :  [0, 1, 2]
    
    [2024-05-31 12:46:16.15]
    [CPU:  218.0 MB  Avail: 105.86 GB]
      SSD   :  False
    
    [2024-05-31 12:46:16.15]
    [CPU:  218.0 MB  Avail: 105.86 GB]
    --------------------------------------------------------------
    
    [2024-05-31 12:46:16.15]
    [CPU:  218.0 MB  Avail: 105.86 GB]
    Importing job module for job type class_2D_new...
    
    [2024-05-31 12:46:16.50]
    [CPU:  250.8 MB  Avail: 105.72 GB]
    Job ready to run
    
    [2024-05-31 12:46:16.51]
    [CPU:  250.8 MB  Avail: 105.72 GB]
    ***************************************************************
    
    [2024-05-31 12:46:23.44]
    [CPU:   1.10 GB  Avail: 104.86 GB]
    Using random seed of 1609687540
    
    [2024-05-31 12:46:23.44]
    [CPU:   1.10 GB  Avail: 104.86 GB]
    Loading a ParticleStack with 1281812 items...
    
    [2024-05-31 12:50:01.32]
    [CPU:   1.64 GB  Avail: 104.54 GB]
      Done.
    
    [2024-05-31 12:50:01.33]
    [CPU:   1.64 GB  Avail: 104.54 GB]
    Windowing particles
    
    [2024-05-31 12:50:01.33]
    [CPU:   1.64 GB  Avail: 104.54 GB]
      Done.
    
    [2024-05-31 12:50:01.40]
    [CPU:   1.74 GB  Avail: 104.40 GB]
    Using 200 classes.
    
    [2024-05-31 12:50:01.40]
    [CPU:   1.74 GB  Avail: 104.40 GB]
    Computing 2D class averages: 
    
    [2024-05-31 12:50:01.40]
    [CPU:   1.74 GB  Avail: 104.39 GB]
      Volume Size: 256 (voxel size 1.93A)
    
    [2024-05-31 12:50:01.41]
    [CPU:   1.74 GB  Avail: 104.39 GB]
      Zeropadded Volume Size: 512
    
    [2024-05-31 12:50:01.41]
    [CPU:   1.74 GB  Avail: 104.39 GB]
      Data Size: 256 (pixel size 1.93A)
    
    [2024-05-31 12:50:01.41]
    [CPU:   1.74 GB  Avail: 104.39 GB]
      Using Reconstruction Resolution: 6.00A (82.0 radius)
    
    [2024-05-31 12:50:01.41]
    [CPU:   1.74 GB  Avail: 104.39 GB]
      Using Alignment Resolution: 6.00A (82.0 radius)
    
    [2024-05-31 12:50:01.42]
    [CPU:   1.74 GB  Avail: 104.39 GB]
      Windowing only corners of 2D classes at each iteration.
    
    [2024-05-31 12:50:01.42]
    [CPU:   1.74 GB  Avail: 104.39 GB]
    Using random seed for initialization of 1609687540
    
    [2024-05-31 12:50:02.11]
    [CPU:   1.82 GB  Avail: 104.20 GB]
      Done in 0.688s.
    
    [2024-05-31 12:50:07.46]
    [CPU:   1.51 GB  Avail: 104.80 GB]
    Traceback (most recent call last):
      File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
      File "cryosparc_master/cryosparc_compute/jobs/class2D/newrun.py", line 195, in cryosparc_master.cryosparc_compute.jobs.class2D.newrun.run_class_2D
      File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/particles.py", line 46, in get_prepared_fspace_data
        return fourier.resample_fspace(fourier.fft(self.get_prepared_real_data()), self.dataset.N)
      File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/particles.py", line 41, in get_prepared_real_data
        return (self.dataset.prepare_normalization_factor * self['blob/sign']) * (self.dataset.prepare_real_window * (self.get_original_real_data()))
      File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/particles.py", line 34, in get_original_real_data
        data = self.blob.view()
      File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
        return self.get()
      File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
        _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
      File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 70, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
    OSError: 
    
    IO request details:
    Error ocurred (Invalid argument) at line 681 in mrc_readmic (1) 
    
    The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).
    
    filename:    /data10/kzhang/cryosparc/CS-/J58/extract/FoilHole_1565088_Data_2681297_2681299_20240503_185018_fractions_patch_aligned_doseweighted_particles.mrc
    filetype:    0
    header_only: 0
    idx_start:   140
    idx_limit:   141
    eer_upsampfactor: 2
    eer_numfractions: 40
    num_threads: 6
    buffer:      (nil)
    buffer_sz:   0
    nx, ny, nz:  0 0 0
    dtype:       0
    total_time:  -1.000000
    io_time:     0.000000
    
  4. I tried Check For Corrupt Particles, and it seems all particles are good:

    [2024-05-31 13:13:16.54]
    [CPU:  359.6 MB  Avail: 106.01 GB]
    No corruption detected.
    
    [2024-05-31 13:13:16.99]
    [CPU:  221.2 MB  Avail: 106.55 GB]
    --------------------------------------------------------------
    
    [2024-05-31 13:13:16.99]
    [CPU:  221.2 MB  Avail: 106.56 GB]
    Compiling job outputs...
    
    [2024-05-31 13:13:17.00]
    [CPU:  221.2 MB  Avail: 106.57 GB]
    Passing through outputs for output group particles from input group particles
    
    [2024-05-31 13:13:17.47]
    [CPU:  347.7 MB  Avail: 106.49 GB]
    This job outputted results ['blob']
    
    [2024-05-31 13:13:17.47]
    [CPU:  347.7 MB  Avail: 106.49 GB]
      Loaded output dset with 1281812 items
    
    [2024-05-31 13:13:17.47]
    [CPU:  347.7 MB  Avail: 106.49 GB]
    Passthrough results ['location', 'pick_stats', 'ctf']
    
    [2024-05-31 13:13:19.38]
    [CPU:  760.9 MB  Avail: 106.01 GB]
      Loaded passthrough dset with 1281812 items
    
    [2024-05-31 13:13:19.93]
    [CPU:  782.0 MB  Avail: 105.99 GB]
      Intersection of output and passthrough has 1281812 items
    
    [2024-05-31 13:13:20.97]
    [CPU:  513.0 MB  Avail: 106.25 GB]
    Checking outputs for output group particles
    
    [2024-05-31 13:13:23.72]
    [CPU:  708.9 MB  Avail: 105.67 GB]
    Updating job size...
    
    [2024-05-31 13:13:23.73]
    [CPU:  708.9 MB  Avail: 105.67 GB]
    Exporting job and creating csg files...
    
    [2024-05-31 13:13:24.54]
    [CPU:  708.9 MB  Avail: 105.72 GB]
    ***************************************************************
    
    [2024-05-31 13:13:24.54]
    [CPU:  708.9 MB  Avail: 105.72 GB]
    Job complete. Total time 1538.23s
    

Any insights based on this?

Best,
Jinlun

@MiaoXiaoPu1121 Would you like to share

  1. job reports for jobs J58 and J67
  2. files
    J58/extract/FoilHole_1564340_Data_2681289_2681291_20240503_220420_fractions_patch_aligned_doseweighted_particles.mrc
    J58/extract/FoilHole_1562347_Data_2681297_2681299_20240504_211926_fractions_patch_aligned_doseweighted_particles.mrc
    J58/extracted_particles.cs
    

with the CryoSPARC developers?
If you can share these files with us using your own (or own institution’s) file sharing platform, please send me a private message with the details. We alternatively can make arrangements for you to upload the files to our server.

Same error here after both CPU and GPU particle extractions. Any suggestions on how to proceed? CryoSPARC 4.5.3.

Traceback (most recent call last):
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2294, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 2714, in cryosparc_master.cryosparc_compute.engine.newengine.process.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 2736, in cryosparc_master.cryosparc_compute.engine.newengine.process.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 550, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.read_image_data
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/particles.py", line 34, in get_original_real_data
    data = self.blob.view()
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
    return self.get()
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 82, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
OSError: 

IO request details:
Error ocurred (Invalid argument) at line 680 in mrc_readmic (1) 

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

filename:    /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/links/P41-J126-1721284727/940c4fd91dcf423764eeddbdcae923803a74a3db.mrc
filetype:    0
header_only: 0
idx_start:   23
idx_limit:   24
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer:      (nil)
buffer_sz:   0
nx, ny, nz:  0 0 0
dtype:       0
total_time:  -1.000000
io_time:     0.000000

@rpiwowarcz Please can you post the output of these commands

  • on the relevant worker
    hostname -f
    stat -f /ssd2/cryosparc_ssd/
    df -h /ssd2/cryosparc_ssd/
    ls -l /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/links/P41-J126-1721284727/940c4fd91dcf423764eeddbdcae923803a74a3db.mrc
    
  • on the CryoSPARC master
    cryosparcm status | grep -e HOSTNAME
    cryosparcm cli "get_scheduler_targets()"
    cryosparcm cli "get_job('P99', 'J199', 'version', 'job_type', 'params_spec', 'status', 'instance_information', 'input_slot_groups')"
    

where you replace P99, J199 with the failed job’s project and job IDs, respectively.

You may also try running a clone of the failed job with Cache particle images on SSD disabled and let us know if a similar error occurs.
[Edited. Earlier version referred to incorrect job type.]

Thanks for the help. Here’s the original error after restarting the job this morning:

Traceback (most recent call last):
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2294, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 134, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 135, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 1080, in cryosparc_master.cryosparc_compute.engine.engine.process.work
  File "cryosparc_master/cryosparc_compute/engine/engine.py", line 129, in cryosparc_master.cryosparc_compute.engine.engine.EngineThread.load_image_data_gpu
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/particles.py", line 34, in get_original_real_data
    data = self.blob.view()
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
    return self.get()
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 82, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
OSError: 

IO request details:
Error ocurred (Invalid argument) at line 680 in mrc_readmic (1) 

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

filename:    /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/links/P41-J121-1721419103/24bcdae37c56e09db19303992d79ffde8ffc0295.mrc
filetype:    0
header_only: 0
idx_start:   23
idx_limit:   24
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer:      (nil)
buffer_sz:   0
nx, ny, nz:  0 0 0
dtype:       0
total_time:  -1.000000
io_time:     0.000000

The worker and master are the same workstation. Here’s the output of the requested commands:

rpiwowarcz@skgpu08:rpiwowarcz$ hostname -f
skgpu08.rockefeller.edu
rpiwowarcz@skgpu08:rpiwowarcz$ stat -f /ssd2/cryosparc_ssd/
  File: "/ssd2/cryosparc_ssd/"
    ID: fd0300000000 Namelen: 255     Type: xfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 1874844160 Free: 1762072345 Available: 1762072345
Inodes: Total: 750146304  Free: 750111998
rpiwowarcz@skgpu08:rpiwowarcz$ df -h /ssd2/cryosparc_ssd/
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_nvme2-vol_nvme2  7.0T  431G  6.6T   7% /ssd2
rpiwowarcz@skgpu08:rpiwowarcz$ ls -l /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/links/P41-J121-1721419103/24bcdae37c56e09db19303992d79ffde8ffc0295.mrc
lrwxrwxrwx 1 klin_csparc klin 58 Jul 19 11:58 /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/links/P41-J121-1721419103/24bcdae37c56e09db19303992d79ffde8ffc0295.mrc -> ../../store-v2/24/24bcdae37c56e09db19303992d79ffde8ffc0295
[klin_csparc@skgpu08 ~]$ cryosparcm status | grep -e HOSTNAME
export CRYOSPARC_MASTER_HOSTNAME="skgpu08.rockefeller.edu"
[klin_csparc@skgpu08 ~]$ cryosparcm cli "get_scheduler_targets()"
[{'cache_path': '/ssd2/cryosparc_ssd', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB'}, {'id': 1, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB'}, {'id': 2, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB'}, {'id': 3, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB'}], 'hostname': 'skgpu08.rockefeller.edu', 'lane': 'default', 'monitor_port': None, 'name': 'skgpu08.rockefeller.edu', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96]}, 'ssh_str': 'klin_csparc@skgpu08.rockefeller.edu', 'title': 'Worker node skgpu08.rockefeller.edu', 'type': 'node', 'worker_bin_path': '/home/klin_csparc/Cryosparc3/cryosparc_worker/bin/cryosparcw'}]
[klin_csparc@skgpu08 ~]$ cryosparcm cli "get_job('P41', 'J121', 'version', 'job_type', 'params_spec', 'status', 'instance_information', 'input_slot_groups')"
{'_id': '66985dbfe48138f577ece893', 'input_slot_groups': [{'connections': [{'group_name': 'particles', 'job_uid': 'J119', 'slots': [{'group_name': 'particles', 'job_uid': 'J119', 'result_name': 'blob', 'result_type': 'particle.blob', 'slot_name': 'blob', 'version': 'F'}, {'group_name': 'particles', 'job_uid': 'J119', 'result_name': 'ctf', 'result_type': 'particle.ctf', 'slot_name': 'ctf', 'version': 'F'}, {'group_name': 'particles', 'job_uid': 'J119', 'result_name': 'location', 'result_type': 'particle.location', 'slot_name': None, 'version': 'F'}, {'group_name': 'particles', 'job_uid': 'J119', 'result_name': 'alignments2D', 'result_type': 'particle.alignments2D', 'slot_name': None, 'version': 'F'}, {'group_name': 'particles', 'job_uid': 'J119', 'result_name': 'pick_stats', 'result_type': 'particle.pick_stats', 'slot_name': None, 'version': 'F'}]}], 'count_max': inf, 'count_min': 1, 'description': 'Particle stacks to use. Multiple stacks will be concatenated.', 'name': 'particles', 'repeat_allowed': False, 'slots': [{'description': '', 'name': 'blob', 'optional': False, 'title': 'Particle data blobs', 'type': 'particle.blob'}, {'description': '', 'name': 'ctf', 'optional': False, 'title': 'Particle ctf parameters', 'type': 'particle.ctf'}, {'description': '', 'name': 'alignments3D', 'optional': True, 'title': 'Particle 3D alignments (optional)', 'type': 'particle.alignments3D'}], 'title': 'Particle stacks', 'type': 'particle'}, {'connections': [{'group_name': 'volume', 'job_uid': 'J85', 'slots': [{'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'map', 'result_type': 'volume.blob', 'slot_name': 'map', 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'map_sharp', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'map_half_A', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'map_half_B', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'mask_refine', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'mask_fsc', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'mask_fsc_auto', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'volume', 'job_uid': 'J85', 'result_name': 'precision', 'result_type': 'volume.blob', 'slot_name': None, 'version': 'F'}]}], 'count_max': 1, 'count_min': 1, 'description': '', 'name': 'volume', 'repeat_allowed': False, 'slots': [{'description': '', 'name': 'map', 'optional': False, 'title': 'Initial volume raw data', 'type': 'volume.blob'}], 'title': 'Initial volume', 'type': 'volume'}, {'connections': [], 'count_max': 1, 'count_min': 0, 'description': '', 'name': 'mask', 'repeat_allowed': False, 'slots': [{'description': '', 'name': 'mask', 'optional': False, 'title': 'Static mask', 'type': 'volume.blob'}], 'title': 'Static mask', 'type': 'mask'}], 'instance_information': {'CUDA_version': '11.8', 'available_memory': '662.16GB', 'cpu_model': 'Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz', 'driver_version': '11.8', 'gpu_info': [{'id': 0, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB', 'pcie': '0000:18:00'}, {'id': 1, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB', 'pcie': '0000:3b:00'}, {'id': 2, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB', 'pcie': '0000:86:00'}, {'id': 3, 'mem': 42346610688, 'name': 'NVIDIA A100-PCIE-40GB', 'pcie': '0000:af:00'}], 'ofd_hard_limit': 4096, 'ofd_soft_limit': 1024, 'physical_cores': 56, 'platform_architecture': 'x86_64', 'platform_node': 'skgpu08.rockefeller.edu', 'platform_release': '3.10.0-1160.el7.x86_64', 'platform_version': '#1 SMP Mon Oct 19 16:18:59 UTC 2020', 'total_memory': '754.35GB', 'used_memory': '90.35GB'}, 'job_type': 'homo_refine_new', 'params_spec': {'refine_do_marg': {'value': True}, 'refine_num_final_iterations': {'value': 2}, 'refine_res_init': {'value': 12}, 'refine_scale_min': {'value': True}}, 'project_uid': 'P41', 'status': 'failed', 'uid': 'J121', 'version': 'v4.5.3'}

Running the job without the SSD cache results in a similar error.

Traceback (most recent call last):
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2294, in run_with_except_hook
    run_old(*args, **kw)
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 2714, in cryosparc_master.cryosparc_compute.engine.newengine.process.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 2736, in cryosparc_master.cryosparc_compute.engine.newengine.process.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 550, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.read_image_data
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/particles.py", line 34, in get_original_real_data
    data = self.blob.view()
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 145, in view
    return self.get()
  File "/home/klin_csparc/Cryosparc3/cryosparc_worker/cryosparc_compute/blobio/mrc.py", line 140, in get
    _, data, total_time = prefetch.synchronous_native_read(self.fname, idx_start = self.page, idx_limit = self.page+1)
  File "cryosparc_master/cryosparc_compute/blobio/prefetch.py", line 82, in cryosparc_master.cryosparc_compute.blobio.prefetch.synchronous_native_read
OSError: 

IO request details:
Error ocurred (Invalid argument) at line 680 in mrc_readmic (1) 

The requested frame/particle cannot be accessed. The file may be corrupt, or there may be a mismatch between the file and its associated metadata (i.e. cryosparc .cs file).

filename:    /data07/rpiwowarcz/csparc/CS-20240712-y634-b779-go/J119/extract/CB6_3_day2_294-133_0005_-0.0_X-1Y+1-0_patch_aligned_doseweighted_particles.mrc
filetype:    0
header_only: 0
idx_start:   25
idx_limit:   26
eer_upsampfactor: 2
eer_numfractions: 40
num_threads: 6
buffer:      (nil)
buffer_sz:   0
nx, ny, nz:  0 0 0
dtype:       0
total_time:  -1.000000
io_time:     0.000000

Thanks @rpiwowarcz .
What is the output of the command

ls -l /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/store-v2/24/24bcdae37c56e09db19303992d79ffde8ffc0295

Did you also try running a clone of job J121 turning off the option Cache particle images on SSD?

Here’s the result of the ls command:

rpiwowarcz@skgpu08:rpiwowarcz$ ls -l /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/store-v2/24/24bcdae37c56e09db19303992d79ffde8ffc0295
-rw-rw-r-- 1 klin_csparc klin 3917824 Jul 19 11:58 /ssd2/cryosparc_ssd/instance_skgpu08.rockefeller.edu:39001/store-v2/24/24bcdae37c56e09db19303992d79ffde8ffc0295

The second output was from a job that didn’t use the SSD cache. By doing this, I was able to pinpoint which micrograph was the issue. Before extraction, I was joining two sets of particles and found that, after joining and removing duplicates, a few micrographs were doubled (with different particle picks on each). My best guess is that extraction wasn’t able to handle that conflict and only extracted picks from the first copy of each micrograph, while the later classification job expected more particles due to the doubled micrographs. Happy to provide more details if you think that would be helpful.
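
This is roughly how one can check for such doubled micrographs before extraction; a sketch assuming the particle .cs file carries the usual 'location/micrograph_uid' and 'location/micrograph_path' fields, with a hypothetical filename:

# Sketch: flag micrographs that appear under more than one uid in a
# joined particle set (the "doubled micrograph" situation above).
# Assumes the standard 'location/*' fields; the filename is hypothetical.
import numpy as np
from collections import defaultdict

cs = np.load("picked_particles.cs")                    # hypothetical path

uids_per_mic = defaultdict(set)
for uid, path in zip(cs["location/micrograph_uid"], cs["location/micrograph_path"]):
    path = path.decode() if isinstance(path, bytes) else str(path)
    uids_per_mic[path].add(int(uid))

for path, uids in sorted(uids_per_mic.items()):
    if len(uids) > 1:
        print(f"doubled: {path} appears under uids {sorted(uids)}")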

After removing the doubled micrographs, refinements work as expected.