2D Classification: ValueError: index is out of bounds for array

Hi @mchakra,

Definitely, all data you choose to share with us will be kept confidential and used only for the purpose of reproducing this error and creating a bug fix. I’ll send you a message with the credentials for our server, to which you can SCP files.

I see the same error occasionally on my CentOS system. I just saw it on a Class2D job, which died in iteration 4. The error looks similar in both the GUI log and the job log:

[CPU: 5.12 GB]   Traceback (most recent call last):
  File "/home/exx/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1791, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 1108, in cryosparc_compute.engine.engine.process.work
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 389, in cryosparc_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
  File "<__array_function__ internals>", line 6, in unravel_index
ValueError: index 1059285798 is out of bounds for array with size 336

If I rerun the same job, even with different parameters (e.g., a smaller mask), it dies at the same iteration:

[CPU: 5.24 GB]   Traceback (most recent call last):
  File "/home/exx/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1791, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 1108, in cryosparc_compute.engine.engine.process.work
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 389, in cryosparc_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
  File "<__array_function__ internals>", line 6, in unravel_index
ValueError: index -1087200183 is out of bounds for array with size 336

Perhaps notably, earlier in the log I see this:

[CPU: 2.66 GB]   Iteration 4
[CPU: 2.66 GB]     -- Effective number of classes per image: min nan | 25-pct nan | median nan | 75-pct nan | max nan 
[CPU: 2.66 GB]     -- Probability of best class per image: min nan | 25-pct nan | median nan | 75-pct nan | max nan 

We are still seeing this error intermittently. @stephan, can we provide any useful data to help fix it?

Hey @olibclarke,

Are you on the latest patch of v3.2.0? We fixed a bug with a similar cause; I wonder if the fix will help you here as well.

I’m on the next-to-latest… OK, I will update to the very latest and see if it fixes the issue, thanks!

Hey @olibclarke, @mchakra,

Have you experienced this issue while on the latest patch by any chance?

We are not on the latest patch, but currently using v3.2.0+210713, and do not seem to be encountering this issue.

Hi All,
I also encountered the same error during 2D classification (v3.2.0), which didn’t appear previously (I am doing 2D classification on a dataset that used to work fine). Any ideas what might be causing this?
Best,
Daniel

Is there an easy fix for this yet? I see a similar error even on a cloned 2D classification job that ran fine once. I am using v3.3.2. Thanks!

[CPU: 7.99 GB]   Traceback (most recent call last):
  File "/home/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1811, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 1109, in cryosparc_compute.engine.engine.process.work
  File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 390, in cryosparc_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
  File "<__array_function__ internals>", line 6, in unravel_index
ValueError: index -1110553443 is out of bounds for array with size 336

@zalkr Did the error occur at the very beginning of the job, or was there any indication of “normal” activity in the Overview tab before the error occurred?
It may be worth investigating whether the job input has been corrupted, either on cache or on persistent storage.
If you observe the error in a clone of this job even when Cache particle images on SSD is off, you may check for “particle” corruption with the Check For Corrupt Particles job, with Check for NaN values enabled.
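In numpy terms, that NaN check amounts to something like the following sketch (synthetic data and illustrative names; not the actual Check For Corrupt Particles implementation):

```python
import numpy as np

# Synthetic particle stack: (n_particles, box, box); names are illustrative only.
stack = np.random.default_rng(0).normal(size=(5, 64, 64)).astype(np.float32)
stack[2, 10, 10] = np.nan  # plant a single corrupt pixel

# Flag any particle image that contains at least one NaN value.
bad = np.isnan(stack).any(axis=(1, 2))
print(np.flatnonzero(bad))  # [2]
```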
For the latest version of that job type, please apply the 220518 patch to your instance.

I guess some particles lie at the edge of the micrographs, which likely causes the error. I wonder if CryoSPARC can exclude these particles?
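I don’t know of a built-in option for this, but in principle such picks can be filtered by coordinate before extraction. A hypothetical numpy sketch (all names and values made up for illustration):

```python
import numpy as np

# Hypothetical edge-pick filter (not a CryoSPARC feature): drop particles whose
# extraction box would extend past the micrograph boundary.
mic_shape = (4096, 4096)   # (height, width) in pixels, illustrative
box = 336                  # extraction box size in pixels, illustrative
half = box // 2

# x/y pick coordinates in pixels; values are made up for the example.
x = np.array([100, 2000, 4090])
y = np.array([2000, 2000, 2000])

keep = (
    (x >= half) & (x <= mic_shape[1] - half) &
    (y >= half) & (y <= mic_shape[0] - half)
)
print(np.flatnonzero(keep))  # [1]  (picks at x=100 and x=4090 are too close to the edge)
```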

@jianhaoc We are investigating the issue.

Hi all,
I am consistently seeing the same error when submitting 2D jobs; they fail after 4–25 min. Any further insights?
/Anna

Welcome to the forum @ASL.
Please can you post

  • your CryoSPARC version
  • the lines leading up to the traceback
  • the traceback
  • whether the error occurs when Cache particle images on SSD is disabled
  • whether you have run a Check For Corrupt Particles job, with Check for NaN values enabled.

Hi, I have recently been seeing this issue myself. I initially thought it was caused by micrographs being assigned the same UID and/or filename for some reason; however, I have seen that sometimes a 2D classification can be restarted and run to completion anyway, and sometimes not.

I’m on v4.4.1+240110, Cache particle images on SSD is enabled, and I have run multiple rounds of Check For Corrupt Particles, with and without Check for NaN values, but the jobs completed without finding any corrupt particles.

Traceback (most recent call last):
  File "/...../software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 2192, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 134, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 135, in cryosparc_master.cryosparc_compute.gpu.gpucore.GPUThread.run
  File "cryosparc_master/cryosparc_compute/jobs/class2D/newrun.py", line 632, in cryosparc_master.cryosparc_compute.jobs.class2D.newrun.class2D_engine_run.work
  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 1619, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.find_best_pose_shift_class
  File "<__array_function__ internals>", line 5, in unravel_index
ValueError: index -1099971734 is out of bounds for array with size 336

Hope this helps! @wtempel

Thanks for this feedback.

Do you see the same error also when particle caching is disabled?

Yes, I see the same error when caching is disabled!

Hi, I’m also getting the same error here. 2D classification consistently fails with this error:

  File "cryosparc_master/cryosparc_compute/engine/newengine.py", line 1619, in cryosparc_master.cryosparc_compute.engine.newengine.EngineThread.find_best_pose_shift_class
  File "<__array_function__ internals>", line 5, in unravel_index
ValueError: index -4194304 is out of bounds for array with size 336

These are 2D jobs on particles from Reference Based Motion Correction. Any help is appreciated!

@hgxy15 We are not sure what is causing this issue. Another user linked a similar error to a faulty RAM module: Out of bounds error, 4.2.1CS, 2D, 2080TI - #3 by mulik52.

Hi @wtempel, I was able to identify the problem following your previous suggestions. The error originated from a particle with NaN values. Thanks for the help!
