Invalid entry in index array

Hi,
I ran into a problem when I tried to do 2D classification and 3D reconstruction. I used RELION to pick and export particles, then imported the star file and stacks into cryoSPARC successfully. But when I start a 2D classification or 3D reconstruction job, I get errors like this:
Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 738, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 92, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 93, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 1059, in cryosparc2_compute.engine.engine.process.work
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 384, in cryosparc2_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
ValueError: invalid entry in index array

What could be the reason, and how can I solve it?
Many thanks

Hi All, I did the same (Import Particle Stack) and got the same error for 2D classification (tried v2.13.2 and v2.14.2):

Start of Iteration 0
[CPU: 3.09 GB] – DEV 0 THR 0 NUM 500 TOTAL 40.740437 ELAPSED 41.641017 –
[CPU: 7.03 GB] Traceback (most recent call last):
File "cryosparc2_compute/jobs/runcommon.py", line 1547, in run_with_except_hook
    run_old(*args, **kw)
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 1072, in cryosparc2_compute.engine.engine.process.work
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 392, in cryosparc2_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
ValueError: invalid entry in index array

Can anyone help? Thanks in advance!

Hi @donghuachen,

This is a difficult error to diagnose. Could you try:

  1. import the particle stack
  2. use the "Particle sets" tool to create a smaller subset of the particles, maybe half of the particles
  3. run 2D classification on the subset
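For reference, the subset in step 2 amounts to a random split of the particle indices. A minimal NumPy sketch of that idea (not cryoSPARC's actual implementation; the particle count is made up for illustration):

```python
import numpy as np

# Hypothetical illustration: split n_particles indices into two random halves,
# conceptually what the "Particle sets" tool does when creating subsets.
rng = np.random.default_rng(seed=0)
n_particles = 10000
order = rng.permutation(n_particles)
half_a = np.sort(order[: n_particles // 2])
half_b = np.sort(order[n_particles // 2 :])
```

Running 2D classification on each half separately can help localize whether a specific subset of particles triggers the error.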

Hi, I separated the dataset into two subsets, then ran 2D classification on one subset and still got the same error (tried both v2.13.2 and v2.14.2).

Hi, I got the same problem. I was using v2.14. The first round of 2D classification worked well. After selecting good particles, when I started the second round of 2D classification, the error suddenly appeared, and it persisted across several tries. I then repeated the first round of 2D classification and got the same error. I also tried upgrading to v2.15, but that did not fix it. I notice the error appears at different stages: sometimes at the very beginning of the job, sometimes during iterations (iterations 0 to 2). Could this be a problem with Python or CUDA, and is there any solution?

[CPU: 2.56 GB] Traceback (most recent call last):
File "cryosparc2_compute/jobs/runcommon.py", line 1685, in run_with_except_hook
    run_old(*args, **kw)
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 1072, in cryosparc2_compute.engine.engine.process.work
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 392, in cryosparc2_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
ValueError: invalid entry in index array

Update: I am running CentOS 7 with CUDA 10.0 (also tried 10.2). One thing I should mention: there is always an error during installation:

database: started
command_core: started

500 Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

command_vis: started
command_proxy: started
webapp: started

It seems the problem is specific to 2D classification, not ab initio reconstruction or other job types.
One thing I noticed in the 2D classification job builder: there used to be both a "Circular mask diameter" and a "Circular mask diameter outer" parameter, but now only "Circular mask diameter" is visible. In my latest try I left this parameter blank, and the job ran to the end without a problem. In previous tries I always used 200 Å for this parameter, and the job always crashed with the error "invalid entry in index array". I'm not sure what that means.
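As a rough illustration of why a mask diameter given in Å can interact badly with imported metadata: the diameter is converted to a pixel radius using the pixel size carried over from the STAR file, so a wrong pixel size yields a mask of the wrong extent. This is a hedged sketch, not cryoSPARC's actual masking code; `circular_mask`, the box size, and the pixel sizes are made up for illustration:

```python
import numpy as np

def circular_mask(box_px, diameter_A, pixel_size_A):
    """Boolean mask selecting pixels inside a circle of the given diameter (in Å)."""
    radius_px = (diameter_A / pixel_size_A) / 2.0
    c = (box_px - 1) / 2.0                      # geometric center of the box
    y, x = np.ogrid[:box_px, :box_px]
    return (x - c) ** 2 + (y - c) ** 2 <= radius_px ** 2

# 200 Å at 1.0 Å/px in a 256 px box: a sensible mask
ok = circular_mask(256, 200.0, 1.0)
# same 200 Å with a bogus pixel size of 100 Å/px: a 1 px radius, almost nothing selected
tiny = circular_mask(256, 200.0, 100.0)
```

A mask that selects almost no pixels leaves downstream computations working on near-empty data, which is one plausible way an index computation could go wrong; leaving the parameter blank presumably lets the software choose a safe default.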

Many thanks,

I did search the forum and made sure there were no zombie processes. I also tried rebooting the computer, deleting /tmp/cryosparc…sock, and killing the cryosparc processes found by `ps -ax | grep "supervisor/mongod"`, then redoing the installation, but the error persists.


I ran " `cryosparcm log command_core" and found the following errors:

---------- Scheduler running ---------------
Jobs Queued: [(u'P2', u'J137')]
Licenses currently active : 0
Now trying to schedule J137
Need slots : {u'GPU': 4, u'RAM': 3, u'CPU': 2}
Need fixed : {u'SSD': True}
Master direct : False
Scheduling job to xxxxx
Failed to connect link: HTTP Error 502: Bad Gateway
Not a commercial instance - heartbeat set to 12 hours.
Launchable! – Launching.
Changed job P2.J137 status launched
Running project UID P2 job UID J137
Running job on worker type node
Running job using: /xx/cryosparc/cryosparc2_worker/bin/cryosparcw
---------- Scheduler finished ---------------
Changed job P2.J137 status started
Changed job P2.J137 status running
Changed job P2.J137 status failed

Many thanks,

It turned out to be a hardware problem.

Thanks for the update @huqi