2D Classification: ValueError: index is out of bounds for array

I manually picked 200 particles and tried to run a 2D classification job, but the job fails before a single iteration completes. All parameters were left at their defaults, other than requesting classification into 10 classes. Below is the output:

As similar "index is out of bounds" GPU-related errors have been reported in other posts, I will note that the workstation being used is running CentOS 7. Below is the output of nvidia-smi:

[screenshot: nvidia-smi output]

I also checked the output of cryosparcm joblog for the failed job, which is not very informative:

[screenshot: cryosparcm joblog output]

Two interesting observations:

  1. In the same project where this failed job occurred, I ran two other 2D classification jobs using the output of blob picker (containing several hundred thousand particles). Both jobs completed successfully, although some runtime warnings were present in the joblog.

  2. I tried to run this same 2D classification job multiple times, and the job does not always fail before the first iteration. In most cases it fails after iteration 1 with the above error, but in one instance it processed up to iteration 11 before failing with the above error.

Any advice on resolving this issue would be appreciated. Both master and worker are running v3.2.0+210413.

Hi @mchakra,

Thanks for reporting and providing all this information. We’d like to try to reproduce this ourselves on our systems. Do you think you could share the 200 particles (based on the screenshot, they’re scattered across 23 MRC files) and their corresponding .cs file? You can get them by going to the Manual Picker job, navigating to the “Outputs” tab, and clicking “Export” under the particles output result group. This will create a new folder inside the project folder with the particle MRC files, the micrograph MRC files, and their corresponding .cs and .csg files. You’ll find the full path at the bottom of the “Overview” tab.
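In the meantime, a quick local sanity check of the exported particle metadata can sometimes spot the offending index. This is only a sketch under two assumptions: that the exported .cs file is a NumPy structured array loadable with `np.load`, and that it carries per-particle fields named `blob/path` and `blob/idx` (the stack file and the slice index within it); the synthetic array below stands in for a real .cs file.

```python
import numpy as np

def find_out_of_bounds(particles, stack_sizes):
    """Return (path, idx) pairs whose slice index exceeds the stack's size.

    particles   -- structured array with assumed fields 'blob/path', 'blob/idx'
    stack_sizes -- dict mapping stack path -> number of slices in that MRC file
    """
    bad = []
    for row in particles:
        path, idx = str(row["blob/path"]), int(row["blob/idx"])
        if idx >= stack_sizes.get(path, 0):
            bad.append((path, idx))
    return bad

# Synthetic stand-in for np.load("particles.cs"); field names are assumptions.
particles = np.array(
    [("stack_a.mrc", 0), ("stack_a.mrc", 3), ("stack_b.mrc", 1)],
    dtype=[("blob/path", "U32"), ("blob/idx", "<u4")],
)
stack_sizes = {"stack_a.mrc": 3, "stack_b.mrc": 2}  # slices per stack

print(find_out_of_bounds(particles, stack_sizes))  # → [('stack_a.mrc', 3)]
```

With real data, the stack sizes would come from the MRC headers (e.g. via the `mrcfile` package) rather than a hand-written dict; any particle row flagged here would be a candidate cause of an "index is out of bounds" failure.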
If you’re okay to share your data, let me know, and I’ll send you some details so you can get those over to me.

Hi @stephan,

Thank you for responding and offering to look into this. I think it should be fine to share the data with you, as long as it is used only for error-reproduction purposes. I was able to export the data into the folder as you described. Please let me know how I should send it to you.

Hi @mchakra,

Definitely, all data you choose to share with us will be kept confidential and used only for the purpose of reproducing this error in order to create a bug fix. I’ll send you a message with credentials for our server, to which you can SCP the files.