TypeError: slice indices must be integers

All of a sudden I started getting the following error when running classification jobs. However, refinement jobs (homogeneous and heterogeneous) proceed just fine.

2D classification:

Traceback (most recent call last):
  File "/opt/packages/cryosparc/cryosparc-compute/sparcjob.py", line 359, in <module>
    cuda_devs = current_experiment_params['cuda_devs'])
  File "/opt/packages/cryosparc/cryosparc-compute/class2D.py", line 244, in abinit
    sigma_init_mean = n.sqrt(n.mean(sigproc.rotational_average(sigma2_init)[:radwn_max]))
TypeError: slice indices must be integers or None or have an __index__ method

Ab-initio:

Traceback (most recent call last):
  File "/opt/packages/cryosparc/cryosparc-compute/sparcjob.py", line 350, in <module>
    cuda_devs = current_experiment_params['cuda_devs'])
  File "/opt/packages/cryosparc/cryosparc-compute/abinit_fast.py", line 386, in abinit
    eps = 1.0/sigproc.rotational_average(H, n.round(radwn))[1:].max()
  File "/opt/packages/cryosparc/cryosparc-compute/sigproc.py", line 283, in rotational_average
    raps = n.bincount(ir, weights=(1-f)*M, minlength=maxRadius) + \
TypeError: 'numpy.float64' object cannot be interpreted as an index
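
For reference, a minimal sketch (with a made-up value standing in for radwn) of what both tracebacks boil down to: on numpy 1.12 and later a numpy float can no longer be used as a slice index or as bincount’s minlength, while 1.11.x still accepted it (at most with a DeprecationWarning):

import numpy as n

a = n.arange(16, dtype=n.float32)
radwn_max = n.round(6.7)   # a numpy.float64, not a python int

# On numpy >= 1.12 this raises:
#   TypeError: slice indices must be integers or None or have an __index__ method
try:
    print(a[:radwn_max])
except TypeError as e:
    print(e)

# The ab-initio traceback is the same issue via bincount's minlength:
#   TypeError: 'numpy.float64' object cannot be interpreted as an index
try:
    print(n.bincount(n.array([0, 1, 1]), minlength=radwn_max))
except TypeError as e:
    print(e)

# Casting to a plain int avoids both errors:
print(a[:int(radwn_max)])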

I was running jobs last night and everything was fine. Re-running those same jobs now results in this error. Since jobs that worked before now fail, I don’t think it is an issue with the input.

Any suggestions on what to try, what might be causing this error, or how to troubleshoot it?

Thanks!

-Tom

This error starts right after the start of Iteration 0.

Would altering some of the system’s python (2.7.12) libraries have caused this?

I thought cryosparc used its own anaconda2 libraries for its purposes.

Hi @t.laughlin,

I believe this is caused by a change in the numpy version. Can you check which numpy version cryoSPARC will use by doing the following:

eval $(cryosparc env)
python -c "import numpy; print numpy.__version__"

It should be 1.11.2.
CryoSPARC does use its own python installation, but if you have PYTHONPATH set, there can still be accidental linking to packages that were installed system-wide or for other software packages (EMAN2 etc.).
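
If it helps, here is a quick sketch (nothing cryoSPARC-specific) for spotting sys.path entries that come from outside the python installation you are actually running; run it with the cryosparc python after eval $(cryosparc env):

# List sys.path entries that live outside the running python's prefix,
# e.g. system-wide dist-packages or paths added by other EM packages.
import sys

prefix = sys.prefix   # should be the cryosparc anaconda2 directory here
for p in sys.path:
    if p and not p.startswith(prefix):
        print('outside this installation: %s' % p)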

Ah! Thank you!
The system-wide install is 1.13.1.
The PYTHONPATH is unset at the moment though.

How do I know which numpy CryoSPARC is utilizing?

No problem :slight_smile:
You can see where the module is loaded from by doing

eval $(cryosparc env)
python -c "import numpy; print numpy.__file__"

(btw the eval line ensures that you’re loading python from the cryosparc installation - you can check with which python to make sure)
If it does turn out that somehow the numpy version in the cryosparc installation has been updated, you should be able to do

eval $(cryosparc env)
conda install numpy=1.11.2

Hopefully this will work smoothly, but if you get dependency resolution issues let me know.

Ali

Success so far on the previously failed ab-initio!

Great :slight_smile: let us know if any further issues.

Hi Ali

I have the same problem but couldn’t sort it out.
In my case, the numpy in cryoSPARC’s python is already 1.13.3.
Can you suggest a solution?

Hun

$ eval $(cryosparc env)
$ python -c "import numpy; print numpy.__version__"
1.13.3

$ eval $(cryosparc env)
$ python -c "import numpy; print numpy.__file__"
/home/.local/lib/python2.7/site-packages/numpy/__init__.pyc

$ which python
/home/cryosparc/anaconda2/bin/python

$ eval $(cryosparc env)
$ conda install numpy=1.11.2

Fetching package metadata …
Solving package specifications: .

# All requested packages already installed.
# packages in environment at /home/cryosparc/anaconda2:
#
numpy                     1.11.2                   py27_0

$ /home/cryosparc/anaconda2/bin/python
Python 2.7.14 |Anaconda, Inc.| (default, Oct 16 2017, 17:29:19)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import numpy
>>> numpy.version.version
'1.13.3'

Hi @soung,

Thanks for the clear description of the problem.
From the paths it looks like even though cryosparc is using the correct python (/home/cryosparc/anaconda2/bin/python), modules are being preferentially loaded from a different location (/home/.local/lib/python2.7/site-packages). All the python modules installed in the cryosparc python installation should be in /home/cryosparc/anaconda2.

Can you check if your $PYTHONPATH is being set somehow? There are some python EM packages that modify your shell startup files and set $PYTHONPATH directly, which can lead to this kind of problem.
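
A rough way to check is to scan your shell startup files for anything that touches PYTHONPATH (the list of files below is just a guess for a typical bash setup):

# Print any line mentioning PYTHONPATH in common bash startup files.
import os

for rc in ('.bashrc', '.bash_profile', '.profile'):
    path = os.path.join(os.path.expanduser('~'), rc)
    if not os.path.exists(path):
        continue
    with open(path) as f:
        for line in f:
            if 'PYTHONPATH' in line:
                print('%s: %s' % (rc, line.rstrip()))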

Ali

For reference, this looks like a clear description of the problem:

Probably not worth reading the whole thing, but one suggested solution that may work for you (assuming your $PYTHONPATH being set was not the issue) is to add this line to the top of your /home/cryosparc/config.sh file:

export PYTHONNOUSERSITE=true

Note that eval $(cryosparc env) will not cause this to be set in a new shell, so you should directly set this in a shell if you are going to try to use the cryosparc python. If you do export the variable in a shell and then do

eval $(cryosparc env)
python -c "import numpy; print numpy.version"

then you should get the correct 1.11.2 version.
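
If you want to double-check that the per-user site directory (~/.local/lib/python2.7/site-packages) really is being skipped, here is a small sketch using the standard site module:

# With PYTHONNOUSERSITE set, python disables the per-user site-packages
# directory, so it should not appear on sys.path.
import os, site, sys

print(os.environ.get('PYTHONNOUSERSITE'))       # e.g. 'true' if exported
print(site.ENABLE_USER_SITE)                    # False when user site is disabled
print(site.getusersitepackages() in sys.path)   # should be False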

Let me know if this works.

Worked excellently for me, at least. Thank you!

Hi all,

Unfortunately this solution has not worked for me. I have a cryosparc installation on Ubuntu 16.04 LTS. I persistently get the following error:

  Traceback (most recent call last):
  File "/home/sparc/cryosparc/cryosparc-compute/sparcjob.py", line 539, in <module>
    cuda_devs = current_experiment_params['cuda_devs'])
  File "/home/sparc/cryosparc/cryosparc-compute/class2D.py", line 244, in abinit
    sigma_init_mean = n.sqrt(n.mean(sigproc.rotational_average(sigma2_init)[:radwn_max]))
  TypeError: slice indices must be integers or None or have an __index__ method

Initially cryosparc was installed on my account; in the process of troubleshooting I created a new user account to hold it so there would be no conflicts in my .local directory.

I included
export PYTHONNOUSERSITE=true
in my cryosparc/config.sh script.

If I run:
python -c 'import numpy; print numpy.__version__,numpy.__file__'
upon login I get:
1.13.3 /usr/local/lib/python2.7/dist-packages/numpy/__init__.py

If I then eval $(cryosparc env) and run the same command I get:
1.11.2 /home/sparc/cryosparc/anaconda2/lib/python2.7/site-packages/numpy/__init__.pyc
so all looks good.

However I still get the numpy error associated with post-1.11 indexing using non-integers.

After a lot of messing around with PYTHONPATH and pip and getting nowhere, I edited the __init__.py file of the three numpy modules on my system: the global /usr/local/lib package, the cryosparc/anaconda2/lib/python2.7 package, and the .local/lib/python2.7 package, adding a print("Version %s" % version) statement at the end (it is not possible to edit the cryosparc .py files in this way without breaking the license).

When I then ran 2D classification from the web interface and looked at the output in the relevant cryosparc/run/sparcjobs/xxxxxxxxxxxxxxxx/stdout file, it was the global 1.13.3 package that was being accessed, not the anaconda (or indeed ~/.local/lib) 1.11.2 package.
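
In case it is useful to anyone else, the same check can be made without editing numpy itself, using a tiny probe script (the name which_numpy.py is just a suggestion) run with whichever interpreter is under suspicion:

# which_numpy.py: report which interpreter and numpy are actually in use.
import os
import sys
import numpy

print('python     : %s' % sys.executable)
print('PYTHONPATH : %s' % os.environ.get('PYTHONPATH', '(not set)'))
print('numpy      : %s (%s)' % (numpy.__version__, numpy.__file__))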

Any suggestions would be more than welcome!

Thanks,
James