Traceback (most recent call last):
File "/home/user/Data/Software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1790, in run_with_except_hook
run_old(*args, **kw)
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 1108, in cryosparc_compute.engine.engine.process.work
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 389, in cryosparc_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
File "<array_function internals>", line 6, in unravel_index
ValueError: index 1065353216 is out of bounds for array with size 5376
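For reference, here is a minimal sketch (not CryoSPARC code; the array shape is made up to give 5376 elements) showing how numpy's `unravel_index` produces exactly this ValueError when the flat index it is given is larger than the array it is being mapped into:

```python
import numpy as np

# Hypothetical cross-correlation grid with 5376 elements (84 x 64),
# matching the "array with size 5376" in the traceback above.
scores = np.zeros((84, 64))

flat_index = 1065353216  # the out-of-range index reported in the log

try:
    best = np.unravel_index(flat_index, scores.shape)
except ValueError as err:
    print(err)  # -> index 1065353216 is out of bounds for array with size 5376
```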
Hello Oli @olibclarke,
No, I didn't use the latest patch.
In fact, I am a bit suspicious of my input dataset - it is Bayesian polished.
When I ran it again, I received the following error:
Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/refine/newrun.py", line 447, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
File "cryosparc_worker/cryosparc_compute/jobs/refine/newrun.py", line 448, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
File "cryosparc_worker/cryosparc_compute/jobs/ctf_refinement/run_local.py", line 166, in cryosparc_compute.jobs.ctf_refinement.run_local.full_defocus_refine
File "cryosparc_worker/cryosparc_compute/jobs/ctf_refinement/run_local.py", line 325, in cryosparc_compute.jobs.ctf_refinement.run_local.do_defocus_refine
File "cryosparc_worker/cryosparc_compute/engine/newengine.py", line 860, in cryosparc_compute.engine.newengine.EngineThread.project_model
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 309, in cryosparc_compute.engine.cuda_core.EngineBaseThread.toc
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 305, in cryosparc_compute.engine.cuda_core.EngineBaseThread.wait
pycuda._driver.LogicError: cuStreamSynchronize failed: an illegal memory access was encountered
Before this error, I also get strange empty images for the iteration (please see below).
Do you perhaps have an idea of what may be wrong?
I disabled it, but the problem still remains.
I'm not sure what is causing it.
I tried to run 2D classification with the same dataset to collect more evidence.
The 2D failed with the following error:
Traceback (most recent call last):
File "/home/user/Data/Software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1790, in run_with_except_hook
run_old(*args, **kw)
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 131, in cryosparc_compute.engine.cuda_core.GPUThread.run
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 132, in cryosparc_compute.engine.cuda_core.GPUThread.run
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 1108, in cryosparc_compute.engine.engine.process.work
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 389, in cryosparc_compute.engine.engine.EngineThread.find_and_set_best_pose_shift
File "<array_function internals>", line 6, in unravel_index
ValueError: index 1072614080 is out of bounds for array with size 336
Hi @Dmitry
I got the same error when using Bayesian-polished particles for Non-Uniform Refinement. Could you give me clearer instructions on how you fixed this problem? Thanks!
Sincerely,
duyx
I didn’t find a clear reason for that error.
My guess is that the issue is connected to the box size you used and the amount of RAM your device has.
Try re-extracting the particles with a smaller box size and repeat the 3D refinement.
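To give a sense of scale (a rough back-of-the-envelope estimate, not CryoSPARC's actual memory model), the memory needed for each reconstruction volume grows with the cube of the box size, so even a modest reduction in box size frees a lot of memory:

```python
import numpy as np

def volume_gib(box_size, dtype=np.complex64):
    # Approximate size of a single box_size^3 volume held in memory.
    return box_size ** 3 * np.dtype(dtype).itemsize / 1024 ** 3

for box in (512, 384, 256):
    print(f"box {box}: ~{volume_gib(box):.2f} GiB per volume")
```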