Patch CTF estimation error in CryoSPARC v3.1.0

I am seeing a similar error (also v3.1.0) on a data set that processed fine in v2.15. I switched to CTFFIND4 and everything processed without issue.

[CPU: 723.9 MB] Traceback (most recent call last):
File “/home/cryosparc_user/software/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/home/cryosparc_user/software/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/home/cryosparc_user/software/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/home/cryosparc_user/software/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 756, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 621, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217.EPA_suppress_past_k2k4crossover_point
IndexError: index -1 is out of bounds for axis 0 with size 0

[CPU: 718.7 MB] Traceback (most recent call last):
File “/home/cryosparc_user/software/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/home/cryosparc_user/software/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/home/cryosparc_user/software/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/home/cryosparc_user/software/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 27237 terminating self.

[CPU: 236.0 MB] Outputting partial results now…

[CPU: 234.7 MB] Traceback (most recent call last):
File “cryosparc_worker/cryosparc_compute/run.py”, line 84, in cryosparc_compute.run.main
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 256, in cryosparc_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 27237 has terminated unexpectedly!
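
For what it's worth, the `IndexError: index -1 is out of bounds for axis 0 with size 0` is the message NumPy raises when code takes the last element of an array that turned out to be empty, e.g. when a search criterion matches nothing for a particular micrograph. A minimal illustration with hypothetical data (not CryoSPARC's actual internals, which are compiled):

```python
import numpy as np

# Hypothetical 1D EPA-like curve; suppose the code looks for points
# past some crossover criterion (threshold chosen for illustration).
epa = np.array([0.9, 0.7, 0.4, 0.2])
candidates = np.flatnonzero(epa < 0.1)  # empty: nothing matches

try:
    candidates[-1]  # same failure mode as in the traceback above
except IndexError as e:
    print(e)  # "index -1 is out of bounds for axis 0 with size 0"

# Defensive pattern: fall back when the criterion matches nothing.
last = candidates[-1] if candidates.size else epa.size - 1
```

That would also explain why the failure is data-set dependent: only some micrographs produce a curve where the criterion never triggers.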

I am experiencing a similar error:

[CPU: 862.0 MB] Traceback (most recent call last):
File “/apps/user/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/apps/user/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/apps/user/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/apps/user/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 756, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 621, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217.EPA_suppress_past_k2k4crossover_point
IndexError: index -1 is out of bounds for axis 0 with size 0
[CPU: 410.1 MB] Outputting partial results now…
[CPU: 410.2 MB] Traceback (most recent call last):
File “cryosparc_worker/cryosparc_compute/run.py”, line 84, in cryosparc_compute.run.main
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 256, in cryosparc_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 29647 has terminated unexpectedly!

Hello, is there no answer for this? Between this thread and the other one with the same problem, seven people have reported this issue. @apunjani

Same issue here! The process hangs before the end of the CTF estimation no matter how many micrographs you have. I tried changing from 1000 to 100 to 10 to 1 micrographs, and every time it crashes after an unusually long “process” step.

Hi everyone,

This definitely sounds like a bug in the new algorithm; thanks for bringing it to our attention. I’ll look into it and reply with more information or additional questions soon.

Harris

Also @Adrian, @stavros, @jezsurf, @MHB, @igor can you try running the job with the “Classic mode” switch on and see if it runs through completely?

Hi all,

Actually, I already did that and it finishes with no problems.

Igor

After switching to “Classic mode” the jobs complete without any errors.

Same here: it works fine with the classic mode toggle on. It also seems to be data-set dependent; a separate data set ran fine without the toggle.

Dear all, a patch is now available which fixes this issue. Please see: Patch 210216 available for v3.1.0 - Fixes CPU extraction bug and Patch CTF bug

After running the patch, the job failed after 100 micrographs; it only worked on the first 100. Please see all the error messages below:

[CPU: 616.3 MB] Traceback (most recent call last):
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
run_old(*args, **kw)
File "/cryosparc_3_1_0/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 186, in thread_work
work = processor.exec(item)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 43, in exec
return self.process(item)
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 840, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 789, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217.compute_cc_fit
IndexError: index 1 is out of bounds for axis 0 with size 1

[CPU: 733.1 MB] Traceback (most recent call last):
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
run_old(*args, **kw)
File "/cryosparc_3_1_0/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 186, in thread_work
work = processor.exec(item)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 43, in exec
return self.process(item)
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 19785 terminating self.

[CPU: 729.5 MB] Traceback (most recent call last):
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
run_old(*args, **kw)
File "/cryosparc_3_1_0/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 186, in thread_work
work = processor.exec(item)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 43, in exec
return self.process(item)
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 19786 terminating self.

[CPU: 727.9 MB] Traceback (most recent call last):
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
run_old(*args, **kw)
File "/cryosparc_3_1_0/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 186, in thread_work
work = processor.exec(item)
File "/cryosparc_3_1_0/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 43, in exec
return self.process(item)
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 19784 terminating self.

[CPU: 389.8 MB] Outputting partial results now…

[CPU: 387.1 MB] Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 256, in cryosparc_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 19783 has terminated unexpectedly!

Hi @jiskander,

I have sent you a private message about this

-Harris

Hi, I’m trying to run a Patch CTF estimation job and get the following error. I tried running in classic mode, but the problem persists:
License is valid.

Launching job on lane default target bmbcryoem.wustl.edu …

Running job on master node hostname bmbcryoem.wustl.edu

[CPU: 85.5 MB] Project P41 Job J16 Started

[CPU: 85.7 MB] Master running v3.1.0, worker running v3.1.0

[CPU: 85.9 MB] Running on lane default

[CPU: 85.9 MB] Resources allocated:

[CPU: 85.9 MB] Worker: bmbcryoem.wustl.edu

[CPU: 85.9 MB] CPU : [0, 1, 2, 3, 4, 5, 6, 7]

[CPU: 85.9 MB] GPU : [0, 1, 2, 3]

[CPU: 85.9 MB] RAM : [0]

[CPU: 85.9 MB] SSD : False

[CPU: 85.9 MB] --------------------------------------------------------------

[CPU: 85.9 MB] Importing job module for job type patch_ctf_estimation_multi…

[CPU: 197.6 MB] Job ready to run

[CPU: 197.6 MB] ***************************************************************

[CPU: 201.0 MB] Job will process this many micrographs: 2010

[CPU: 201.3 MB] parent process is 19950

[CPU: 150.2 MB] Calling CUDA init from 20296

[CPU: 150.1 MB] Calling CUDA init from 20298

[CPU: 150.1 MB] Calling CUDA init from 20297

[CPU: 150.1 MB] Calling CUDA init from 20300

[CPU: 303.8 MB] – 3.0: processing 3 of 2010: J6/motioncorrected/009977155230713333884_FoilHole_4376691_Data_4376061_4376063_20210205_142142_Fractions_patch_aligned.mrc
loading /data/cryosparc/projects/chaddaa/P41/J6/motioncorrected/009977155230713333884_FoilHole_4376691_Data_4376061_4376063_20210205_142142_Fractions_patch_aligned.mrc
Loading raw mic data from J6/motioncorrected/009977155230713333884_FoilHole_4376691_Data_4376061_4376063_20210205_142142_Fractions_patch_aligned.mrc …
Done in 0.05s
Processing …

[CPU: 303.8 MB] – 1.0: processing 1 of 2010: J6/motioncorrected/001593063183511890750_FoilHole_4376688_Data_4376061_4376063_20210205_141928_Fractions_patch_aligned.mrc
loading /data/cryosparc/projects/chaddaa/P41/J6/motioncorrected/001593063183511890750_FoilHole_4376688_Data_4376061_4376063_20210205_141928_Fractions_patch_aligned.mrc
Loading raw mic data from J6/motioncorrected/001593063183511890750_FoilHole_4376688_Data_4376061_4376063_20210205_141928_Fractions_patch_aligned.mrc …
Done in 0.05s
Processing …

[CPU: 303.8 MB] – 0.0: processing 0 of 2010: J6/motioncorrected/011081774107919922061_FoilHole_4376687_Data_4376061_4376063_20210205_141824_Fractions_patch_aligned.mrc
loading /data/cryosparc/projects/chaddaa/P41/J6/motioncorrected/011081774107919922061_FoilHole_4376687_Data_4376061_4376063_20210205_141824_Fractions_patch_aligned.mrc
Loading raw mic data from J6/motioncorrected/011081774107919922061_FoilHole_4376687_Data_4376061_4376063_20210205_141824_Fractions_patch_aligned.mrc …
Done in 0.05s
Processing …

[CPU: 303.8 MB] – 2.0: processing 2 of 2010: J6/motioncorrected/001103279965827597086_FoilHole_4376690_Data_4376061_4376063_20210205_142036_Fractions_patch_aligned.mrc
loading /data/cryosparc/projects/chaddaa/P41/J6/motioncorrected/001103279965827597086_FoilHole_4376690_Data_4376061_4376063_20210205_142036_Fractions_patch_aligned.mrc
Loading raw mic data from J6/motioncorrected/001103279965827597086_FoilHole_4376690_Data_4376061_4376063_20210205_142036_Fractions_patch_aligned.mrc …
Done in 0.05s
Processing …

[CPU: 426.7 MB] Traceback (most recent call last):
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1009, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1059, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/engine/cuda_core.py”, line 339, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py”, line 210, in init
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory

[CPU: 426.4 MB] Traceback (most recent call last):
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1009, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1059, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/engine/cuda_core.py”, line 339, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py”, line 210, in init
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory

[CPU: 426.6 MB] Traceback (most recent call last):
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1009, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1059, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/engine/cuda_core.py”, line 339, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py”, line 210, in init
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory

[CPU: 426.7 MB] Traceback (most recent call last):
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1009, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 1059, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_classic
File “cryosparc_worker/cryosparc_compute/engine/cuda_core.py”, line 339, in cryosparc_compute.engine.cuda_core.EngineBaseThread.ensure_allocated
File “/data/cryosparc/software/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py”, line 210, in init
self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory
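
In case it helps triage: a back-of-envelope estimate (the dimensions and buffer count below are assumptions for illustration, not CryoSPARC's actual allocation scheme) suggests that the FFT working buffers for a single micrograph are far smaller than an 11 GB card, so a `cuMemAlloc` failure here often means the GPU is shared with another process or job (worth checking with `nvidia-smi`):

```python
# Rough VRAM estimate for one micrograph's FFT working buffers.
# All numbers are illustrative assumptions, not CryoSPARC's actual
# allocation scheme.

def fft_buffer_bytes(nx, ny, n_buffers=2, bytes_per_value=8):
    """complex64 = 8 bytes per value; n_buffers covers input + output."""
    return nx * ny * bytes_per_value * n_buffers

# Example: a K3-sized micrograph (assumed 5760 x 4092 pixels).
need = fft_buffer_bytes(5760, 4092)
print(f"~{need / 2**30:.2f} GiB")  # well under an 11 GiB card
```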

Hi @Ankita7,

What model of GPU are you using? It sounds like you might not have enough VRAM.

Harris

Hi Harris,

I’m using a GeForce GTX 1080 Ti (11 GB).

Thanks

Thank you for this thread. I am also experiencing the same error and have restarted the job in classic mode.

Azara

We pulled the most recent version, which I thought included the patch for this issue, but Patch CTF is still failing with the error reported above; switching to classic mode runs the data through with no problem. Is there a different way to get the patch?

Hi @hansenbry, please see this post for instructions on how to apply the patch; it is a separate process from the normal update: Patch 210216 available for v3.1.0 - Fixes CPU extraction bug and Patch CTF bug
Thanks!

@spunjani Thanks so much. I figured there was some type of documentation like this that I was just missing!

Hi Cryosparc-team!

Some months ago I had issues running Patch CTF estimation after updating to v3.1.0. The problem appeared to be solved after installing patch 210216, but I am now experiencing a similar issue again (which, again, I do not encounter when running in classic mode):

[CPU: 570.6 MB] Traceback (most recent call last):
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 840, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 788, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217.compute_cc_fit
IndexError: index 0 is out of bounds for axis 0 with size 0

[CPU: 573.9 MB] Traceback (most recent call last):
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 153984 terminating self.

[CPU: 573.3 MB] Traceback (most recent call last):
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 153985 terminating self.

[CPU: 558.2 MB] Traceback (most recent call last):
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/mnt/5977bbc6-b464-440c-b12a-b42e8e335565/cryosparc/cryosparc2_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 209, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
AssertionError: Job is not in running state - worker thread with PID 153983 terminating self.

[CPU: 407.4 MB] Outputting partial results now…
[CPU: 396.9 MB] Traceback (most recent call last):
File “cryosparc_worker/cryosparc_compute/run.py”, line 84, in cryosparc_compute.run.main
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 256, in cryosparc_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 153982 has terminated unexpectedly!

Any feedback?

Thanks in advance,
Igor