Error in CTF estimation

Hello,
I got the following error from “Patch CTF estimation (multi)”. Do I have some micrographs that are not the same size as the rest, as suggested by the last line of the error, “total size of new array must be unchanged”?

This error popped up after processing 10,000 micrographs.

[CPU: 1.02 GB] Traceback (most recent call last):
File “cryosparc2_compute/jobs/runcommon.py”, line 1685, in run_with_except_hook
run_old(*args, **kw)
File “/home/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py”, line 754, in run
self.__target(*self.__args, **self.__kwargs)
File “cryosparc2_compute/jobs/pipeline.py”, line 165, in thread_work
work = processor.process(item)
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 108, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 111, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc2_compute/blobio/mrc.py”, line 135, in read_mrc
data = read_mrc_data(file_obj, header, start_page, end_page, out)
File “cryosparc2_compute/blobio/mrc.py”, line 98, in read_mrc_data
data = n.fromfile(file_obj, dtype=dtype, count= num_pages * ny * nx).reshape(num_pages, ny, nx)
ValueError: total size of new array must be unchanged

Usually this means that the file being processed is truncated. If the header says to expect 40 frames (for example), cryosparc will try to read 40 frames; if the file contains fewer than that, this error will occur. Try removing the offending file and re-running the job. It might also help to look at the sizes of all the files in that data set, and if you see any that are smaller than they should be, remove them too.
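As an illustration (this is not part of cryoSPARC, just a sketch), a small script along these lines can flag truncated files by comparing each file’s size on disk with the size implied by its MRC header. It assumes little-endian MRC2014-style files, and the glob path is a placeholder you would point at your dose-weighted micrographs:

import os
import glob
import numpy as np

# Bytes per pixel for common MRC modes (0=int8, 1=int16, 2=float32, 6=uint16, 12=float16)
MODE_BYTES = {0: 1, 1: 2, 2: 4, 6: 2, 12: 2}

def expected_mrc_size(path):
    # Read the 1024-byte main header as 256 little-endian int32 words
    with open(path, "rb") as f:
        header = np.fromfile(f, dtype="<i4", count=256)
    nx, ny, nz, mode = (int(v) for v in header[:4])
    nsymbt = int(header[23])  # extended header size in bytes (header word 24)
    if mode not in MODE_BYTES:
        return None
    return 1024 + nsymbt + nx * ny * nz * MODE_BYTES[mode]

# Placeholder path -- adjust to wherever the micrographs live
for path in sorted(glob.glob("/path/to/micrographs/*.mrc")):
    expected = expected_mrc_size(path)
    actual = os.path.getsize(path)
    if expected is None:
        print(path, ": unrecognised MRC mode, skipped")
    elif actual < expected:
        print(path, ": truncated,", actual, "bytes on disk but", expected, "expected")

Any file reported as truncated is a candidate for removal before re-running the job.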

Harris

These are MotionCor dose-weighted micrographs. I sorted them by file size, found one with an odd size and deleted that file, but I still get the same error.

[CPU: 269.9 MB] --------------------------------------------------------------

[CPU: 269.9 MB] Processed 0 of 10651 micrographs in 15.02s

[CPU: 269.9 MB] Outputting partial results now…

[CPU: 258.2 MB] Traceback (most recent call last):
File “cryosparc2_worker/cryosparc2_compute/run.py”, line 85, in cryosparc2_compute.run.main
File “cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py”, line 258, in cryosparc2_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 74635 has terminated unexpectedly!

Hi,

Could you post the output of “cryosparcm joblog Pxx Jyy” where xx and yy are the project and job number, respectively? The second error you posted could be a different problem from before.

-Harris

I got a similar error in a test run using the dataset from EMPIAR-10146. The import and motion correction jobs ran successfully, but I got an error when running “Patch CTF estimation (multi)”.
After processing a few images, the following error came up:
Traceback (most recent call last):
File “/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py”, line 1726, in run_with_except_hook
run_old(*args, **kw)
File “/opt/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py”, line 870, in run
self._target(*self._args, **self._kwargs)
File “/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/pipeline.py”, line 186, in thread_work
work = processor.exec(item)
File “/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/pipeline.py”, line 43, in exec
return self.process(item)
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py”, line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 832, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
File “cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py”, line 782, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217.compute_cc_fit
ValueError: cannot convert float NaN to integer

The cryosparcm joblog output was as follows:

Project P4 Job J3
Master SPGPU Port 39002

========= monitor process now starting main process
MAINPROCESS PID 3018347
========= monitor process now waiting for main process
MAIN PID 3018347
ctf_estimation.run cryosparc_compute.jobs.jobregister


Running job on hostname %s SPGPU
Allocated Resources : {‘fixed’: {‘SSD’: False}, ‘hostname’: ‘SPGPU’, ‘lane’: ‘default’, ‘lane_type’: ‘default’, ‘license’: True, ‘licenses_acquired’: 4, ‘slots’: {‘CPU’: [0, 1, 2, 3, 4, 5, 6, 7], ‘GPU’: [0, 1, 2, 3], ‘RAM’: [0]}, ‘target’: {‘cache_path’: ‘/ssd/cryosparc_cache’, ‘cache_quota_mb’: None, ‘cache_reserve_mb’: 10000, ‘desc’: None, ‘gpus’: [{‘id’: 0, ‘mem’: 25447170048, ‘name’: ‘GeForce RTX 3090’}, {‘id’: 1, ‘mem’: 25445793792, ‘name’: ‘GeForce RTX 3090’}, {‘id’: 2, ‘mem’: 25447170048, ‘name’: ‘GeForce RTX 3090’}, {‘id’: 3, ‘mem’: 25447170048, ‘name’: ‘GeForce RTX 3090’}], ‘hostname’: ‘SPGPU’, ‘lane’: ‘default’, ‘monitor_port’: None, ‘name’: ‘SPGPU’, ‘resource_fixed’: {‘SSD’: True}, ‘resource_slots’: {‘CPU’: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63], ‘GPU’: [0, 1, 2, 3], ‘RAM’: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}, ‘ssh_str’: ‘spuser@SPGPU’, ‘title’: ‘Worker node SPGPU’, ‘type’: ‘node’, ‘worker_bin_path’: ‘/opt/cryosparc/cryosparc_worker/bin/cryosparcw’}}
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**** handle exception rc
set status to failed
========= main process now complete.
========= monitor process now complete.

As a side-by-side check, a test run on the T20S dataset was successful. Do I need to change any settings to run the EMPIAR-10146 data?

Thanks

Hi @yeyuzhuangyuan,

If I recall correctly, EMPIAR-10146 has a spherical aberration of 0.0. That setting has been known to create issues (this will be fixed at some point). Can you try processing with Cs = 0.01 and see if that works?
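If re-importing the whole data set is inconvenient, one unofficial workaround (just a sketch, not a built-in workflow) is to patch the Cs value directly in the exported exposure metadata, since .cs files are plain numpy structured arrays. The filename and the field name mscope_params/cs_mm below are assumptions; check dset.dtype.names on your own file before relying on them:

import numpy as np

path = "exposures_from_patch_motion.cs"   # hypothetical filename -- use your exported .cs file
cs_field = "mscope_params/cs_mm"          # assumed field name; verify against dset.dtype.names

dset = np.load(path)                      # .cs files load as numpy structured arrays
if cs_field in dset.dtype.names:
    print("current Cs values (mm):", np.unique(dset[cs_field]))
    dset[cs_field] = 0.01                 # small non-zero Cs, as suggested above
    with open(path, "wb") as f:           # write via a file object so numpy keeps the .cs extension
        np.save(f, dset)
else:
    print("field not found; available fields are:", dset.dtype.names)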

–Harris

Tried it and it works, thanks.


Hello,

I’d like to report that this error in Patch CTF has not been fixed yet in v3.3.2.

The movies were collected with a Cs corrector and imported with Cs=0. They were aligned via Patch Motion, and CTFFIND ran fine for these mics.

In our case, re-importing with Cs=0.01 did not solve the issue, but re-importing with Cs=0.001 worked.

Best,
Kookjoo

This issue has been addressed in yesterday’s patch.
