I got a similar error when doing a test run with the dataset from EMPIAR-10146. The import and motion correction jobs ran successfully, but I got an error when running "Patch CTF Estimation (multi)".
After processing a few images, the following error came up:
Traceback (most recent call last):
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
    run_old(*args, **kw)
  File "/opt/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 186, in thread_work
    work = processor.exec(item)
  File "/opt/cryosparc/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 43, in exec
    return self.process(item)
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 832, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 782, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217.compute_cc_fit
ValueError: cannot convert float NaN to integer
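For what it's worth, my reading of the last frame is that some value produced by the cross-correlation fit comes out as NaN and is then cast to an integer. The snippet below is only a minimal sketch of how Python produces this exact message; it is not cryoSPARC's actual code, and the variable name is made up:

import numpy as np

# A NaN result from a failed fit, cast to int, raises exactly this ValueError.
cc_peak = np.nan  # stand-in for a cross-correlation fit value that came out as NaN
try:
    peak_index = int(cc_peak)
except ValueError as err:
    print(err)  # prints: cannot convert float NaN to integer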
The cryosparcm joblog output was as follows:
Project P4 Job J3
Master SPGPU Port 39002
========= monitor process now starting main process
MAINPROCESS PID 3018347
========= monitor process now waiting for main process
MAIN PID 3018347
ctf_estimation.run cryosparc_compute.jobs.jobregister
Running job on hostname %s SPGPU
Allocated Resources : {'fixed': {'SSD': False}, 'hostname': 'SPGPU', 'lane': 'default', 'lane_type': 'default', 'license': True, 'licenses_acquired': 4, 'slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7], 'GPU': [0, 1, 2, 3], 'RAM': [0]}, 'target': {'cache_path': '/ssd/cryosparc_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 25447170048, 'name': 'GeForce RTX 3090'}, {'id': 1, 'mem': 25445793792, 'name': 'GeForce RTX 3090'}, {'id': 2, 'mem': 25447170048, 'name': 'GeForce RTX 3090'}, {'id': 3, 'mem': 25447170048, 'name': 'GeForce RTX 3090'}], 'hostname': 'SPGPU', 'lane': 'default', 'monitor_port': None, 'name': 'SPGPU', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}, 'ssh_str': 'spuser@SPGPU', 'title': 'Worker node SPGPU', 'type': 'node', 'worker_bin_path': '/opt/cryosparc/cryosparc_worker/bin/cryosparcw'}}
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**custom thread exception hook caught something
**** handle exception rc
set status to failed
**** handle exception rc
set status to failed
========= main process now complete.
========= monitor process now complete.
As a side-by-side comparison, a test run on the T20S dataset completed successfully. Do I need to change any settings when running the EMPIAR-10146 data?
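In the meantime I was thinking of spot-checking the motion-corrected micrographs for non-finite pixel values before re-running Patch CTF. This is just a rough sketch with a placeholder path (not an official check), using mrcfile and numpy:

import glob

import mrcfile
import numpy as np

# Placeholder path: point this at the motion correction job's output directory.
for path in sorted(glob.glob("/path/to/motioncorrected/*.mrc")):
    with mrcfile.open(path, permissive=True) as mrc:
        data = np.asarray(mrc.data, dtype=np.float32)
        if not np.all(np.isfinite(data)):
            print("non-finite pixel values in", path)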
Thanks