Hello CryoSPARC Team,
CryoSPARC version: v4.4.1
I was running a Homogeneous Refinement job and encountered the following error:
Traceback (most recent call last):
File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 694, in cryosparc_master.cryosparc_compute.jobs.refine.newrun.run_homo_refine
TypeError: '>' not supported between instances of 'NoneType' and 'int'
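If I read the error correctly, somewhere in run_homo_refine a value that is still None gets compared against an integer. A minimal sketch of the Python-level failure, purely illustrative since the actual variable is not visible in the compiled traceback:

    # Illustrative only: "num_subsets" is a made-up name, not the real
    # variable inside run_homo_refine.
    num_subsets = None  # a parameter or statistic that was never set

    try:
        if num_subsets > 0:  # NoneType vs. int comparison
            pass
    except TypeError as err:
        print(err)  # '>' not supported between instances of 'NoneType' and 'int'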
The inputs for the refinement job were the particles and the volume from an Ab-Initio Reconstruction job.
The refinement job completed the first iteration successfully, at least as far as I can tell from the Event Log.
I would have liked to attach the exported Event Log, but PDF attachments are apparently not supported in the forum editor, which is unfortunate since the export is a convenient way to share it.
Below is the job log:
================= CRYOSPARCW ======= 2024-01-10 00:41:57.772570 =========
Project P117 Job J226
Master ...
===========================================================================
========= monitor process now starting main process at 2024-01-10 00:41:57.772601
MAINPROCESS PID 517744
MAIN PID 517744
refine.newrun cryosparc_compute.jobs.jobregister
========= monitor process now waiting for main process
gpufft: creating new cufft plan (plan id 0 pid 517744)
gpu_id 1
ndims 2
dims 360 360 0
inembed 360 362 0
istride 1
idist 130320
onembed 360 181 0
ostride 1
odist 65160
batch 500
type R2C
wkspc automatic
Python traceback:
gpufft: creating new cufft plan (plan id 1 pid 517744)
gpu_id 1
ndims 2
dims 360 360 0
inembed 360 362 0
istride 1
idist 130320
onembed 360 181 0
ostride 1
odist 65160
batch 180
type R2C
wkspc automatic
Python traceback:
========= sending heartbeat at 2024-01-10 00:42:10.648990
gpufft: creating new cufft plan (plan id 2 pid 517744)
gpu_id 1
ndims 2
dims 360 360 0
inembed 360 360 0
istride 1
idist 129600
onembed 360 360 0
ostride 1
odist 129600
batch 500
type C2C
wkspc automatic
Python traceback:
***************************************************************
Running job J226 of type homo_refine_new
Running job on hostname %s erc3050.erc.kfa-juelich.de
Allocated Resources : {'fixed': {'SSD': True}, 'hostname': 'erc3050.erc.kfa-juelich.de', 'lane': 'ERC3050', 'lane_type': 'node', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [4, 5, 6, 7], 'GPU': [1], 'RAM': [3, 4, 5]}, 'target': {'cache_path': '/Local/cryosparc/cryosparc_scratch/', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 8359378944, 'name': 'NVIDIA GeForce RTX 2070 SUPER'}, {'id': 1, 'mem': 8361738240, 'name': 'NVIDIA GeForce RTX 2070 SUPER'}], 'hostname': 'erc3050.erc.kfa-juelich.de', 'lane': 'ERC3050', 'monitor_port': None, 'name': 'erc3050.erc.kfa-juelich.de', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7], 'GPU': [0, 1], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7]}, 'ssh_str': 'mann@erc3050.erc.kfa-juelich.de', 'title': 'Worker node erc3050.erc.kfa-juelich.de', 'type': 'node', 'worker_bin_path': '/Local/cryosparc/cryosparc2_worker_remote/bin/cryosparcw'}}
HOST ALLOCATION FUNCTION: using numba.cuda.pinned_array
========= sending heartbeat at 2024-01-10 00:42:20.665419
gpufft: creating new cufft plan (plan id 3 pid 517744)
gpu_id 1
ndims 3
dims 360 360 360
inembed 360 360 362
istride 1
idist 46915200
onembed 360 360 181
ostride 1
odist 23457600
batch 1
type R2C
wkspc automatic
Python traceback:
gpufft: creating new cufft plan (plan id 4 pid 517744)
gpu_id 1
ndims 2
dims 360 360 0
inembed 360 362 0
istride 1
idist 130320
onembed 360 181 0
ostride 1
odist 65160
batch 500
type R2C
wkspc automatic
Python traceback:
gpufft: creating new cufft plan (plan id 5 pid 517744)
gpu_id 1
ndims 2
dims 360 360 0
inembed 360 362 0
istride 1
idist 130320
onembed 360 181 0
ostride 1
odist 65160
batch 341
type R2C
wkspc automatic
Python traceback:
gpufft: creating new cufft plan (plan id 6 pid 517744)
gpu_id 1
ndims 2
dims 360 360 0
inembed 360 362 0
istride 1
idist 130320
onembed 360 181 0
ostride 1
odist 65160
batch 340
type R2C
wkspc automatic
Python traceback:
========= sending heartbeat at 2024-01-10 00:42:30.680966
gpufft: creating new cufft plan (plan id 7 pid 517744)
gpu_id 1
ndims 3
dims 360 360 360
inembed 360 360 362
istride 1
idist 46915200
onembed 360 360 181
ostride 1
odist 23457600
batch 1
type R2C
wkspc manual
Python traceback:
gpufft: creating new cufft plan (plan id 8 pid 517744)
gpu_id 1
ndims 3
dims 180 180 180
inembed 180 180 91
istride 1
idist 2948400
onembed 180 180 182
ostride 1
odist 5896800
batch 1
type C2R
wkspc manual
Python traceback:
========= sending heartbeat at 2024-01-10 00:42:40.696622
========= sending heartbeat at 2024-01-10 00:42:50.712632
========= sending heartbeat at 2024-01-10 00:43:00.730941
========= sending heartbeat at 2024-01-10 00:43:10.749966
========= sending heartbeat at 2024-01-10 00:43:20.765554
========= sending heartbeat at 2024-01-10 00:43:30.779964
**** handle exception rc
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/jobs/motioncorrection/mic_utils.py:95: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
@jit(nogil=True)
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/micrographs.py:563: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def contrast_normalization(arr_bin, tile_size = 128):
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
self._target(*self._args, **self._kwargs)
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/plotutil.py:565: RuntimeWarning: divide by zero encountered in log
logabs = n.log(n.abs(fM))
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/cuda/dispatcher.py:538: NumbaPerformanceWarning: Grid size 1 will likely result in GPU under-utilization due to low occupancy.
warn(NumbaPerformanceWarning(msg))
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
self._target(*self._args, **self._kwargs)
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
self._target(*self._args, **self._kwargs)
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/plotutil.py:565: RuntimeWarning: divide by zero encountered in log
logabs = n.log(n.abs(fM))
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/plotutil.py:38: RuntimeWarning: invalid value encountered in sqrt
cradwn = n.sqrt(cradwn)
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
self._target(*self._args, **self._kwargs)
Traceback (most recent call last):
File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 694, in cryosparc_master.cryosparc_compute.jobs.refine.newrun.run_homo_refine
TypeError: '>' not supported between instances of 'NoneType' and 'int'
set status to failed
========= main process now complete at 2024-01-10 00:43:32.026635.
========= monitor process now complete at 2024-01-10 00:43:32.033843.
Based on this information, do you have any idea what could be causing this issue?
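If it helps with diagnosis, I could also inspect the particle metadata going into the job. As far as I understand, exported .cs files are NumPy structured arrays, so something along these lines should work (the filename below is only a placeholder, not my actual export):

    import numpy as np

    # .cs metadata files load as NumPy structured arrays.
    # "J225_particles.cs" is a placeholder filename.
    particles = np.load("J225_particles.cs")

    print(len(particles), "particles")
    print(particles.dtype.names)  # all metadata fields

    # Flag floating-point fields containing NaN, in case an invalid
    # value in the inputs is what ends up as None downstream.
    for field in particles.dtype.names:
        col = particles[field]
        if col.dtype.kind == "f" and np.isnan(col).any():
            print("NaN values in field:", field)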