Homogeneous Refinement - TypeError: '>' not supported between instances of 'NoneType' and 'int'

Hello CryoSPARC Team,

CryoSPARC version: v4.4.1
I was running a Homogeneous Refinement job and encountered the following error:

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 694, in cryosparc_master.cryosparc_compute.jobs.refine.newrun.run_homo_refine
TypeError: '>' not supported between instances of 'NoneType' and 'int'

The inputs to the Refinement job were the particles and the volume from an Ab-Initio Reconstruction.
Judging from the Event Log, the Refinement job completed its first iteration successfully.
I would like to attach the exported Event Log, but the editor here on CryoSPARC Discuss apparently does not support PDF attachments, which is unfortunate since the export is a convenient way to share the Event Log.

Below is the job log:

================= CRYOSPARCW =======  2024-01-10 00:41:57.772570  =========
Project P117 Job J226
Master ...
===========================================================================
========= monitor process now starting main process at 2024-01-10 00:41:57.772601
MAINPROCESS PID 517744
MAIN PID 517744
refine.newrun cryosparc_compute.jobs.jobregister
========= monitor process now waiting for main process
gpufft: creating new cufft plan (plan id 0   pid 517744) 
	gpu_id  1 
	ndims   2 
	dims    360 360 0 
	inembed 360 362 0 
	istride 1 
	idist   130320 
	onembed 360 181 0 
	ostride 1 
	odist   65160 
	batch   500 
	type    R2C 
	wkspc   automatic 
	Python traceback:

gpufft: creating new cufft plan (plan id 1   pid 517744) 
	gpu_id  1 
	ndims   2 
	dims    360 360 0 
	inembed 360 362 0 
	istride 1 
	idist   130320 
	onembed 360 181 0 
	ostride 1 
	odist   65160 
	batch   180 
	type    R2C 
	wkspc   automatic 
	Python traceback:

========= sending heartbeat at 2024-01-10 00:42:10.648990
gpufft: creating new cufft plan (plan id 2   pid 517744) 
	gpu_id  1 
	ndims   2 
	dims    360 360 0 
	inembed 360 360 0 
	istride 1 
	idist   129600 
	onembed 360 360 0 
	ostride 1 
	odist   129600 
	batch   500 
	type    C2C 
	wkspc   automatic 
	Python traceback:

***************************************************************
Running job  J226  of type  homo_refine_new
Running job on hostname %s erc3050.erc.kfa-juelich.de
Allocated Resources :  {'fixed': {'SSD': True}, 'hostname': 'erc3050.erc.kfa-juelich.de', 'lane': 'ERC3050', 'lane_type': 'node', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [4, 5, 6, 7], 'GPU': [1], 'RAM': [3, 4, 5]}, 'target': {'cache_path': '/Local/cryosparc/cryosparc_scratch/', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 8359378944, 'name': 'NVIDIA GeForce RTX 2070 SUPER'}, {'id': 1, 'mem': 8361738240, 'name': 'NVIDIA GeForce RTX 2070 SUPER'}], 'hostname': 'erc3050.erc.kfa-juelich.de', 'lane': 'ERC3050', 'monitor_port': None, 'name': 'erc3050.erc.kfa-juelich.de', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7], 'GPU': [0, 1], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7]}, 'ssh_str': 'mann@erc3050.erc.kfa-juelich.de', 'title': 'Worker node erc3050.erc.kfa-juelich.de', 'type': 'node', 'worker_bin_path': '/Local/cryosparc/cryosparc2_worker_remote/bin/cryosparcw'}}
HOST ALLOCATION FUNCTION: using numba.cuda.pinned_array
========= sending heartbeat at 2024-01-10 00:42:20.665419
gpufft: creating new cufft plan (plan id 3   pid 517744) 
	gpu_id  1 
	ndims   3 
	dims    360 360 360 
	inembed 360 360 362 
	istride 1 
	idist   46915200 
	onembed 360 360 181 
	ostride 1 
	odist   23457600 
	batch   1 
	type    R2C 
	wkspc   automatic 
	Python traceback:

gpufft: creating new cufft plan (plan id 4   pid 517744) 
	gpu_id  1 
	ndims   2 
	dims    360 360 0 
	inembed 360 362 0 
	istride 1 
	idist   130320 
	onembed 360 181 0 
	ostride 1 
	odist   65160 
	batch   500 
	type    R2C 
	wkspc   automatic 
	Python traceback:

gpufft: creating new cufft plan (plan id 5   pid 517744) 
	gpu_id  1 
	ndims   2 
	dims    360 360 0 
	inembed 360 362 0 
	istride 1 
	idist   130320 
	onembed 360 181 0 
	ostride 1 
	odist   65160 
	batch   341 
	type    R2C 
	wkspc   automatic 
	Python traceback:

gpufft: creating new cufft plan (plan id 6   pid 517744) 
	gpu_id  1 
	ndims   2 
	dims    360 360 0 
	inembed 360 362 0 
	istride 1 
	idist   130320 
	onembed 360 181 0 
	ostride 1 
	odist   65160 
	batch   340 
	type    R2C 
	wkspc   automatic 
	Python traceback:

========= sending heartbeat at 2024-01-10 00:42:30.680966
gpufft: creating new cufft plan (plan id 7   pid 517744) 
	gpu_id  1 
	ndims   3 
	dims    360 360 360 
	inembed 360 360 362 
	istride 1 
	idist   46915200 
	onembed 360 360 181 
	ostride 1 
	odist   23457600 
	batch   1 
	type    R2C 
	wkspc   manual 
	Python traceback:

gpufft: creating new cufft plan (plan id 8   pid 517744) 
	gpu_id  1 
	ndims   3 
	dims    180 180 180 
	inembed 180 180 91 
	istride 1 
	idist   2948400 
	onembed 180 180 182 
	ostride 1 
	odist   5896800 
	batch   1 
	type    C2R 
	wkspc   manual 
	Python traceback:

========= sending heartbeat at 2024-01-10 00:42:40.696622
========= sending heartbeat at 2024-01-10 00:42:50.712632
========= sending heartbeat at 2024-01-10 00:43:00.730941
========= sending heartbeat at 2024-01-10 00:43:10.749966
========= sending heartbeat at 2024-01-10 00:43:20.765554
========= sending heartbeat at 2024-01-10 00:43:30.779964
**** handle exception rc
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/jobs/motioncorrection/mic_utils.py:95: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  @jit(nogil=True)
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/micrographs.py:563: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def contrast_normalization(arr_bin, tile_size = 128):
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
  self._target(*self._args, **self._kwargs)
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/plotutil.py:565: RuntimeWarning: divide by zero encountered in log
  logabs = n.log(n.abs(fM))
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/numba/cuda/dispatcher.py:538: NumbaPerformanceWarning: Grid size 1 will likely result in GPU under-utilization due to low occupancy.
  warn(NumbaPerformanceWarning(msg))
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
  self._target(*self._args, **self._kwargs)
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
  self._target(*self._args, **self._kwargs)
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/plotutil.py:565: RuntimeWarning: divide by zero encountered in log
  logabs = n.log(n.abs(fM))
/Local/cryosparc/cryosparc2_worker_remote/cryosparc_compute/plotutil.py:38: RuntimeWarning: invalid value encountered in sqrt
  cradwn = n.sqrt(cradwn)
/Local/cryosparc/cryosparc2_worker_remote/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/multiprocessing/process.py:108: UserWarning: Cannot manually free CUDA array; will be freed when garbage collected
  self._target(*self._args, **self._kwargs)
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 694, in cryosparc_master.cryosparc_compute.jobs.refine.newrun.run_homo_refine
TypeError: '>' not supported between instances of 'NoneType' and 'int'
set status to failed
========= main process now complete at 2024-01-10 00:43:32.026635.
========= monitor process now complete at 2024-01-10 00:43:32.033843.

Based on this information, do you have any idea what could be causing this issue?

Which version of CryoSPARC do you use?

In recent versions of CryoSPARC, you may print text from the event log using the command

cryosparcm eventlog P117 J226
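If helpful, the output can be redirected to a plain-text file that is easier to attach here (standard shell redirection; the filename is just an example):

cryosparcm eventlog P117 J226 > J226_eventlog.txt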

I am using CryoSPARC version v4.4.1.
I am afraid I do not have access to the master node with the cryosparc_master installation, so I do not think I can make use of the command line…

However, I am more than happy to provide any further information you need; if necessary, I can also simply copy the relevant parts of the Event Log.

In this case, please email us the job report. I will send you a direct message with the email address.

I have sent the email with the respective Job Report attached.

Thank you for the help and the fast response.

Hi @D_E_N_N_I_S,

Based on the job log, it looks like this error is happening because the parameter Number of extra final passes in the refinement job is not set to a numerical value. Can you see if setting this to the default of 0 fixes things?
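For context, here is a minimal Python sketch of this failure mode (the variable name is my own; the actual code in newrun.py is compiled, so this is an illustration rather than the real implementation):

# Hypothetical illustration: a job parameter that was never set arrives as None
num_extra_final_passes = None

# Comparing None with an int is exactly what raises the error in the traceback:
if num_extra_final_passes > 0:
    pass
# TypeError: '>' not supported between instances of 'NoneType' and 'int'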

Thanks!
Valentin

Hi Valentin,

thanks for getting back to me and sorry for the late response.
The parameter “Number of extra final passes” was set to 3, so it did have a numerical value.

We believe the initial model (from a previous Ab-Initio Reconstruction job) was of such poor quality that it caused the Homogeneous Refinement job to fail. Other initial models work fine, and an improved Ab-Initio Reconstruction (after optimizing the 2D classes and the Ab-Initio Reconstruction parameters) eventually produced volumes that could be used in the Homogeneous Refinement job.
From my side, the issue can therefore be considered “solved”. I have to say, though, that the error message is rather cryptic and does not necessarily point in the right direction.
Thanks to everyone involved for looking into it.

Best
Dennis


Ok, thanks for the update! One quick additional note: in case this ever happens again, the initial error may also have been related to this known bug in Firefox, where changing a parameter via the arrow buttons does not properly set it once the job is launched.
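If you ever want to confirm what value a parameter was actually submitted with, the job document can be inspected from the master node (a sketch, assuming cryosparcm cli access; the exact fields returned may vary between versions):

cryosparcm cli "get_job('P117', 'J226', 'params_spec')"

This should print the parameters that were explicitly set for the job, which would show whether "Number of extra final passes" really reached the scheduler as a number.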

Valentin