Deep Picker Inference: ValueError: need more than 1 value to unpack

Dear all,

I am trying to use Deep Picker. The Deep Picker Train job seems to finish successfully, but Deep Picker Inference fails right at the start. Here is the traceback:

Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 535, in cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 861, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 866, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 802, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 569, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers
ValueError: need more than 1 value to unpack

Any ideas what could be the issue?
I tried a couple of times with different settings for the training and the inference, but the same error always pops up.

Best!

Hi,

The only way I can see this happening is if a micrograph is too small. Does the job fail on the first micrograph? Can you try splitting the micrograph dataset into two for inference and see whether it then proceeds normally?
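
For reference, here is a minimal sketch of how a degenerate (e.g. too-small or featureless) micrograph could turn into this kind of unpack error. This is an assumption for illustration only, not the actual CryoSPARC code: a helper that normally returns a pair of coordinate arrays returns a single value when nothing is found, and the caller's two-variable unpack then fails. (The compiled worker module reports the Python 2-style wording "need more than 1 value to unpack"; plain Python 3 phrases the same failure as "not enough values to unpack".)

import numpy as np

def get_candidate_centers(micrograph):
    # Hypothetical stand-in for a picking step that normally returns (rows, cols)
    peaks = np.argwhere(micrograph > micrograph.mean() + 3 * micrograph.std())
    if peaks.size == 0:
        # Degenerate micrograph: only one value comes back instead of two
        return (np.empty(0, dtype=int),)
    return peaks[:, 0], peaks[:, 1]

mic = np.zeros((4, 4))                    # stands in for a micrograph with no usable signal
rows, cols = get_candidate_centers(mic)   # ValueError: not enough values to unpack (expected 2, got 1)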

Louis

Hi,

Sorry for the delay.
Yes, it fails on the first micrograph.
I tried splitting the dataset with the ‘Exposure Sets Tool’ and used only ‘split 1’.

Here are some more details from the ‘cryosparcm joblog’:

================= CRYOSPARCW ======= 2021-09-27 16:44:54.502083 =========
Project P9 Job J67
Master name.si Port 39002
===========================================================================
========= monitor process now starting main process
MAINPROCESS PID 27577
MAIN PID 27577
deep_picker.run_deep_picker cryosparc_compute.jobs.jobregister
========= monitor process now waiting for main process
2021-09-27 16:45:35.336419: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
2021-09-27 16:48:06.175803: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-27 16:48:06.347478: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-09-27 16:48:06.787753: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:06.789861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:81:00.0 name: NVIDIA Tesla V100S-PCIE-32GB computeCapability: 7.0
coreClock: 1.597GHz coreCount: 80 deviceMemorySize: 31.75GiB deviceMemoryBandwidth: 1.03TiB/s
2021-09-27 16:48:06.789904: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-09-27 16:48:06.829678: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-09-27 16:48:06.829793: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-09-27 16:48:06.848658: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-09-27 16:48:06.853664: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-09-27 16:48:06.904308: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-09-27 16:48:06.923926: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-09-27 16:48:06.941617: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-09-27 16:48:06.941884: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:06.943902: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:07.711098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-09-27 16:48:09.083768: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-27 16:48:09.084092: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-27 16:48:09.084429: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:09.086343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:81:00.0 name: NVIDIA Tesla V100S-PCIE-32GB computeCapability: 7.0
coreClock: 1.597GHz coreCount: 80 deviceMemorySize: 31.75GiB deviceMemoryBandwidth: 1.03TiB/s
2021-09-27 16:48:09.086400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-09-27 16:48:09.086440: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-09-27 16:48:09.086462: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-09-27 16:48:09.086483: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-09-27 16:48:09.086504: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-09-27 16:48:09.086525: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-09-27 16:48:09.086545: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-09-27 16:48:09.086566: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-09-27 16:48:09.086669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:09.088562: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:09.090352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-09-27 16:48:09.090400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
========= sending heartbeat
2021-09-27 16:48:13.311419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-09-27 16:48:13.311475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-09-27 16:48:13.311488: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2021-09-27 16:48:13.311774: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:13.313722: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:13.315613: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-27 16:48:13.317436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 128 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100S-PCIE-32GB, pci bus id: 0000:81:00.0, compute capability: 7.0)
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
========= sending heartbeat
2021-09-27 16:48:24.255919: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-09-27 16:48:24.267841: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2894430000 Hz
2021-09-27 16:48:26.604440: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
========= sending heartbeat
========= sending heartbeat
2021-09-27 16:48:43.497277: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:43.497382: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:43.497783: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-09-27 16:48:44.272503: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-09-27 16:48:44.277712: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.277791: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.282684: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.282786: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.289440: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.289511: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.307487: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-09-27 16:48:44.307555: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

Running job on hostname %s name.si
Allocated Resources : ‘DETAILS removed’

**** handle exception rc
set status to failed
========= main process now complete.
========= monitor process now complete.

Hi,

Can you describe your data a little more? Does it work with the other particle picker tools, like the manual picker?

It would be great if you could share an example micrograph so we can test with it. I will follow up via DM.

Thanks,
Louis

I have the same issue. Deep Picker Inference fails on the first micrograph.

Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 535, in cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 875, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 880, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 802, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 569, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers
ValueError: need more than 1 value to unpack

Hi,

Sorry for the delay. I am giving up for now. Between the CUDA and TensorFlow version issues we run into when trying to use Deep Picker and this ‘unpack’ error, it is just not worth it at the moment.

I’ll send you an example of the micrograph via DM.

The ‘classical’ pickers (manual, template) work just fine.

Best!

Is there a solution for this problem now? I got the same error using cryoSPARC v3.3.1 + patch 211214. I would be really interested in getting this to work.

Thanks.

Hi,

I’ve been unable to replicate this error. Any additional details about the workflow and data that might be relevant would be greatly appreciated.

Louis

Both the micrographs and the deep_picker_model were taken as inputs from the Deep Picker Train job, which completed successfully. However, the inference job hit an error as soon as it began to process the micrographs. The log and error message follow:

[CPU: 80.2 MB] --------------------------------------------------------------

[CPU: 80.2 MB] Importing job module for job type deep_picker_inference…

[CPU: 443.4 MB] Job ready to run

[CPU: 443.4 MB] ***************************************************************

[CPU: 448.1 MB] Using TensorFlow version 2.4.1

[CPU: 448.1 MB] Processing micrographs and inferring particles…

[CPU: 448.1 MB] Loading model…

[CPU: 448.1 MB] Loaded model.

[CPU: 448.1 MB] 0/2501 micrographs processed.

[CPU: 2.66 GB] Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 85, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 535, in cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 875, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 880, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 802, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 569, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers
ValueError: need more than 1 value to unpack

Hi, I have the same issue with Deep Picker. Training finishes normally, but inference fails in the same way the other users have reported. I have tried several datasets and two different lanes with different types of GPUs, and I get the same error every time. Below is one of the job logs:

================= CRYOSPARCW ======= 2022-07-19 16:53:24.477819 =========
Project P218 Job J58
Master cryosparc.host.utmb.edu Port 39002

========= monitor process now starting main process
MAINPROCESS PID 242447
========= monitor process now waiting for main process
MAIN PID 242447
deep_picker.run_deep_picker cryosparc_compute.jobs.jobregister
2022-07-19 16:53:26.442804: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
========= sending heartbeat
2022-07-19 16:53:36.845068: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-07-19 16:53:36.848634: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-07-19 16:53:36.870129: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:18:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2022-07-19 16:53:36.870569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:3b:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2022-07-19 16:53:36.870954: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 2 with properties:
pciBusID: 0000:86:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2022-07-19 16:53:36.871349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 3 with properties:
pciBusID: 0000:af:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2022-07-19 16:53:36.871388: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-07-19 16:53:36.882954: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-07-19 16:53:36.883058: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-07-19 16:53:36.886501: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-07-19 16:53:36.888658: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-07-19 16:53:36.891538: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2022-07-19 16:53:36.894541: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-07-19 16:53:36.896552: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-07-19 16:53:36.899540: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1, 2, 3
2022-07-19 16:53:36.900535: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-07-19 16:53:36.903344: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-07-19 16:53:36.903935: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:18:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2022-07-19 16:53:36.903986: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-07-19 16:53:36.904015: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-07-19 16:53:36.904031: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-07-19 16:53:36.904046: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-07-19 16:53:36.904061: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-07-19 16:53:36.904076: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2022-07-19 16:53:36.904091: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-07-19 16:53:36.904106: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-07-19 16:53:36.904833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2022-07-19 16:53:36.904870: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-07-19 16:53:37.683131: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-07-19 16:53:37.683194: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2022-07-19 16:53:37.683209: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2022-07-19 16:53:37.684870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 128 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:18:00.0, compute capability: 7.5)
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
========= sending heartbeat
2022-07-19 16:53:46.646396: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2022-07-19 16:53:46.647119: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2200000000 Hz
2022-07-19 16:53:48.217213: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-07-19 16:53:50.627088: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:50.627287: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:50.627825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-07-19 16:53:51.580539: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-07-19 16:53:51.585989: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.586063: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.590803: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.590875: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.596573: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.596639: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.615351: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-07-19 16:53:51.615425: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

Running job on hostname %s gimli.utmb.edu
Allocated Resources : {'fixed': {'SSD': False}, 'hostname': 'gimli.utmb.edu', 'lane': 'smith', 'lane_type': 'smith', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [0], 'GPU': [0], 'RAM': [0]}, 'target': {'cache_path': '/mnt/scratch/cryosparc_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 11554848768, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 1, 'mem': 11554848768, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 2, 'mem': 11554848768, 'name': 'NVIDIA GeForce RTX 2080 Ti'}, {'id': 3, 'mem': 11554848768, 'name': 'NVIDIA GeForce RTX 2080 Ti'}], 'hostname': 'gimli.utmb.edu', 'lane': 'smith', 'monitor_port': None, 'name': 'gimli.utmb.edu', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}, 'ssh_str': 'cryosparc@gimli.utmb.edu', 'title': 'Worker node gimli.utmb.edu', 'type': 'node', 'worker_bin_path': '/mnt/ape2/cryosparc/software/cryosparc/cryosparc_worker/bin/cryosparcw'}}
**** handle exception rc
set status to failed

I tried splitting the input micrographs, but I still get the same error:
Launching job on lane default target cryosparc.host.utmb.edu …

Running job on master node hostname cryosparc.host.utmb.edu

[CPU: 69.2 MB] Project P218 Job J58 Started

[CPU: 69.3 MB] Master running v3.3.2+220518, worker running v3.3.2+220518

[CPU: 69.5 MB] Working in directory: /mnt/gimli/data2/P218/J58

[CPU: 69.5 MB] Running on lane default

[CPU: 69.5 MB] Resources allocated:

[CPU: 69.5 MB] Worker: cryosparc.host.utmb.edu

[CPU: 69.5 MB] CPU : [0]

[CPU: 69.5 MB] GPU : [0]

[CPU: 69.5 MB] RAM : [0]

[CPU: 69.5 MB] SSD : False

[CPU: 69.5 MB] --------------------------------------------------------------

[CPU: 69.5 MB] Importing job module for job type deep_picker_inference…

[CPU: 364.5 MB] Job ready to run

[CPU: 364.5 MB] ***************************************************************

[CPU: 368.5 MB] Using TensorFlow version 2.4.1

[CPU: 368.5 MB] Processing micrographs and inferring particles…

[CPU: 368.5 MB] Loading model…

[CPU: 368.5 MB] Loaded model.

[CPU: 368.5 MB] 0/500 micrographs processed.

[CPU: 1.97 GB] Original micrograph:
[CPU: 1.78 GB] Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 85, in cryosparc_compute.run.main
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 535, in cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 875, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 880, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 792, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
File "/mnt/ape2/cryosparc/software/cryosparc/cryosparc_worker/cryosparc_compute/micrograph_plotutils.py", line 45, in showarray
a = a.reshape(-1, a.shape[-2], a.shape[-1])
ValueError: cannot reshape array of size 0 into shape (0,1)

This particular error comes up with “show plots” turned on; without it, I get the “ValueError: need more than 1 value to unpack” error instead.
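
For what it is worth, both symptoms would be consistent with the picker returning zero candidate coordinates for a micrograph: with plots enabled the empty result hits the reshape in showarray first, and without plots it fails at a later unpack. Below is a rough NumPy reproduction of the reshape failure, assuming a arrives as an empty array (an assumption on my part, not a confirmed internal state):

import numpy as np

a = np.empty((0, 1))   # assumed: zero picks produce an empty array
try:
    # Same call as micrograph_plotutils.py line 45 in the traceback above
    a = a.reshape(-1, a.shape[-2], a.shape[-1])
except ValueError as err:
    # NumPy cannot resolve the -1 when the remaining dimensions multiply to zero
    print(err)         # e.g. "cannot reshape array of size 0 into shape (0,1)"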

Another user can run the inference job successfully on the same default lane. I am not sure what the difference is.
Thanks, Michael

If you and that other user agree, we would like to compare the failed and successful Inference jobs with respect to:

  • Inference pixel threshold
  • the non-default parameters from the upstream Deep Picker Train jobs
  • the number of particles input to the upstream Deep Picker Train jobs
  • the format and pixel dimensions of the micrographs

Hi @wtempel, I used an inference pixel threshold of 5 and also tried larger values, like 204000, with the same result. The micrographs in both cases were in MRC format, 4096x4096, and went through motion correction, CTF determination, etc., i.e. the standard workflow. I tried the inference job with different numbers of particles, ranging from ~200 to ~100,000 (coming from a 2D classification job); the smaller particle set came directly from a manual picking job.
In the Deep Picker Train job the non-default parameters were: 1. Number of micrographs to process = 100; 2. Particle diameter = 500 A. The rest was left at default values.
The other user used Number of threads = 1, Number of micrographs to process = 500, and the same particle diameter of 500 A. The inference threshold was 5 in his case, and the number of particles was 6110. He used the same kind of MRC micrographs, which went through the same sequence of jobs; in his case the images were taken with a K3 camera, so the shape was 4092x5760.
I could share a couple of micrographs if needed.
Thanks,
Michael

Hello @mbs, did you ever track down the error? I am having the same issue right now.

Hi @MmM, sorry, not so far. It must be something in the user settings, since another user here was able to run it but I am not.

Hello @mbs, sorry to hear that. None of the suggestions on the forum have helped me so far. I am also missing some of the settings from the menu (see the picture). Initially there were some issues with the training itself, which we solved based on this discussion. I am wondering if the errors are related.
[screenshot of the menu with the missing settings]
Note: I also tried it without the plots.

Hi, I have the same problem. Other pickers (including Topaz) work fine. I have included screenshots with details about the exposures and the Deep Picker Train/Inference jobs.

[screenshots of the exposure details and the Deep Picker Train/Inference job settings]

Hi, I have exactly the same problem.

@Boszlacs @gha Please inspect the failed jobs’ job.log files for additional error messages. If there is an error about a missing libcusolver.so.10, you may try the suggestion in DeepPicker error: Input number of GPUs must be less than or equal to number of available GPUs - #33 by nikydna.
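
If it helps, here is a quick way to pull any libcusolver-related lines out of a job.log file (the path below is just a placeholder; point it at the failed job's directory):

from pathlib import Path

job_log = Path("/path/to/project_dir/J34/job.log")   # placeholder path; adjust to the failed job
for line in job_log.read_text(errors="replace").splitlines():
    if "libcusolver" in line:
        print(line)   # a message about a missing libcusolver.so.10 would point to the linked suggestion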

Hi,

Thanks for your reply!
I checked the job.log file and cannot find that error. There are some memory warnings at the end though:

================= CRYOSPARCW =======  2023-03-21 10:37:51.918938  =========
Project P35 Job J34
Master hulk Port 39002
===========================================================================
========= monitor process now starting main process at 2023-03-21 10:37:51.918970
MAINPROCESS PID 1008998
========= monitor process now waiting for main process
MAIN PID 1008998
deep_picker.run_deep_picker cryosparc_compute.jobs.jobregister
2023-03-21 10:37:55.205472: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-03-21 10:37:58.744436: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2023-03-21 10:37:58.745804: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2023-03-21 10:37:58.812148: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2023-03-21 10:37:58.813368: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties: 
pciBusID: 0000:25:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2023-03-21 10:37:58.814568: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 2 with properties: 
pciBusID: 0000:c1:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2023-03-21 10:37:58.815772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 3 with properties: 
pciBusID: 0000:e1:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2023-03-21 10:37:58.815792: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-03-21 10:37:58.818758: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2023-03-21 10:37:58.818802: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2023-03-21 10:37:58.820567: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2023-03-21 10:37:58.820777: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2023-03-21 10:37:58.822218: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2023-03-21 10:37:58.822942: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2023-03-21 10:37:58.823063: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2023-03-21 10:37:58.832745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1, 2, 3
2023-03-21 10:37:58.833281: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-21 10:37:58.840840: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2023-03-21 10:37:58.842174: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2023-03-21 10:37:58.842204: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-03-21 10:37:58.842236: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2023-03-21 10:37:58.842249: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2023-03-21 10:37:58.842262: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2023-03-21 10:37:58.842274: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2023-03-21 10:37:58.842286: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2023-03-21 10:37:58.842298: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2023-03-21 10:37:58.842309: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2023-03-21 10:37:58.844671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2023-03-21 10:37:58.844708: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-03-21 10:37:59.304435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-03-21 10:37:59.304499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 
2023-03-21 10:37:59.304506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N 
2023-03-21 10:37:59.308668: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 128 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:01:00.0, compute capability: 8.6)
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
2023-03-21 10:38:04.087770: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2023-03-21 10:38:04.107091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2499920000 Hz
2023-03-21 10:38:04.974445: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
========= sending heartbeat at 2023-03-21 10:38:05.091907
2023-03-21 10:38:06.870181: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:06.870266: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:06.870568: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2023-03-21 10:38:07.551969: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2023-03-21 10:38:07.555270: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.555307: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.558189: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.558229: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.587943: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.588032: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.601447: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2023-03-21 10:38:07.601483: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
***************************************************************
Running job on hostname %s 11.0.0.2
Allocated Resources :  {'fixed': {'SSD': False}, 'hostname': '11.0.0.2', 'lane': 'echo', 'lane_type': 'node', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [0], 'GPU': [0], 'RAM': [0]}, 'target': {'cache_path': '/scratch/cryosparc_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 1, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 2, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 3, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}], 'hostname': '11.0.0.2', 'lane': 'echo', 'monitor_port': None, 'name': '11.0.0.2', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63]}, 'ssh_str': 'cryosparcuser@11.0.0.2', 'title': 'Worker node 11.0.0.2', 'type': 'node', 'worker_bin_path': '/home/cryosparcuser/cryosparc/cryosparc_worker/bin/cryosparcw'}}
**** handle exception rc
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 535, in cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 874, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 879, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 801, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 569, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers
ValueError: need more than 1 value to unpack
set status to failed
========= main process now complete at 2023-03-21 10:38:08.277377.
========= monitor process now complete at 2023-03-21 10:38:08.284964.

Did you try the suggestion in Deep Picker Inference V 4.0.0, ValueError: need more than 1 value to unpack - #4 by prash?