Deep Picker Inference v4.0.0: ValueError: need more than 1 value to unpack

I am getting this error when running the job. The log is below.

> [2022-10-11 13:29:30.25] License is valid.
> [2022-10-11 13:29:30.25] Launching job on lane default target ultron ...
> [2022-10-11 13:29:30.27] Running job on master node hostname ultron
> [2022-10-11 13:29:31.23] [CPU: 83.8 MB] Job J338 Started
> [2022-10-11 13:29:31.24] [CPU: 83.9 MB] Master running v4.0.0, worker running v4.0.0
> [2022-10-11 13:29:31.26] [CPU: 84.1 MB] Working in directory: /ultron/em/cryosparc/P3/J338
> [2022-10-11 13:29:31.26] [CPU: 84.1 MB] Running on lane default
> [2022-10-11 13:29:31.27] [CPU: 84.1 MB] Resources allocated:
> [2022-10-11 13:29:31.27] [CPU: 84.1 MB] Worker: ultron
> [2022-10-11 13:29:31.27] [CPU: 84.1 MB] CPU : [2]
> [2022-10-11 13:29:31.27] [CPU: 84.1 MB] GPU : [2]
> [2022-10-11 13:29:31.28] [CPU: 84.1 MB] RAM : [3]
> [2022-10-11 13:29:31.28] [CPU: 84.1 MB] SSD : False
> [2022-10-11 13:29:31.28] [CPU: 84.1 MB] --------------------------------------------------------------
> [2022-10-11 13:29:31.29] [CPU: 84.1 MB] Importing job module for job type deep_picker_inference...
> [2022-10-11 13:29:35.69] [CPU: 380.6 MB] Job ready to run
> [2022-10-11 13:29:35.69] [CPU: 380.6 MB] ***************************************************************
> [2022-10-11 13:29:35.79] [CPU: 386.0 MB] Using TensorFlow version 2.4.4
> [2022-10-11 13:29:35.80] [CPU: 386.0 MB] Processing micrographs and inferring particles...
> [2022-10-11 13:29:35.80] [CPU: 386.0 MB] Loading model...
> [2022-10-11 13:29:35.80] [CPU: 386.0 MB] Loaded model.
> [2022-10-11 13:29:35.81] [CPU: 386.0 MB] 0/1407 micrographs processed.
> [2022-10-11 13:29:45.32] [CPU: 3.93 GB] Traceback (most recent call last):
>   File "cryosparc_worker/cryosparc_compute/run.py", line 93, in cryosparc_compute.run.main
>   File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 535, in cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
>   File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 875, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
>   File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 880, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
>   File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 802, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
>   File "cryosparc_worker/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 569, in cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers
> ValueError: need more than 1 value to unpack
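For reference, the final line is Python's generic tuple-unpacking error, raised here inside get_dataset_from_particle_centers. A minimal sketch of the error class outside CryoSPARC (illustrative only; this is not the actual CryoSPARC code path):

```bash
# Illustrative only: reproduce the same class of error outside CryoSPARC.
# Unpacking a single-element sequence into two names raises it.
python3 -c 'a, b = (1,)'
# Recent Python prints: ValueError: not enough values to unpack (expected 2, got 1)
# Cython-compiled code (which the compiled cryosparc_compute traceback suggests)
# phrases it as: ValueError: need more than 1 value to unpack
```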

Please can you post the output of
cryosparcm joblog P3 J338
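If the output is long, the same information is in the job.log file inside the job directory (the path below is taken from the event log above, so it is specific to this installation) and can be read or copied directly:

```bash
# cryosparcm joblog streams the job's text log; the underlying file lives in
# the job directory shown in the event log ("Working in directory: ...").
less /ultron/em/cryosparc/P3/J338/job.log
```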

================= CRYOSPARCW =======  2022-10-11 20:35:02.896383  =========
Project P3 Job J338
Master ultron Port 39002
===========================================================================
========= monitor process now starting main process
MAINPROCESS PID 49931
MAIN PID 49931
deep_picker.run_deep_picker cryosparc_compute.jobs.jobregister
========= monitor process now waiting for main process
2022-10-11 20:35:03.490229: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-10-11 20:35:06.775603: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-10-11 20:35:06.776847: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-10-11 20:35:06.820141: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.821761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2022-10-11 20:35:06.821860: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.823413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties: 
pciBusID: 0000:2e:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2022-10-11 20:35:06.823505: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.825066: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 2 with properties: 
pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2022-10-11 20:35:06.825156: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.826608: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 3 with properties: 
pciBusID: 0000:61:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2022-10-11 20:35:06.826643: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-10-11 20:35:06.859112: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-10-11 20:35:06.859222: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-10-11 20:35:06.861524: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-10-11 20:35:06.861798: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-10-11 20:35:06.934358: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2022-10-11 20:35:06.942670: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-10-11 20:35:06.943667: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-10-11 20:35:06.943846: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.945544: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.947159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.948766: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.950366: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.952983: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.954608: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.956200: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.957735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1, 2, 3
2022-10-11 20:35:06.959408: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-10-11 20:35:06.966996: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-10-11 20:35:06.967145: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.968698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2022-10-11 20:35:06.968745: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-10-11 20:35:06.968776: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-10-11 20:35:06.968807: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-10-11 20:35:06.968830: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-10-11 20:35:06.968853: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-10-11 20:35:06.968875: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2022-10-11 20:35:06.968910: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-10-11 20:35:06.968933: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-10-11 20:35:06.969018: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.970596: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:06.972122: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2022-10-11 20:35:06.973124: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-10-11 20:35:07.854149: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-10-11 20:35:07.854206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 
2022-10-11 20:35:07.854222: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N 
2022-10-11 20:35:07.854585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:07.856140: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:07.857589: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-11 20:35:07.859026: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 128 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:01:00.0, compute capability: 8.6)
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
========= sending heartbeat
2022-10-11 20:35:13.603642: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2022-10-11 20:35:13.604514: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2694955000 Hz
2022-10-11 20:35:14.463270: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-10-11 20:35:17.989369: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:17.989454: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 302.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:17.989705: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-10-11 20:35:19.083960: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-10-11 20:35:19.086953: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.087010: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.13MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.092238: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.092333: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 88.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.163867: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.163992: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 592.14MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.182351: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2022-10-11 20:35:19.182429: W tensorflow/core/common_runtime/bfc_allocator.cc:248] Allocator (GPU_0_bfc) ran out of memory trying to allocate 548.16MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
***************************************************************
Running job on hostname %s ultron
Allocated Resources :  {'fixed': {'SSD': False}, 'hostname': 'ultron', 'lane': 'default', 'lane_type': 'node', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [0], 'GPU': [0], 'RAM': [0]}, 'target': {'cache_path': '/ultron/scratch/cryosparc_cache/', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 25444024320, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 1, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 2, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}, {'id': 3, 'mem': 25447170048, 'name': 'NVIDIA GeForce RTX 3090'}], 'hostname': 'ultron', 'lane': 'default', 'monitor_port': None, 'name': 'ultron', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63]}, 'ssh_str': 'singhpk@ultron', 'title': 'Worker node ultron', 'type': 'node', 'worker_bin_path': '/ultron/software/cryosparc/cryosparc_worker/bin/cryosparcw'}}
**** handle exception rc
set status to failed
========= main process now complete.
========= monitor process now complete.

Hi @wtempel

This issue was also resolved when I used a lower patch size.

Thank you

Prash

I tried that solution but it did not do the trick in my case.

Is there any other option?

Thank you.

License is valid.
Launching job on lane default target cryotem-02 ...
Running job on remote worker node hostname cryotem-02
[CPU: 211.6 MB Avail: 124.38 GB] Job J197 Started
[CPU: 211.6 MB Avail: 124.38 GB] Master running v4.4.0+231114, worker running v4.4.0+231114
[CPU: 211.9 MB Avail: 124.38 GB] Working in directory: /nas/cold/cryosparc_projects/CS-katharina-110124/J197
[CPU: 211.9 MB Avail: 124.38 GB] Running on lane default
[CPU: 211.9 MB Avail: 124.38 GB] Resources allocated:
[CPU: 211.9 MB Avail: 124.38 GB] Worker: cryotem-02
[CPU: 211.9 MB Avail: 124.38 GB] CPU : [0]
[CPU: 211.9 MB Avail: 124.38 GB] GPU : [0]
[CPU: 211.9 MB Avail: 124.37 GB] RAM : [0]
[CPU: 211.9 MB Avail: 124.37 GB] SSD : False
[CPU: 211.9 MB Avail: 124.37 GB] Importing job module for job type deep_picker_inference...
[CPU: 531.8 MB Avail: 124.15 GB] Job ready to run
[CPU: 547.6 MB Avail: 124.13 GB] Using TensorFlow version 2.8.4
[CPU: 547.6 MB Avail: 124.13 GB] Processing micrographs and inferring particles...
[CPU: 547.6 MB Avail: 124.13 GB] Loading model...
[CPU: 547.6 MB Avail: 124.13 GB] Loaded model.
[CPU: 547.6 MB Avail: 124.13 GB] 0/4929 micrographs processed.
[CPU: 1.07 GB Avail: 123.80 GB] Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 531, in cryosparc_master.cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 830, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 835, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 755, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 521, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers

@Dmitry what patch size are you using? I usually use 32, 64, or 128.

Hello @prash, I went even further and used 16.
I will try 32 and report back.

Hello @prash and all,

Patch size 32 for Deep Picker Train is fine, as always.

But the issue with Deep Picker Inference remains; please see the error below.

I will try 128 as well and report back.

Any ideas so far?

Many thanks.

Kind regards,
Dmitry

License is valid.
Launching job on lane default target cryotem-03 ...
Running job on remote worker node hostname cryotem-03
[CPU: 210.1 MB Avail: 124.43 GB] Job J197 Started
[CPU: 210.1 MB Avail: 124.43 GB] Master running v4.4.0+231114, worker running v4.4.0+231114
[CPU: 210.1 MB Avail: 124.43 GB] Working in directory: /nas/cold/cryosparc_projects/CS-katharina-110124/J197
[CPU: 210.1 MB Avail: 124.43 GB] Running on lane default
[CPU: 210.1 MB Avail: 124.43 GB] Resources allocated:
[CPU: 210.1 MB Avail: 124.43 GB] Worker: cryotem-03
[CPU: 210.1 MB Avail: 124.43 GB] CPU : [0]
[CPU: 210.1 MB Avail: 124.43 GB] GPU : [0]
[CPU: 210.1 MB Avail: 124.43 GB] RAM : [0]
[CPU: 210.1 MB Avail: 124.43 GB] SSD : False
[CPU: 210.1 MB Avail: 124.43 GB] Importing job module for job type deep_picker_inference...
[CPU: 528.3 MB Avail: 124.20 GB] Job ready to run
[CPU: 544.1 MB Avail: 124.18 GB] Using TensorFlow version 2.8.4
[CPU: 544.1 MB Avail: 124.18 GB] Processing micrographs and inferring particles...
[CPU: 544.1 MB Avail: 124.18 GB] Loading model...
[CPU: 544.1 MB Avail: 124.18 GB] Loaded model.
[CPU: 544.1 MB Avail: 124.18 GB] 0/4929 micrographs processed.
[CPU: 1.06 GB Avail: 123.64 GB] Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/run_deep_picker.py", line 531, in cryosparc_master.cryosparc_compute.jobs.deep_picker.run_deep_picker.run_deep_picker_inference
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 830, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 835, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 755, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.picker_extract_worker._do_picking
  File "cryosparc_master/cryosparc_compute/jobs/deep_picker/deep_picker_utils.py", line 521, in cryosparc_master.cryosparc_compute.jobs.deep_picker.deep_picker_utils.get_dataset_from_particle_centers
ValueError: need more than 1 value to unpack

I believe this error has more to do with the software/drivers than with your data or processing; changing the patch size is just a workaround. One option would be to update CryoSPARC and CUDA to the latest versions. Make sure to back up before any upgrade.
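If you go that route, something along these lines (a sketch only; the backup directory is an example, and it is worth checking the CryoSPARC guide for the exact procedure for your setup):

```bash
# Back up the CryoSPARC database before any upgrade (directory is an example):
cryosparcm backup --dir=/path/to/cryosparc_backups

# Update the master installation; update workers afterwards if they are not
# updated automatically (on a separate worker install: bin/cryosparcw update).
cryosparcm update
```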

Unfortunately, we have not identified the root cause of ValueError: need more than 1 value to unpack. An upgrade to v4.4.1 from an older version of CryoSPARC provides new functionality and bug fixes, but may not fix this particular ValueError in Deep Picker Inference.
