Inspect picks job crashes after Filament Tracer

Dear all,

I’m trying to inspect the particle picks (everything set to default) after a filament tracing job, but the Inspect picks job crashes right away (see error log below). For the filament tracing, I used the following settings:

Diameter: 75; Separation: 0.6; Minimum Length: 4; Ridge detection: Sato; Crossing radius: 0.5. Based on the output of the Filament tracer, everything appears to be working well, i.e. it traces the filaments as desired, but unfortunately the Inspect picks job crashes.

[CPU: 238.7 MB]  Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
  File "/home/Software/Cryosparc/cryosparc2_master/cryosparc_compute/jobs/manual_picker/run_inspect_simple.py", line 432, in run_inspect_simple
    ax.hist2d(xvals, yvals, bins=50, range=[[xmin,xmax],[ymin,ymax]])
  File "/home/Software/Cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/matplotlib/__init__.py", line 1438, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "/home/Software/Cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/matplotlib/axes/_axes.py", line 6915, in hist2d
    density=density, weights=weights)
  File "<__array_function__ internals>", line 6, in histogram2d
  File "/home/Software/Cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/numpy/lib/twodim_base.py", line 713, in histogram2d
    hist, edges = histogramdd([x, y], bins, range, normed, weights, density)
  File "<__array_function__ internals>", line 6, in histogramdd
  File "/home/Software/Cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/numpy/lib/histograms.py", line 1049, in histogramdd
    smin, smax = _get_outer_edges(sample[:,i], range[i])
  File "/home/Software/Cryosparc/cryosparc2_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/numpy/lib/histograms.py", line 316, in _get_outer_edges
    "supplied range of [{}, {}] is not finite".format(first_edge, last_edge))
ValueError: supplied range of [nan, nan] is not finite

Hello @msleutel,

Thank you for the post and the feedback! This is indeed a bug: invalid (non-finite) values are written to the particle dataset during picking, and when the Inspect picks job tries to plot a 2D histogram of the pick locations, the NaN coordinates make the histogram range non-finite, which produces the error above.
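For context, here is a minimal sketch of the failure mode using plain NumPy/matplotlib with made-up coordinates (not cryoSPARC code): a single non-finite pick coordinate makes the histogram range [nan, nan], which is exactly the error in your traceback.

import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend, as on a cluster node
import matplotlib.pyplot as plt

# Made-up pick coordinates; one NaN (e.g. from a degenerate filament segment)
# makes min()/max() of that column NaN, so the histogram range is [nan, nan].
xvals = np.array([0.1, 0.5, np.nan, 0.9])
yvals = np.array([0.2, 0.4, 0.6, 0.8])
xmin, xmax = xvals.min(), xvals.max()   # nan, nan
ymin, ymax = yvals.min(), yvals.max()

fig, ax = plt.subplots()
# Raises: ValueError: supplied range of [nan, nan] is not finite
ax.hist2d(xvals, yvals, bins=50, range=[[xmin, xmax], [ymin, ymax]])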

To help us debug, could I ask you to also post the job log file for the Filament tracer job? To do this, first find the project directory by navigating to the project’s main page, clicking the “Details” button, and scrolling down to the directory field (for example, see the screenshot below showing the directory for project P107). The job directory is a subdirectory of the project directory, named after the job ID (e.g. if the Filament tracer job was J10, it would be located at /u/cryosparcdev/cryosparc2_projects/P107/J10/). Once you have navigated to this directory on the command line, run cat job.log to print the contents of the job log, which you can then copy and paste here. Alternatively, you can direct message it to me if you would prefer.
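To illustrate with the example values above (P107 and J10 are only placeholders; substitute your own project directory and job ID), reading the log programmatically amounts to:

from pathlib import Path

# Placeholder project directory and job ID from the example above; adjust
# both to your own project before running.
project_dir = Path("/u/cryosparcdev/cryosparc2_projects/P107")
job_log = project_dir / "J10" / "job.log"
print(job_log.read_text())     # equivalent to `cat job.log` in the shell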

[Screenshot: project “Details” panel showing the project directory for P107]

Thank you and best regards,
Michael

Hello @msleutel,

Apologies for the double message. If you have time, would you also be able to run a regular “Template picker” job using the same inputs (i.e. templates and micrographs) as you did with the Filament tracer job, and then connect the outputs of the Template picker job to a new Inspect picks job? If you receive the same error, this may indicate a problem with the data; if the error is gone, this indicates that the problem is specific to the filament tracing procedure.

Thank you and best regards,
Michael

Hello @mmclean, we have the same error here. The Template picker works, but the Inspect picks job crashes after the Filament tracer. Could you help with this? Thank you.

Hi @jsun,

Thanks for reporting; could I ask which cryoSPARC version you are running? Also, would you be able to provide the job log for the Filament tracer job? (See my first reply above, from Jan 4, for instructions.)

Best,
Michael

hi @mmclean,

The version is 3.1.0. Here is the log for the Filament tracer job:

================= CRYOSPARCW ======= 2021-03-08 10:14:12.207718 =========
Project P2 Job J24
Master ****.local Port 39002

========= monitor process now starting main process
MAINPROCESS PID 41325
========= monitor process now waiting for main process
MAIN PID 41325
template_picker_gpu.run cryosparc_compute.jobs.jobregister
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat

… more lines of sending heartbeat

========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat


Running job on hostname %s ****.local
Allocated Resources : {'fixed': {'SSD': False}, 'hostname': '****.local', 'lane': 'default', 'lane_type': 'default', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [0, 1, 2, 3], 'GPU': [0], 'RAM': [0]}, 'target': {'cache_path': '/data1/cryosparc', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'hostname': '****.local', 'lane': 'default', 'monitor_port': None, 'name': '****.local', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}, 'ssh_str': 'username@****.local', 'title': 'Worker node ****.local', 'type': 'node', 'worker_bin_path': '/usr/local/cryosparc/cryosparc_worker/bin/cryosparcw'}}
min: -4.978036 max: 5.288628
min: -4.894910 max: 5.182270
min: -4.597814 max: 4.882528
min: -4.664134 max: 4.951081
min: -9.844694 max: 12.914535
min: -99.609869 max: 148.727312
min: -6.507767 max: 8.532213
min: -61.835682 max: 94.226814
min: -9.032718 max: 11.849648
min: -11.073510 max: 14.519047
min: -9.569130 max: 12.550244
min: -10.849735 max: 14.226255
min: -5.258093 max: 6.874591
min: -6.146999 max: 8.056207
min: -6.334277 max: 8.305174
min: -9.575530 max: 12.559482


/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
/usr/local/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/multiprocessing/process.py:99: RuntimeWarning: divide by zero encountered in double_scalars
self._target(*self._args, **self._kwargs)
========= main process now complete.
========= monitor process now complete.

Hi @jsun,

Thank you for the job log – I have followed up further over DM.

Edit: For posterity, please refer to the comment here for a potential workaround to this issue.
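As a general illustration of the kind of check involved (not necessarily the exact approach in the linked comment), the exported picks dataset can be scanned for non-finite coordinates with NumPy; the file path and field names below are assumptions and may differ in your project:

import numpy as np

# Hypothetical path and field names: a cryoSPARC .cs file is stored as a
# NumPy record array, but check picks.dtype.names for the actual columns.
picks = np.load("P2/J24/J24_picked_particles.cs")

for field in ("location/center_x_frac", "location/center_y_frac"):
    if field in picks.dtype.names:
        vals = picks[field]
        n_bad = np.count_nonzero(~np.isfinite(vals))
        print(f"{field}: {n_bad} non-finite values out of {vals.size}")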

Edit 2: We have included a fix for this issue in Patch 210615 for cryoSPARC v3.2.0. Please refer to the linked forum post for instructions on how to apply the patch.

Best,
Michael
