Job failed when compiling data in the final step

Hi all,

I have been having some odd failures using CryoSPARC v4.1.1. I get the following error messages:
+++++++ Heterogeneous refinement:

Traceback (most recent call last):
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 2057, in run_with_except_hook
    run_old(*args, **kw)
  File "/app/cryosparc_master/cryosparc_command/commandcommon.py", line 78, in run
    self.target(*self.args)
  File "/app/cryosparc_master/cryosparc_command/command_core/__init__.py", line 6920, in set_job_passthrough_check_outputs_and_completed_run
    rc.passthrough_outputs()
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 1072, in passthrough_outputs
    dset = load_input_group(input_group_name, passthrough_result_names, allow_passthrough=True, memoize=True)
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 664, in load_input_group
    dsets = [load_input_connection_slots(input_group_name, keep_slot_names, idx, allow_passthrough=allow_passthrough, memoize=memoize) for idx in range(num_connections)]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 664, in <listcomp>
    dsets = [load_input_connection_slots(input_group_name, keep_slot_names, idx, allow_passthrough=allow_passthrough, memoize=memoize) for idx in range(num_connections)]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 635, in load_input_connection_slots
    dsets = [load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize) for slot_name in slot_names]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 635, in <listcomp>
    dsets = [load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize) for slot_name in slot_names]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 625, in load_input_connection_single_slot
    output_result = com.query(otherjob['output_results'], lambda r : r['group_name'] == slotconnection['group_name'] and r['name'] == slotconnection['result_name'], error='No match for %s.%s in job %s' % (slotconnection['group_name'], slotconnection['result_name'], job['uid']))
  File "/app/cryosparc_master/cryosparc_compute/jobs/common.py", line 679, in query
    assert res != default, error
AssertionError: No match for particles_selected.ml_properties in job J249

++++ 2D classification:

Traceback (most recent call last):
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 2057, in run_with_except_hook
    run_old(*args, **kw)
  File "/app/cryosparc_master/cryosparc_command/commandcommon.py", line 78, in run
    self.target(*self.args)
  File "/app/cryosparc_master/cryosparc_command/command_core/__init__.py", line 6920, in set_job_passthrough_check_outputs_and_completed_run
    rc.passthrough_outputs()
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 1072, in passthrough_outputs
    dset = load_input_group(input_group_name, passthrough_result_names, allow_passthrough=True, memoize=True)
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 664, in load_input_group
    dsets = [load_input_connection_slots(input_group_name, keep_slot_names, idx, allow_passthrough=allow_passthrough, memoize=memoize) for idx in range(num_connections)]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 664, in <listcomp>
    dsets = [load_input_connection_slots(input_group_name, keep_slot_names, idx, allow_passthrough=allow_passthrough, memoize=memoize) for idx in range(num_connections)]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 635, in load_input_connection_slots
    dsets = [load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize) for slot_name in slot_names]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 635, in <listcomp>
    dsets = [load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize) for slot_name in slot_names]
  File "/app/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 625, in load_input_connection_single_slot
    output_result = com.query(otherjob['output_results'], lambda r : r['group_name'] == slotconnection['group_name'] and r['name'] == slotconnection['result_name'], error='No match for %s.%s in job %s' % (slotconnection['group_name'], slotconnection['result_name'], job['uid']))
  File "/app/cryosparc_master/cryosparc_compute/jobs/common.py", line 679, in query
    assert res != default, error
AssertionError: No match for particles.ml_properties in job J247
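
For reference, both failures come from the same generic result lookup in common.py: the downstream job asks the upstream job for a named output result and asserts if no match is found. Below is a minimal, hypothetical sketch of that pattern (not the actual CryoSPARC source), just to show why the assertion fires when a result such as ml_properties is missing from the upstream job's recorded outputs:

    # Simplified, hypothetical illustration of the lookup that fails above;
    # this is NOT the actual CryoSPARC code.
    def query(results, predicate, error):
        # Return the first result matching the predicate; assert if none match.
        match = next((r for r in results if predicate(r)), None)
        assert match is not None, error
        return match

    # Example: the upstream job's recorded output results no longer include
    # the low-level slot the downstream job expects as a passthrough.
    output_results = [
        {"group_name": "particles", "name": "blob"},
        {"group_name": "particles", "name": "ctf"},
        # "ml_properties" is absent here
    ]

    query(
        output_results,
        lambda r: r["group_name"] == "particles" and r["name"] == "ml_properties",
        error="No match for particles.ml_properties in job J247",
    )  # raises AssertionError, mirroring the traceback above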

I restarted the 2D classification and it finished fine, with no error. I wonder whether there is a bug causing this issue; it is frustrating because 2D classification and heterogeneous refinement generally take a long time to finish. Any insights would be helpful.

Best,
Runrun

Welcome to the forum @RunrunWU.

Please can you describe (sequence and job types) the chain of jobs upstream of 2D classification, up to and including particle picking?
Have any of the upstream jobs been modified (cleared, re-run, etc.) from their original form?

Interesting. What specific steps did you follow to restart this job: clear, re-run, or re-build from scratch?

Regarding restarting the 2D classification: I re-built the job and restarted it with exactly the same parameters, and it was able to finish.

Heterogeneous refinement was started directly after selecting 2D classes, and it was running fine until the final step of compiling the files.

Thanks for these clarifications. The upstream information would also be helpful.

The upstream jobs are patch motion correction, patch CTF estimation, Topaz extract, extract from micrographs, and 2D classification.

  1. Please can you post a screenshot of the workspace’s tree view that encompasses the jobs between and including particle picking and the failed 2D classification job?
  2. Was there a Deep Picker job involved?
  3. Which, if any, of these jobs have been modified (cleared, re-run, etc.) after their original run?

Just wondering: is there another way to force this job to finish and output particles, even if it doesn’t have particles.ml_properties available?

I am not sure. We recommend that you create a new Heterogeneous Refinement job and connect that new job’s inputs manually. See AssertionError: No match for particles.alignments_class_0 in job J80 - #4 by nfrasser and the subsequent discussion of a similar problem.
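
If re-building and re-connecting in the web interface is tedious, one alternative (not discussed above, and only a hypothetical sketch) is to script it with cryosparc-tools, which can create a fresh job and connect its inputs from their source jobs. All UIDs, credentials, the lane name, and the input/output names below are placeholders to adapt to your project, and the job type string and group names should be double-checked against your instance:

    # Hypothetical sketch using cryosparc-tools (requires CryoSPARC >= v4.1);
    # every identifier below is a placeholder.
    from cryosparc.tools import CryoSPARC

    cs = CryoSPARC(
        license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        host="localhost",
        base_port=39000,
        email="user@example.com",
        password="password",
    )

    project = cs.find_project("P1")

    # Create a fresh Heterogeneous Refinement job and connect the selected
    # particles and an initial volume from their source jobs.
    job = project.create_job("W1", "hetero_refine")
    job.connect("particles", "J249", "particles_selected")  # Select 2D Classes output
    job.connect("volume", "J200", "volume")                 # initial volume

    job.queue("default")  # replace with your worker lane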