Hi,
It seems like I have the same issue: every time a user cleans a certain project, I get a stuck progress bar…
Here is the last portion of the requested logs. J47 was a Local Resolution job. J48 does not exist in this workspace.
[EXPORT_JOB] : Writing 17 database images to [SOMEPATH]/P39/J47/gridfs_data/gridfsdata_0
[EXPORT_JOB] : Done. Exported 17 images in 0.13s
[EXPORT_JOB] : Exporting all job's streamlog events...
[EXPORT_JOB] : Done. Exported 1 files in 0.00s
[EXPORT_JOB] : Exporting job metafile...
[EXPORT_JOB] : Creating .csg file for volume
[EXPORT_JOB] : Done. Exported in 0.02s
[EXPORT_JOB] : Updating job manifest...
[EXPORT_JOB] : Done. Updated in 0.00s
[EXPORT_JOB] : Exported P39 J47 in 0.17s
[POST-RESPONSE-THREAD ERROR 2020-10-28 09:25:17.675362 at clear_intermediate_results_run ]
-----------------------------------------------------
Traceback (most recent call last):
File "cryosparc2_command/command_core/__init__.py", line 146, in run
self.target(*self.args)
File "cryosparc2_command/command_core/__init__.py", line 4135, in clear_intermediate_results_run
rc.clear_intermediate_results(project_uid, job['uid'], always_keep_final=True)
File "cryosparc2_compute/jobs/runcommon.py", line 625, in clear_intermediate_results
output_result = com.query(job['output_results'], lambda r: r['group_name'] == slot['group_name'] and r['name'] == slot['result_name'] , error="No match for %s.%s" % (slot['group_name'], slot['result_name']))
File "cryosparc2_compute/jobs/common.py", line 357, in query
assert res != default, error
AssertionError: No match for imported_particles.blob
-----------------------------------------------------
**custom thread exception hook caught something
**** handle exception rc
Exception in thread Thread-18752:
Traceback (most recent call last):
File "/opt/cryosparc_cluster/cryosparc2_master/deps/anaconda/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "cryosparc2_compute/jobs/runcommon.py", line 1689, in run_with_except_hook
handle_exception(*sys.exc_info())
File "cryosparc2_compute/jobs/runcommon.py", line 1653, in handle_exception
set_job_status('failed')
File "cryosparc2_compute/jobs/runcommon.py", line 139, in set_job_status
cli.set_job_status(_project_uid, _job_uid, status)
AttributeError: 'NoneType' object has no attribute 'set_job_status'
set status to failed
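If I read the traceback correctly, clear_intermediate_results walks the intermediate result slots of the job and looks up a matching entry in the job's output_results; for this job the slot imported_particles.blob has no matching output, so the query helper's assertion fires. The AttributeError that follows looks like a secondary effect: the exception handler calls cli.set_job_status(), but cli appears to be None in that post-response thread, so the job never actually gets marked failed and the progress bar stays stuck. Below is a minimal standalone sketch of the lookup pattern as I understand it, just to illustrate where the assertion comes from; the field names and data are my own guesses, not the real cryoSPARC schema:

```python
# Hypothetical reconstruction of the failing lookup, based only on the
# traceback above. Names and structures are illustrative, not cryoSPARC's.

def query(items, predicate, error="no match"):
    # Return the first item matching the predicate; assert if none does,
    # mirroring the AssertionError raised in common.py's query().
    match = next((item for item in items if predicate(item)), None)
    assert match is not None, error
    return match

# Outputs actually recorded on the job (no 'imported_particles.blob' entry).
job_output_results = [
    {"group_name": "volume", "name": "map"},
    {"group_name": "volume", "name": "mask"},
]

# Intermediate slot that clear_intermediate_results tries to resolve.
slot = {"group_name": "imported_particles", "result_name": "blob"}

query(
    job_output_results,
    lambda r: r["group_name"] == slot["group_name"]
    and r["name"] == slot["result_name"],
    error="No match for %s.%s" % (slot["group_name"], slot["result_name"]),
)
# -> AssertionError: No match for imported_particles.blob
```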
Since this is easily reproducible on my end, let me know if you need more info.