Patch motion continue failed job

So my patch motion job died because the hard drive ran out of space. It was 99.5% finished with nearly 6k micrographs and took 24 hours to get there. Is there any way to salvage the job and continue with the unfinished micrographs, or just continue processing the finished ones? My goal is to move on to local motion correction.

Thanks!

@user123
If you are on v2.12+ (e.g. v2.12.4) you can easily do this:

1. Click on the job card, and in the details pane, click “Mark job as complete”. This marks the failed job as complete.
2. Open up the job; you will see that it now has two outputs: micrographs and micrographs_incomplete.
3. Connect micrographs_incomplete to a new patch motion job, set the parameters, and run it.
4. Connect the micrographs outputs from both the first and second jobs together as inputs to your next job.
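In case it helps: the two output groups are just .cs files (plain numpy record arrays) in the job directory, so you can sanity-check the split from Python before queuing the follow-up job. A minimal sketch, where 'PX/JY' and the file names are illustrative placeholders (check your own project/job directory for the real names):

    # Hedged sketch: count how many movies landed in each output group.
    # cryoSPARC .cs files are numpy arrays on disk, so np.load reads them.
    # 'PX/JY' and the file names below are placeholders, not the real names.
    import numpy as np

    done = np.load('PX/JY/JY_micrographs.cs', allow_pickle=False)
    todo = np.load('PX/JY/JY_micrographs_incomplete.cs', allow_pickle=False)
    print('completed: %d, remaining: %d' % (len(done), len(todo)))

The two counts should add up to the number of movies you originally imported.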


Hi,

This will be very helpful. However, I get an error when I mark the job as complete. Here are the original error and the new error. It may be important that another user cleared intermediate results for this directory yesterday, i.e. after the “job is unresponsive” error and before the “ValueError”.

-- 0.0: processing 5668 of 5700: J759/imported/movie.frames.tif
        loading /mnt/DATA/cryosparc_database/project/P13/J759/imported/movie.frames.tif
        Loading raw movie data from J759/imported/movie.frames.tif ...
Job is unresponsive - no heartbeat received in 30 seconds.

(After the disk ran out of space, I clicked “Mark job as complete” and got the following:)

Cleared 0 bytes by deleting 0 items in 0.42s
Finalizing Job...
Passing through outputs for output group micrographs from input group movies
Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1490, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_command/command_core/__init__.py", line 145, in run
    self.target(*self.args)
  File "cryosparc2_command/command_core/__init__.py", line 4145, in set_job_passthrough_check_outputs_and_completed_run
    rc.passthrough_outputs()
  File "cryosparc2_compute/jobs/runcommon.py", line 726, in passthrough_outputs
    output_dset = load_output_group_direct(_project_uid, _job_uid, output_group_name, outputted_result_names, outputted_result_names)
  File "cryosparc2_compute/jobs/runcommon.py", line 356, in load_output_group_direct
    d = load_output_result_dset(project_uid, output_result, version, slot_name)
  File "cryosparc2_compute/jobs/runcommon.py", line 330, in load_output_result_dset
    d.from_file(abspath)
  File "cryosparc2_compute/dataset.py", line 444, in from_file
    indata = n.load(file, allow_pickle=False)
  File "/home/lab/software/cryosparc2/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/numpy/lib/npyio.py", line 406, in load
    pickle_kwargs=pickle_kwargs)
  File "/home/lab/software/cryosparc2/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/numpy/lib/format.py", line 672, in read_array
    array.shape = shape
ValueError: total size of new array must be unchanged

Hi @user123,

Marking the job as complete will unfortunately only work in this situation if you ran the job in cryoSPARC v2.12+. We suspect the error is related to the disk running out of space: the output file may have been only partially written, which would cause this “total size of new array must be unchanged” error when numpy tries to read it back. Your best bet is to update and re-run this job.
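For what it’s worth, you can usually confirm this diagnosis directly: a partially written .cs file fails to load with exactly this ValueError, because the shape recorded in the file header no longer matches the bytes actually on disk. A minimal sketch (paths are placeholders) that scans a job directory for unreadable files:

    # Hedged sketch: find .cs files that numpy can no longer read, which is
    # what a partially written file looks like after a disk-full crash.
    # 'PX/JY' is a placeholder for the real project/job directory.
    import glob
    import numpy as np

    for path in glob.glob('PX/JY/*.cs'):
        try:
            data = np.load(path, allow_pickle=False)
            print('OK         %s (%d rows)' % (path, len(data)))
        except (ValueError, IOError) as e:
            print('UNREADABLE %s: %s' % (path, e))

Any file flagged here was likely truncated when the disk filled up.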

Thanks, I ran the job again.