When linking the particles output of queued jobs, subsequent jobs fail with the following error message:
```
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 62, in cryosparc_master.cryosparc_compute.run.main
  File "/cryosoft/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1117, in check_default_inputs
    assert False, 'Non-optional inputs from the following input groups and their slots are not connected: ' + missing_inputs + '. Please connect all required inputs.'
AssertionError: Non-optional inputs from the following input groups and their slots are not connected: particles.blob. Please connect all required inputs.
```
Once the jobs are completed, linking the particles output to subsequent jobs works fine.
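For context, the assertion says that at launch time the job’s `particles` input group has no connection filling its required `blob` slot. A rough, illustrative sketch of that kind of launch-time check (not CryoSPARC’s actual code, just to show what the message means):

```python
# Illustrative sketch only -- not CryoSPARC's implementation.
# Models a launch-time check that every non-optional input slot
# (e.g. particles.blob) has at least one connection.

def check_required_inputs(input_groups):
    """input_groups: list of dicts like
    {"name": "particles",
     "slots": [{"name": "blob", "optional": False, "connections": []}]}
    """
    missing = []
    for group in input_groups:
        for slot in group["slots"]:
            if not slot["optional"] and not slot["connections"]:
                missing.append(f"{group['name']}.{slot['name']}")
    assert not missing, (
        "Non-optional inputs from the following input groups and their "
        "slots are not connected: " + ", ".join(missing) +
        ". Please connect all required inputs."
    )

# Example: a particles input whose blob slot lost its connection
try:
    check_required_inputs([
        {"name": "particles",
         "slots": [{"name": "blob", "optional": False, "connections": []}]},
    ])
except AssertionError as e:
    print(e)  # -> ... not connected: particles.blob ...
```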
Thanks for reporting! Can you specify which jobs you tried to connect to/from? Was this run as part of a workflow, or manually via drag-and-drop, quick actions, or the cart?
It happens with manual drag-and-drop linking. For me, it happened after linking particles from NU-refine to other job types such as Global CTF refinement, another NU-refine, or heterogeneous refinement.
To add, this has happened to me as well. Most often with output from exposure splits (by beam tilt), and from Global CTF after exposure splits (by beam tilt), going into an NU Refine.
We’ve tried to reproduce this internally without success. For example, given J1, an NU Refine job with particles connected (including particles.blob), it’s possible to queue J1 and, while J1 is still in ‘queued’ status, successfully connect the particle output of J1 to another job such as NU Refine, J2.
Yes, there is no problem in connecting the particles.blob.
However, once J1 (in your example) starts to run and subsequently finishes, J2 (which is waiting for J1 to finish) fails on initialization with the reported error message.
At this point, if you make the connection again from J1 to J2, J2 runs OK.
So the problem is with connecting particles from a queued job, and it appears only at the initialization step.
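If it helps with reproduction, the same sequence can also be scripted with cryosparc-tools. A minimal sketch, assuming placeholder credentials and project/job UIDs, and assuming the job-type string (`nonuniform_refine_new`) and the output/input group name (`particles`) match the installation; adjust these for the actual instance:

```python
# Sketch of the reported sequence using cryosparc-tools.
# Credentials, UIDs, job-type strings and output names are assumptions.
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(
    license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # placeholder
    host="localhost", base_port=39000,
    email="user@example.com", password="password",
)
project = cs.find_project("P1")          # hypothetical project
upstream = project.find_job("J100")      # job that provides particles

# J1: NU-refine, connected and queued (still in 'queued' status afterwards)
j1 = project.create_job("W1", "nonuniform_refine_new")   # assumed type string
j1.connect("particles", upstream.uid, "particles")        # assumed output name
# (volume/mask inputs omitted for brevity; connect them the same way)
j1.queue(lane="default")

# J2: connect J1's particle output while J1 is still queued, then queue J2.
# Per this report, J2 later fails at initialization with the
# "particles.blob ... not connected" assertion once J1 finishes.
j2 = project.create_job("W1", "nonuniform_refine_new")
j2.connect("particles", j1.uid, "particles")
j2.queue(lane="default")
```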
take J100, do homogeneous reconstruction (say, J101): succeeds
take J100, do exposure group utilities to split optics groups (say, J102): succeeds
take J102 as particle input, J101 for reference and mask for Global CTF (say, J103): fails with particles error
Even if J102 has completed successfully, if J103 is only cleared and restarted, it errors again immediately, complaining about particles.
If J103 is cleared and J102 is removed from the particle input and re-added (after J102 has completed successfully), it succeeds (a scripted version of this workaround is sketched below).
I haven’t tried whether it will happen if the exposure split is “split” into one group; the minimum I usually do is 9, occasionally as high as 121 if I’ve really pushed the beam shift acquisition.
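For completeness, the workaround above (only re-making the particle connection once the upstream split job has completed) can also be scripted with cryosparc-tools. Again a sketch only; the UIDs, output group name and credentials are placeholders, and the stale connection is assumed to have been removed beforehand (e.g. in the UI):

```python
# Sketch of the workaround: connect J102's particle output to J103
# only after J102 has completed, then queue J103 again.
# Credentials, UIDs and the output group name are assumptions.
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
               host="localhost", base_port=39000,
               email="user@example.com", password="password")
project = cs.find_project("P1")

j102 = project.find_job("J102")   # Exposure Group Utilities (split by beam tilt)
j103 = project.find_job("J103")   # Global CTF job that failed

j102.wait_for_done()              # make the connection only after J102 finishes
j103.clear()                      # reset the failed J103
# Re-add the particle connection (assumes the stale one was removed first
# and that the split output group is named "split_0" -- adjust as needed):
j103.connect("particles", j102.uid, "split_0")
j103.queue(lane="default")
```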