I often want to provide multiple inputs to a single low-level input slot. For instance, I'll extract unbinned particles, and then for an NU-refinement job I want to replace the CTF slot of all the unbinned extracted particles with particle sets from other jobs. As it stands, we can only replace a slot (the extraction job's CTF, say) with a single other slot (the output of one class from a 3D classification, for instance).
Hi @CryoEM2,
Are you referring to a workflow where you would:
- Extract at full apix (set A) and downsampled (set B)
- Classify set B and refine each class independently
- Return to set A, replacing poses/CTF info with each class's information determined in step 2 (i.e. needing to input the poses for particles in class 1, class 2, etc. into a single job)?
Thanks,
Kye
Yes.
I had tried using set B but inserting the CTF or blobs from set A (which to me is much more intuitive: blob/locations/alignments all from the downsampled "progress", but the CTF swapped in from the full apix), but this does not work.
Similarly, returning to set A and replacing any slot besides CTF does not work.
So routinely I return to setA and replace only the CTF slot to limit the job to the number (and identity) of particles from setB.
I should mention that there are several workarounds that involve running more jobs, but this points towards a simple feature request: either allow multiple additions to a low-level slot, as above, OR make a wholesale change to use the set B particles (with all of their information) but with the set A pixel size/extraction. In the latter case, many set B outputs could be given to a single NU-refinement and each of them given the set A treatment in a single slot.
Hi @CryoEM2,
In the latter case, many set B outputs could be given to a single NU-refinement and each of them given the set A treatment in a single slot.
This is possible by connecting multiple particle sets (e.g. setB.1, setB.2, setB.3, etc.) to a job that takes multiple particle inputs (e.g. NU-refinement) and then, in the low-level inputs interface, replacing the particle.blob slot of each connected particle set (e.g. setB.1) with the particle.blob output of setA. The job will then use ALL of the setB.x metadata and the full-size particles from setA, but only those particles whose metadata and particle.blob are common between setA and setB.x. It is possible that, through processing of the setB.x particle sets, the alignments of common classes might differ, so I would recommend a workflow where, before doing the above step, you perform align3D with the common classes to ensure everything is properly aligned.
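If it helps to see what "common between setA and setB.x" means in practice, here is a minimal sketch using cryosparc-tools to compare exported particle datasets by UID, which is the field the low-level input override intersects on. The .cs file paths are hypothetical placeholders, and this only reports counts; it is an illustration of the intersection behaviour, not part of the job itself.
```python
# Sketch: count how many particles a job would actually use after swapping
# particle.blob from setA into a setB.x connection. Assumes cryosparc-tools
# is installed and that the .cs exports below exist (paths are placeholders).
import numpy as np
from cryosparc.dataset import Dataset

set_a = Dataset.load("JXX_setA_unbinned_particles.cs")   # particle.blob source (full apix)
set_b1 = Dataset.load("JYY_setB1_class_particles.cs")    # poses/CTF/metadata source (binned class)

# The job keeps only particles whose UIDs appear in both datasets
common_uids = np.intersect1d(set_a["uid"], set_b1["uid"])
print(f"setA: {len(set_a)}, setB.1: {len(set_b1)}, used by the job: {len(common_uids)}")
```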
Depending on which stage of processing you are at, you might consider beginning your processing by extracting only particle set B, fully processing it, filtering out junk particles and establishing the various classes of good particles, and then re-extracting just the good sets of particles at full size. This offers a few benefits, such as removing duplicate particles that converge from the initial picks, better particle centering, and less overhead in terms of data storage.
Best,
Kye
That is what I want, and I thought it was not possible. I will update when there are resources free so I can quickly test. I thought I had tried that, and although the number of particles was correct (it operated on only those which intersect), the pixel size was stuck at the setB size. Will report back soon.
Adding the setB (binned) particles from classification and swapping in the particle.blob from setA (the unbinned extraction of too many particles) works well. I would have sworn it was the first, most obvious thing I tried, but alas. Thank you!
Awesome, I'm glad you got it working!