I wondered whether it is possible to back-project 2D classes onto their original micrographs (or, better, onto denoised micrographs), as has been done in the “in silico reconstitution” approach described by the Costa lab (https://www.sciencedirect.com/science/article/pii/S0959440X21001652). This would be a fantastic tool for interpreting the original EM density of highly flexible/heterogeneous samples.
I like to think of it in the same way that subtomogram averages can be placed back into a tomogram to annotate the original data with a high signal-to-noise average (but in 2D instead of 3D). The goal would be to gain insight into very flexible or heterogeneous samples. For example, in the original publication by the Costa lab, this was used to visualize the orientations of two DNA helicases linked by a flexible DNA segment: each helicase could be refined separately to high resolution, but the entire assembly could not be resolved by SPA. Another application would be a scenario where different species (complexes, conformations…) in a dataset bind to the same structure (say, a vesicle), and you ask whether the different species mix randomly or form homotypic clusters.
You might be able to get what you need quite simply using “Inspect Particle Picks” and providing subsets of particles. Take a subset of 10 micrographs and the picks of interest from them; the job will track the provided particles and label them back onto the micrographs. Provide different selections of particles (say, from distinct 2D classes) and run the job multiple times. Then overlay the outputs with semi-transparency to see the locations of picks of type A relative to picks of type B.
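As a rough sketch of the overlay step (the filenames are placeholders; here random arrays stand in for the pick images you would export from the two Inspect Particle Picks jobs):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

# Stand-ins for images exported from two Inspect Particle Picks jobs;
# in practice, load them with matplotlib.image.imread("picks_classA.png") etc.
img_a = np.random.rand(512, 512)
img_b = np.random.rand(512, 512)

fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(img_a, cmap="Greys_r")
ax.imshow(img_b, cmap="Reds", alpha=0.5)  # semi-transparent second layer
ax.axis("off")
fig.savefig("overlay.png", dpi=200)
```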
Hi, I am not sure I understand. The goal is to take the rotations determined for each particle and apply their inverse to the 2D classes, to get a high signal-to-noise single-particle image in the context of the micrograph. The Inspect Particle Picks job only shows the locations as circles, or am I missing something?
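In plain numpy/scipy, roughly what I have in mind looks like this (a minimal sketch with hypothetical particle fields, not cryoSPARC's actual data layout; the sign of the in-plane angle depends on the software's convention):

```python
import numpy as np
from scipy.ndimage import rotate

def paste_class_averages(mic_shape, class_avgs, particles):
    """Paste back-rotated 2D class averages onto a blank canvas the size of
    the micrograph.  `particles` uses hypothetical keys: "class" (index into
    class_avgs), "psi" (in-plane angle, degrees), "x"/"y" (particle center)."""
    canvas = np.zeros(mic_shape, dtype=np.float32)
    for p in particles:
        avg = class_avgs[p["class"]]
        # undo the in-plane rotation found during alignment
        # (angle sign convention varies between packages)
        img = rotate(avg, -p["psi"], reshape=False, order=1)
        h, w = img.shape
        y0, x0 = int(p["y"]) - h // 2, int(p["x"]) - w // 2
        # clip the paste region to the canvas bounds
        ys = slice(max(y0, 0), min(y0 + h, mic_shape[0]))
        xs = slice(max(x0, 0), min(x0 + w, mic_shape[1]))
        canvas[ys, xs] += img[ys.start - y0:ys.stop - y0,
                              xs.start - x0:xs.stop - x0]
    return canvas
```

The same canvas could then be blended over the (denoised) micrograph for visualization.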
Thanks a lot. If you have pseudo-code for the workflow in mind, I could try to get it done using cryosparc-tools, but I am not sure I can put it together on my own.
Thanks for the help and tracking the request!
Best,
Matthias
No, your request is more complex than my suggestion covers. cryosparc-tools seems right.
If you know the high-resolution 2D image (because you select only one view/2D class) and you know the locations of those particles in a micrograph from the circles, and you separately know the high-resolution 2D image of a different view (selecting only that other 2D class) and its locations from the circles in a new job, then you can overlay the two pieces of information with transparency to show whether view 1 and view 2 are near each other. But I think you also want the in-plane rotations of the particles, to tell which direction they face in the micrograph, and for that I don’t have an answer. In theory it would be possible with 2D classification without alignment, but that isn’t available.
If one used projections, you could probably use the same code to generate signal-subtracted micrographs, which would be useful both for diagnostics (are the particle orientations/scales/defoci correct?) and for analysis of complex mixtures (re-picking after removing the signal of dominant species).
@rposert is it possible to use CS tools to output modified micrographs?
It is certainly possible (micrographs are just arrays, after all). It would be a significant undertaking to make sure that all of the various background subtractions, normalizations, and pixel sizes were properly taken into account. You would also want to apply the CTF to your projections/class averages, and then decide how far out you want to subtract delocalized signal. You may also want to do this only for particles for which your classification and pose are very confident…etc…
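For illustration, a bare-bones CTF multiplication in Fourier space might look like the sketch below (no astigmatism, no envelope function, and CTF sign conventions vary between packages; the defocus, pixel size, and particle position are placeholder values, not anything cryoSPARC computes for you):

```python
import numpy as np

def apply_ctf(img, defocus_um, pixel_size_A, voltage_kV=300.0,
              cs_mm=2.7, amp_contrast=0.07):
    """Multiply a square image by a simplified CTF in Fourier space
    (no astigmatism, no envelope; sign convention is illustrative)."""
    n = img.shape[0]
    freq = np.fft.fftfreq(n, d=pixel_size_A)           # spatial frequency, 1/Angstrom
    fy, fx = np.meshgrid(freq, freq, indexing="ij")
    s2 = fx**2 + fy**2
    # relativistically corrected electron wavelength in Angstrom
    wavelength = 12.2639 / np.sqrt(voltage_kV * 1e3 + 0.97845 * voltage_kV**2)
    gamma = (np.pi * wavelength * (defocus_um * 1e4) * s2
             - 0.5 * np.pi * (cs_mm * 1e7) * wavelength**3 * s2**2)
    ctf = -np.sqrt(1.0 - amp_contrast**2) * np.sin(gamma) - amp_contrast * np.cos(gamma)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ctf))

# sketch of subtracting one CTF-modulated class average from a micrograph crop
micrograph = np.random.rand(512, 512).astype(np.float32)
class_avg = np.ones((64, 64), dtype=np.float32)
proj = apply_ctf(class_avg, defocus_um=1.5, pixel_size_A=1.0)
y0, x0 = 100, 200                                      # hypothetical particle corner
micrograph[y0:y0 + 64, x0:x0 + 64] -= proj             # per-particle scale omitted
```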
Once you have made your modified micrographs, you could write them out with (say) mrcfile and import them as usual. From there, you could pick fresh or use Reassign Particles to Micrographs to use your existing picks.
I’m not sure I, personally, would take this approach without a very good reason.
Thanks for the reply! I managed to put some scripts together to back-project 2D classes onto denoised micrographs in RELION, using the starparser Python package and relion_particle_reposition (`relion/src/apps/particle_reposition.cpp` on the 3dem/relion GitHub). For this, I exported star files from cryoSPARC refinements and then performed 2D classification without image alignment in RELION to get the projections. In case anyone is interested, I am happy to share my scripts.