Denoiser (or Patch Motion) should output training mics as a separate slot

This would make it possible to check that they are representative of the dataset (and don't, for example, include a lot of high-contrast junk). Currently I don't think there is a way to see which micrographs are set aside for training the denoiser. Are they chosen randomly, or are they just the first, e.g., 200?


I’d like to second this (albeit a bit late!)

Hi @olibclarke and @rbs_sci,

Thanks for the feedback! By default, 100 micrographs are chosen at random for denoiser training. If you wanted, you could query the micrograph dataset on the field micrograph_blob_non_dw_AB using cs-tools to see which micrographs were used for training. We have found that the inclusion of very poor micrographs still produces a good denoising model, and we do not expect that the 100 chosen micrographs would include enough junk to really throw off the results of micrograph denoising.
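As a minimal sketch of that query: .cs files are numpy structured arrays, so you can filter on the A/B field directly. The field layout, file names, and paths below are illustrative stand-ins, not the full exposures schema; in practice you would load your job's exported exposures dataset (e.g. with cs-tools or `np.load`) instead of building one by hand.

```python
import numpy as np

# Synthetic stand-in for an exposures dataset as stored in a CryoSPARC .cs
# file (a numpy structured array). The real dataset has many more fields;
# only the ones needed for this query are mocked up here.
exposures = np.array(
    [
        (1, b"mics/mic_0001.mrc", b"denoise/mic_0001_AB.mrc"),
        (2, b"mics/mic_0002.mrc", b""),
        (3, b"mics/mic_0003.mrc", b"denoise/mic_0003_AB.mrc"),
        (4, b"mics/mic_0004.mrc", b""),
    ],
    dtype=[
        ("uid", "<u8"),
        ("micrograph_blob/path", "S64"),
        ("micrograph_blob_non_dw_AB/path", "S64"),
    ],
)
# With a real export you would load the file instead, e.g.:
# exposures = np.load("exposures.cs")  # hypothetical filename

# Micrographs used for denoiser training are the ones with a populated
# A/B half-set pair; everything else has an empty path in that field.
training = exposures[exposures["micrograph_blob_non_dw_AB/path"] != b""]
for row in training:
    print(row["micrograph_blob/path"].decode())
```

Filtering on the path being non-empty is the key idea; whichever rows carry an A/B pair are the ones the denoiser saw during training.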

In any case, if you wanted to select the mics for denoiser training yourself, you could use Curate Exposures to pick the mics you would like to use, rerun them through Patch Motion Correction to generate the A/B mics for training, and then take the mics from preprocessing and replace the micrograph_blob_non_dw_AB low-level input (LLI) with the new one from the curated mics. This gives you control over what the denoiser is trained on. We would be curious to see whether your results differ significantly from the 100 random micrographs.
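At the dataset level, that replacement amounts to copying the curated A/B paths into the full preprocessing dataset, matched by uid. The sketch below shows just that merge step on synthetic structured arrays; the field name comes from the post above, while the array contents and paths are illustrative. In practice you would do this through cs-tools against your live project rather than on hand-built arrays.

```python
import numpy as np

dtype = [("uid", "<u8"), ("micrograph_blob_non_dw_AB/path", "S64")]

# Full preprocessing dataset: no A/B training pairs assigned yet.
full = np.array(
    [(1, b""), (2, b""), (3, b""), (4, b"")],
    dtype=dtype,
)

# Curated subset re-run through Patch Motion Correction, which produced
# fresh A/B mics for training (paths are made up for illustration).
curated = np.array(
    [(2, b"rerun/mic_0002_AB.mrc"), (4, b"rerun/mic_0004_AB.mrc")],
    dtype=dtype,
)

# Copy the curated A/B paths into the full dataset, matching rows by uid,
# so only the curated micrographs carry training pairs.
idx = {uid: i for i, uid in enumerate(full["uid"])}
for row in curated:
    pos = idx[row["uid"]]
    full["micrograph_blob_non_dw_AB/path"][pos] = row[
        "micrograph_blob_non_dw_AB/path"
    ]
```

After the merge, only the rows corresponding to curated mics have a non-empty A/B path, which is what restricts the denoiser's training set.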

Best,

Kye


Thanks @kstachowski :slight_smile:

Right now I've got the first dataset I've ever had where the pre-trained denoiser does a better job than a self-trained one… I don't have time to experiment further with this right now, but I want to in the future.
