Incomplete particle subtraction

Hmmm, I’m not sure - the other thing you might try is a round of 3D classification in RELION without alignments after particle subtraction, with a mask including both domains. That way you might be able to separate out the particles that have been under-subtracted.

Also what parameters are you using for particle subtraction? Lowpass filter etc?

Hi @olibclarke,
Sorry for the late reply.
I previously used these commands in Relion:
relion_reconstruct --angpix 1.06 --i from_csparc_P7J258_AC00.star --ctf --o from_csparc_P7J258_reconstruct.mrc
relion_project --i from_csparc_P7J258_reconstruct.mrc --mask cryosparc_P7_J456_Cdommask.mrc --subtract_exp --angpix 1.06 --ctf --ang from_csparc_P7J258_AC00.star --o Ndom_tightmask_AC00

When you mentioned the lowpass filter, I realized that the mask used for subtraction had been created by taking the local-filtered map, erasing the unwanted domain, lowpass filtering to 10 Å, and binarizing in cryoSPARC.
I now used the volume from NU-refine with Segger to delete the unwanted domain and created a mask without lowpass filtering. This seemed to improve signal subtraction in cryosparc and I have done local refinement using those particles.
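For reference, the kind of mask-building operation I mean is roughly the following - a toy numpy/scipy sketch, not what cryoSPARC actually runs internally, and the threshold/dilation/soft-edge values here are just placeholders:

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def make_soft_mask(vol, threshold, dilate_vox=5, soft_sigma_vox=3):
    """Binarize a density map at a threshold, dilate, then smooth the edge
    so the mask falls off gently instead of cutting density sharply."""
    binary = vol > threshold
    binary = binary_dilation(binary, iterations=dilate_vox)
    soft = gaussian_filter(binary.astype(np.float32), sigma=soft_sigma_vox)
    return np.clip(soft / soft.max(), 0.0, 1.0)

# toy "map": a centred Gaussian blob in a 32^3 box
x = np.linspace(-1, 1, 32)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
vol = np.exp(-(X**2 + Y**2 + Z**2) / 0.1)

mask = make_soft_mask(vol, threshold=0.5)
```

In practice the equivalent steps are done with relion_mask_create or a cryoSPARC Volume Tools job; the point is only that the mask ends up between 0 and 1 with a smooth fall-off rather than a hard binary edge.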

Could I please check the following procedure for local refine with you? I am working remotely from home and am unsure if my method, maps and FSC curves are correct.

I have a consensus map from NU refine at 4.4A res
image

After signal subtraction, I performed local refinement in CS, which improved both domains (5 Å initial lowpass for the first domain, 7 Å for the second domain - otherwise the alignment plots were empty; dynamic masking with 6-18 width; 10°/10 Å search for domain 1; 25°/10 Å search for domain 2; Gaussian prior enabled):

domain 1
image
domain 2
image

I then ran those particles and volumes through RELION 3D classification without alignment (T=20, K=4) with single-domain masks and separated out a few remaining junk particles and lower-resolution classes. These were then re-imported into CS and local refinement performed (force re-do split enabled, initial lowpass 5 Å for both domains, 5° and 2 Å search with Gaussian prior, dynamic masking at 0.2 threshold with 6-18 width).

domain 1
image
domain 2
image
image

Does anything look wrong in the method I followed? At 0.2 threshold the glycans are not included in the dynamic mask and using a lower threshold leads to poorer resolution, as expected. How can you tell if overfitting to noise is present?
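In case it is useful for this question, overfitting usually shows up as an inflated FSC between half-maps; the standard test is to compare the masked FSC against a phase-randomized control (this is what cryoSPARC's "corrected" curve does). The sketch below only computes a plain FSC between two half-maps with numpy - it is illustrative, not cryoSPARC's implementation:

```python
import numpy as np

def fsc(map1, map2, n_shells=16):
    """Fourier Shell Correlation between two half-maps, out to Nyquist."""
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    freq = np.fft.fftfreq(map1.shape[0])
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0, 0.5, n_shells + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (r >= lo) & (r < hi)
        num = np.sum(f1[shell] * np.conj(f2[shell])).real
        den = np.sqrt(np.sum(np.abs(f1[shell])**2) * np.sum(np.abs(f2[shell])**2))
        curve.append(float(num / den) if den > 0 else 0.0)
    return np.array(curve)

rng = np.random.default_rng(0)
signal = rng.normal(size=(32, 32, 32))
noise1 = rng.normal(size=(32, 32, 32))
noise2 = rng.normal(size=(32, 32, 32))
same = fsc(signal + 0.1 * noise1, signal + 0.1 * noise2)  # shared signal: FSC near 1
indep = fsc(noise1, noise2)                               # independent noise: FSC near 0
```

If half-maps that share only noise start showing high FSC at high resolution (as independent noise should not), that is the signature of overfitting.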

This is domain 1 with volume_map and volume_mask_refine from local refine which shows the glycans being cut by the refinement mask. Is that alright to do? It does have a soft edge which covers them at the lowest threshold.

When I look at volume_map (left) vs volume_map_sharp (right) from local refine, the helices at the top in this image look stretched after sharpening. How can I improve this? Does it mean that they are of lower resolution than the rest and now over-sharpened?

Hi @lizellelubbe,

It is often worth lowpass filtering the volume you are going to use for subtraction (in this case I would probably filter to ~4.5Å). You will also want to put a soft edge on the masks you are using for both subtraction and refinement - are you doing that?
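For illustration, a lowpass filter is just a cutoff in Fourier space - here is a minimal numpy sketch (a sharp cutoff for clarity; relion_image_handler and cryoSPARC apply a smooth fall-off at the cutoff, and the 1.06 Å pixel size is taken from the commands earlier in the thread):

```python
import numpy as np

def lowpass_filter(vol, angpix, cutoff_A):
    """Sharp Fourier lowpass: zero all frequencies beyond 1/cutoff_A."""
    f = np.fft.fftn(vol)
    freq = np.fft.fftfreq(vol.shape[0], d=angpix)   # spatial frequency in 1/Angstrom
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    f[radius > 1.0 / cutoff_A] = 0.0                # kill everything past the cutoff
    return np.fft.ifftn(f).real

rng = np.random.default_rng(1)
vol = rng.normal(size=(24, 24, 24))
filtered = lowpass_filter(vol, angpix=1.06, cutoff_A=4.5)
```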

I notice that you are using dynamic masking in local refinement - in my experience with the new local refinement, one gets better results with a user-supplied static mask with a very soft edge. Have you tried that?

Also I typically use smaller searches with a gaussian prior - it depends on the situation, but for small particles I often find that searching 3deg/1A with priors of 9deg/3A is a good starting point.
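Conceptually, a Gaussian prior just down-weights trial poses as they move away from the starting orientation, on top of a hard search limit. A toy sketch using the numbers above (this is purely illustrative - cryoSPARC's actual local refinement parameterization is more involved):

```python
import numpy as np

def prior_weight(delta_deg, sigma_deg=9.0, search_deg=3.0):
    """Gaussian prior weight for a trial rotation offset; zero outside the
    hard search range. Defaults mirror the 3 deg search / 9 deg prior above."""
    if abs(delta_deg) > search_deg:
        return 0.0
    return float(np.exp(-0.5 * (delta_deg / sigma_deg) ** 2))
```

With these numbers the prior is nearly flat inside the small search range, so it mainly acts as a gentle restraint against drifting to the edge of the search.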

I agree that your volumes do look a bit oversharpened and overrefined - you might find refining with a static mask helps with that.

Cheers
Oli

Hi @olibclarke,

Thanks for those suggestions - I really appreciate it. I knew something was wrong but ran out of things to try.

Do you mind explaining what the reasoning would be behind using a 4.5A lowpass of the input volume for subtraction? After the advice from Michael further up in this thread I switched to no lowpass.

My mask for subtraction had no lowpass applied (this was recommended in a video tutorial but I’m not sure if that was correct), with a dilation of 5 and padding of 10.

Thinking about the subtraction mask again, I only based the mask on the protein density and am now wondering whether using such a mask could have caused overfitting, through correlations at the protein surface where the glycans are cut?

I always use a soft edge of 10-12A for my refinement mask but again, don’t know whether cutting the glycans could cause artifacts so have used a low threshold of 0.06. Is it alright to use an initial lowpass just above the consensus resolution for the local refinement input volume?

I have tried static masking with a 10A edge but saw no improvement. Perhaps the problem is still with signal subtraction and I should optimize refinement after repeating that.

I haven’t played around with the gaussian prior settings but will give it a try!

Hi @lizellelubbe,

What Michael said is true in cryoSPARC, but not to my knowledge true in RELION - if you want to apply a lowpass filter to the volume you want to use for subtraction, you will need to do that yourself. I would usually filter to approx the resolution of the region you are subtracting, it generally gives cleaner results. To be clear, I am talking about applying a lowpass filter to the volume you want to project and subtract, not the mask.

Cheers
Oli

Hi @lizellelubbe,

Just another thought - I wonder if the flexibility between the two domains is leading to incomplete particle subtraction (because the orientation of each domain with respect to the reference that you are projecting and subtracting varies from image to image).

Perhaps you might try the following:

  1. After NU-refine, perform local refinements of each domain (let’s call them domain A and domain B).

  2. Perform particle subtraction for domain A, using the particle set from local refinement. Hopefully, subtraction will be more complete, because you are aligning on the region you are going to subtract. However, the domain you want to refine is now blurred out, with fairly bad starting angles, so let’s fix that:

  3. Perform local refinement using the particles from step 2, but the orientations from NU refine, with a mask around the domain you have not subtracted. You can supply the orientations by replacing the low level results group alignments3D with the one from NU-refine, using the low level results interface (see: https://guide.cryosparc.com/processing-data/tutorials-and-case-studies/job-builder-tutorial#fine-tuned-control-over-individual-results). Basically, drag the alignments3D result group in the output section of the NU-refine job onto the corresponding slot in the job builder for your local refinement job (you will need to expand the particles group by clicking the little down arrow to see it):
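To illustrate why step 2 helps: subtraction removes a projection of the reference at the particle's assigned pose, so if the domain has moved relative to that pose, a residual is left behind. A toy 2D-projection sketch with numpy/scipy (no CTF or shifts - purely illustrative, not what cryoSPARC actually computes):

```python
import numpy as np
from scipy.ndimage import rotate

def project(vol, angle_deg):
    """Toy projection: in-plane rotation then sum along z (no CTF, no shifts)."""
    rot = rotate(vol, angle_deg, axes=(0, 1), reshape=False, order=1)
    return rot.sum(axis=2)

# toy "domain": an off-centre blob we want to subtract
x = np.linspace(-1, 1, 24)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
vol = np.exp(-((X - 0.3) ** 2 + Y**2 + Z**2) / 0.05)

true_angle = 20.0
image = project(vol, true_angle)

# correct pose: subtraction removes the domain essentially completely
clean = image - project(vol, true_angle)

# pose off by 30 deg (domain moved relative to the consensus reference):
# a substantial residual is left behind
residual = image - project(vol, true_angle + 30.0)
```

Aligning on the domain before subtracting it (step 2) is exactly what shrinks that pose error per particle.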


Hi @olibclarke,
This is a great idea, thank you so much :slightly_smiling_face:
I was really struggling this week and was almost ready to give up on local refinement! Based on my 3dva results it definitely could be that the hinging is causing incomplete subtraction. I will try your method of local refine before subtraction tomorrow morning. Hopefully that does the trick!


Let me know how it goes! :slight_smile:

Also for checking on the results of subtraction, don’t forget there is now a Homogeneous Reconstruction job - so you can directly perform a reconstruction of the subtracted particles (with orientations supplied as described above from the NU-refine job), without having to refine them. The result of this may also be useful as an initial model for your subsequent local refinement.
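As a quick numerical sanity check on such a reconstruction, one can measure how much image power is left inside the region that was supposed to be subtracted. This is not a cryoSPARC feature - just an illustrative numpy metric with hypothetical inputs:

```python
import numpy as np

def residual_fraction(original, subtracted, region_mask):
    """Fraction of image power remaining inside the subtracted region.
    Near 0 = clean subtraction; near 1 = subtraction did essentially nothing."""
    num = np.linalg.norm(subtracted * region_mask)
    den = np.linalg.norm(original * region_mask)
    return float(num / den) if den > 0 else 0.0

# toy example: a "subtracted" image that still holds 10% of the original signal
rng = np.random.default_rng(0)
original = rng.normal(size=(64, 64))
subtracted = 0.1 * original
region = np.ones_like(original)          # mask over the subtracted region

frac = residual_fraction(original, subtracted, region)
```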

Cheers
Oli


Oh yes, I completely forgot about the Homogeneous Reconstruction job type. Will definitely use that. Thank you!

Just to check one thing in that workflow, would I use the 2-domain volume from consensus NU refine as reference when doing local refine of a single domain w/o subtraction? Or should it be a single-domain reference?

I would use the two domain volume straight out of the consensus refinement as the starting point for local refinement without subtraction, but it shouldn’t make a huge difference either way

Okay. I only asked because I was previously using a Segger volume from the Extract Densities output, and noticed today that when I lowpass it using a Volume Tools job it actually has outward ringing in real-space slices. This is also visible in Chimera as a shell of density around the protein, and I was wondering whether that was incorrect and I should have used the consensus NU volume as reference for refinement with subtraction. The reason I chose the Segger one before was that I didn’t want a reference at 5 Å lowpass, with noise eventually being fitted to the reference where the domain was subtracted.

I would use a direct reconstruction from the subtracted particles as your initial model for refining after subtraction - that should be the best initial model I would think, with minimal extra density in the region you have subtracted.


I wouldn’t typically use a volume treated like you describe as an initial model - particularly not when you have to use such a minimal lowpass to initialize local refinement. I would worry that any (presumably binary or at least fairly harsh) masking that Segger is doing might cause problems during refinement.

Using a volume generated by Homogeneous Reconstruction from the subtracted particles should be a straightforward way to address that problem.

Cheers
Oli

It definitely looked like Segger was doing something strange today even when I added a soft fall-off to the volume. I will stick to the reconstruction output and use that as initial reference. If the other domain is blurred it probably won’t have much of an effect.

You can use the refinement output, or just use a reconstruction from the subtracted particles, which has the advantage that the other domain will be already subtracted in the initial model.

Ah yes I get it. The consensus refinement as ref for local w/o subtraction to focus the subtraction, then homo reconstruct after subtraction and use that output as ref for local. Sorry it is midnight here and it’s been a long day so I wasn’t thinking clearly :see_no_evil:


yep, you got it! :smiley:

Hi @olibclarke,

I followed your advice for the particle subtraction and it was a really good idea :slight_smile:
The local refine w/o subtraction gave good alignment of the domain to be subtracted and although CS still did not remove all the density, it was much better than before.

Align of bottom domain w/o subtract:
image
align of top domain w/o subtract:
image

The particles, volume and mask from this local refinement of the top domain were used for subtraction using default settings in CS.

Reconstruct using subtracted ptcls with align3D from consensus NU refine and mask around bottom domain (a bit of top domain remained):
image

I ran local refinement on these particles using the reconstructed volume as initial model but got overfitting issues. I have tried to optimize the mask, lowpass, priors, and search range. It seems like I need a wide mask to avoid shells of density forming around each glycan, but this wide mask seems to cause overfitting to noise - maybe because it includes so much solvent?

Local refine with a lowpass 10, dilate 5, pad 16 mask which covered the glycans:


To me, the streaks in these slices looked like overfitting to noise.

I therefore decided to run subtraction on the subtracted particles (after CS local refine) in RELION. I ran one iteration of Class3D without alignment to create an optimiser.star file for RELION subtraction (GUI), then ran particle subtraction with a mask around the domain to keep. I created the mask so that the glycans are mostly excluded, but used a softly padded edge. A subsequent reconstruction in CS showed minimal glycan density and I could locally refine the single domain.

Reconstruct output after Relion subtraction (AC0.1):
image

This was local refinement without glycans (3°/2 Å search, 9°/3 Å priors, initial lowpass 5 Å):

For the protein-only refinement after RELION subtraction, I had to use the alignments from the local refinement where glycans were present, since overwriting the alignments3D field after importing the RELION-subtracted particles gave an error about missing fields. Subtracting glycans using the consensus refinement's alignments3D didn't seem like a good idea, since this may cause incomplete glycan subtraction. So if overfitting to noise happened when the glycans were there, it could have been carried over. I used the reconstructed volume with the initial consensus alignments3D as reference for the protein-only refinement, so maybe that minimized it. This last protein-only refinement looks better to my eyes - I just hope the reviewers will agree one day :wink:

Thanks for all the help Oli and sorry for the long-winded post!


Glad it helped! I agree your last map looks improved - it looks like there are some clear secondary structure features there now?

Not a suggestion re cryosparc per se, but in some cases I have found deepEMhancer (https://github.com/rsanchezgarc/deepEMhancer) useful for postprocessing in cases like this. It can be quite effective for improving map isotropy, and removing overfitting streaks. Don’t take the resultant maps as gospel though - it incorporates some kind of dust removal routine which can chew up ligands.

What is the molecular weight of each domain?

If you have plenty of particles, you may also be able to separate out some poorly aligned or subtracted particles by running class3D skipping alignments in relion (I would start with K=6, T=40), starting from your final subtracted particle stack.

Good luck!! :slight_smile:

Cheers
Oli
