Refinement strategies (density for expression tag appears in half the particles)

Hi all,

I have a 4.02 Å volume (call it V1, 61k particles) from non-uniform refinement, and the map looks great.

I still tried an ab-initio reconstruction of the 61k particles, followed by heterogeneous refinement into two volumes, and got one volume (V2, 4.36 Å, 29k particles) similar to V1, and another volume (V3, 4.29 Å, 32k particles) that is similar to V1 and V2 in the common region but has an extra blob of density for the expression tag. I ran a 3D map alignment job (inputs V1, V2, V3); the difference map shows that the major difference among the three maps is the extra tag density in V3.

What is a better strategy to refine these maps? I am not interested in the flexible tag, but it is in my protein construct. Since the common parts of V2 and V3 are the same, I am tempted to process them together. Would the extra tag density in V3 affect the refinement of the V1 map? Do I need to mask out the extra tag density for non-uniform refinement of the V1 volume?

This is my first good SPA dataset and I am still at the beginning of the learning journey. Any advice is appreciated!

Thanks very much.


Hi @yy314,

If I understand correctly, I think this is a very common scenario. Usually, when there is heterogeneity due to association/dissociation of something (tag, subunit, Fab, etc.) but you are not interested in the part that is partially present (the tag), the best strategy is to combine all particles. This maximizes the signal available for reconstruction of the common parts that you do care about.
The caveats are:

  • If the association/dissociation is coupled with a conformational change of the common part, then it may be worth keeping the particles separate, since combining them will average the two conformations. This is probably not a substantial effect with a tag, though.
  • If the molecular weight of the part you don’t care about (tag, Fab, etc.) is similar to or greater than that of the part you do care about, then it may be better to keep the particles separate, refine both sets, do signal subtraction of the tag in one set, and then combine the particles and run a refinement or local refinement. This is because the extra density (where there is substantial mass) could make it hard to align all the particles (with tag and without tag) to the same reference structure: aligning with-tag particles to a no-tag reference could mess up the alignments. But this would be a rare case, e.g. if you are working with very small proteins to begin with. A minimal sketch of the subtraction idea follows below.
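For intuition, signal subtraction removes the CTF-modulated projection of the unwanted density from each particle image. Here is a minimal sketch of that idea on synthetic 2D data; it is purely illustrative and not CryoSPARC’s implementation (the real job uses each particle’s refined pose and per-particle CTF):

```python
# Illustrative sketch of signal subtraction on synthetic 2D data.
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Stand-ins for projections of the 150 kDa core and the 40 kDa tag
core = np.zeros(shape); core[20:44, 20:44] = 1.0
tag = np.zeros(shape); tag[8:16, 8:16] = 0.5

# Toy radially symmetric CTF-like modulation (illustrative only)
fy = np.fft.fftfreq(shape[0])[:, None]
fx = np.fft.fftfreq(shape[1])[None, :]
ctf = np.sin(300.0 * np.pi * (fx**2 + fy**2))

def apply_ctf(img, ctf):
    """Modulate an image by the CTF in Fourier space."""
    return np.fft.ifft2(np.fft.fft2(img) * ctf).real

# A "particle image": CTF-modulated projection of core + tag, plus noise
particle = apply_ctf(core + tag, ctf) + 0.3 * rng.standard_normal(shape)

# Subtraction: remove the CTF-modulated projection of the tag-only map,
# leaving (ideally) just the core signal plus noise for refinement
subtracted = particle - apply_ctf(tag, ctf)
```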

Once you combine the particles, you will have to empirically determine the best strategy. It sounds like you have already tried the simplest approach, which is to throw all the particles into a non-uniform refinement. After this, you may also wish to try creating a mask that excludes the tag, and then doing a local refinement using that mask and all the particles from the non-uniform refinement, in case this slightly helps alignments. But you have to try; it is hard to say whether it will help or not.


@apunjani Thanks very much for your comprehensive advice.

I only found these two classes after a few rounds of ab-initio and heterogeneous refinement to throw away other ‘rubbish’ particles. My tag is smaller than the rest of the protein (150 kDa for the rest of the protein, 40 kDa for the tag). Anyway, I shall do a signal subtraction and local refinement and see. Does local refinement work better when the masked area is relatively larger?

Hi @yy314,

Yes, local refinement works best when the masked area is relatively large (>= 150 kDa) and can be reasonably approximated as a “rigid” unit. In this case, considering that the region of interest is 150 kDa and the tag is only 40 kDa, using local refinement and masking only the 150 kDa region would be a good strategy.

Just a note on signal subtraction: it is often not a strictly necessary step, as the signal that would be subtracted can often be treated as “noise” (this is especially true if the region you would subtract is only 40 kDa; it likely has minimal effect on the alignment, given that it is less than a third of the mass of the region of interest). Before doing signal subtraction, you may want to simply try a local refinement using the particles/volume directly from your initial V1 non-uniform refinement, along with a mask covering the 150 kDa region. More information on how to best use local refinement can be found in our case study, which details how to create masks using UCSF Chimera, etc.

Best,
Michael

@apunjani @mmclean

Thanks very much for your advice.

Here is some feedback after trying the signal subtraction and local refinement.

My tag is smaller than the rest of the protein (150 kDa for the rest, 40 kDa for the tag). I should have mentioned that the tag density is very weak; it can only be seen when the contour level of V1 and V3 is set much lower than the optimal viewing level in Chimera.

My starting point:

  1. the whole set of particles (61k, call it P1) and its non-uniform refinement map (4.02 Å, call it V1);
  2. one subset of particles (29k, P2) and its non-uniform refinement map (4.36 Å, V2);
  3. another subset of particles (32k, P3), with an extra blob of tag density, and its non-uniform refinement map (4.29 Å, V3).
    P2 and P3 are subsets of P1, generated by heterogeneous refinement. The common regions of V1, V2, and V3 look very similar.

I made two masks in Chimera: a bigger mask (M1, covering the 150 kDa core) that is the whole protein except the tag, and a smaller mask (M2, covering the 40 kDa tag) for the tag. I imported both masks into CryoSPARC and adjusted the threshold and dilation radius (set to 5). The solid masks looked like what I wanted.
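(Roughly, my mental model of what the threshold + dilation settings do, sketched with the mrcfile and scipy packages; this is just an illustration, not CryoSPARC’s actual code, and the file names are made up:)

```python
# Binarize an imported mask at a threshold, then dilate it.
import mrcfile
import numpy as np
from scipy import ndimage

with mrcfile.open("M1_core_mask_from_chimera.mrc") as mrc:
    vol = mrc.data.astype(np.float32)

threshold = 0.012   # just below the voxel level of my imported mask
binary = vol > threshold
# Dilation radius of 5 (assuming pixel units here; check the job's units)
binary = ndimage.binary_dilation(binary, iterations=5)

with mrcfile.new("M1_binary.mrc", overwrite=True) as out:
    out.set_data(binary.astype(np.float32))
```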

I did a local refinement with V1/P1/mask M1; the resolution got dramatically worse, to around 7 Å.
[images]

Then I ran ‘particle subtraction’ with inputs P1/V1/mask M2 or P3/V3/mask M2, followed by local refinement using the subtracted particles and mask M1 (other settings default). The tag density has indeed been subtracted/masked out, but the global resolution got worse by about 0.5 Å in each case. Here are the graphs for the subtracted whole particle set.
[images]

I also ran non-uniform refinement with V1/subtracted P1 and V3/subtracted P3; the resolution got worse by around 0.3 Å. The tag density has indeed been subtracted.

It looks to me like neither masking nor signal subtraction helped in my case. But I did not try tuning the parameters for local refinement either. Would you recommend trying again with different parameter settings?

Apart from local refinement, I also tried a local CTF refinement (input: V1/P1) followed by a non-uniform refinement (input: CTF-refined volume/P1), and the resolution is quite similar, only about 0.05 Å different.

Is there any other refinement in CryoSPARC I could try to improve the resolution before sharpening the map for model building?

I ran a ResLog job for the V1/P1 job, with the top graph shown below. Does this mean collecting more data would not help (looking at the purple “corrected” line)?
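(My understanding is that the ResLog fit follows the Rosenthal–Henderson relation 1/d² = (2/B)·ln(N/N0); a back-of-envelope version of reading the plot, with placeholder numbers rather than my actual plot values, would be something like:)

```python
# Fit a B-factor from two (particle count, resolution) points, then
# extrapolate the expected resolution at a doubled particle count.
import numpy as np

def fit_b_factor(n1, d1, n2, d2):
    """Slope of 1/d^2 vs ln(N) is 2/B (Rosenthal & Henderson, 2003)."""
    slope = (1 / d2**2 - 1 / d1**2) / (np.log(n2) - np.log(n1))
    return 2.0 / slope

def extrapolate(d, n, n_new, b):
    """Predicted resolution after changing the particle count from n to n_new."""
    inv_d2 = 1 / d**2 + (2.0 / b) * np.log(n_new / n)
    return 1 / np.sqrt(inv_d2)

b = fit_b_factor(30_000, 4.5, 61_000, 4.02)   # placeholder points, not my data
print(f"B ~ {b:.0f} A^2; doubling to 122k particles -> "
      f"{extrapolate(4.02, 61_000, 122_000, b):.2f} A")
```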

Thanks again for your time!

Hi @yy314,

Local refinement can be a challenging workflow to get right, as parameters often need specific tuning. My general suggestions are the following (these apply to both the unsubtracted and subtracted cases):

  • With smaller proteins, softer masks generally help to reduce overfitting. With this protein, you may want to try re-creating the mask at a different softness level. This can be done in the Volume Tools job by passing in the current mask (M1), setting the “Threshold” parameter to 1, and setting the “Soft padding width” to around 8. This will take the input mask and add an 8-pixel soft edge, and the resulting mask may give better results when running local refinements (see the sketch after this list for what a soft edge does).
  • When you have the softer mask connected to a new local refinement job, you should also set the “Mask (dynamic, static)” parameter to “static”, which ensures that the mask is not modified at each iteration. Generally, we suggest using static masks for local refinement.
    • Alternatively, if you still want to use a “dynamic” mask, you may want to try increasing the “Dynamic mask far” parameter. I have found success keeping the near parameter at 3 Å and increasing the far parameter to 12 Å. This means that at each iteration, the mask will be re-generated with a softness of 12 − 3 = 9 Å.
  • Using non-uniform refinement within local refinement can be helpful (this can be enabled by activating the Non-Uniform Refinement parameter switch)
  • You may also want to try increasing the rotation/shift search extents; in particular, the shift search extent seems too small for both jobs based on the histogram plots you’ve shown. Perhaps doubling it to 6, or going even further, would help.
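For intuition, a soft edge is conceptually a falloff from 1 to 0 over the padding width outside the binary mask. A minimal sketch of that idea (illustrative only, not necessarily what the job computes internally):

```python
# Raised-cosine soft edge over pad_width pixels outside a binary mask.
import numpy as np
from scipy import ndimage

def add_soft_edge(binary_mask, pad_width=8):
    """1 inside the mask, 0 beyond pad_width pixels, smooth in between."""
    # Distance (in pixels) from each outside voxel to the nearest mask voxel
    dist = ndimage.distance_transform_edt(~binary_mask)
    ramp = np.clip(1.0 - dist / pad_width, 0.0, 1.0)
    return 0.5 * (1.0 - np.cos(np.pi * ramp))

# e.g.: soft = add_soft_edge(binary_volume > 0.5, pad_width=8)
```

With dynamic masking, the near and far distances play the roles of where this falloff starts and ends.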

Edit: it is also likely helpful to try changing the initial lowpass filter resolution, to something like 10-12 Å instead of 20 Å, which may be too low for this small complex.
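(For reference, the initial lowpass simply filters the input volume before the first iteration; conceptually something like the following, though the actual filter shape used may differ:)

```python
# Hard spherical lowpass of a cubic volume in Fourier space.
import numpy as np

def lowpass(vol, voxel_size_a, cutoff_a=12.0):
    """Lowpass-filter a cubic volume to cutoff_a (in Angstroms)."""
    n = vol.shape[0]
    freqs = np.fft.fftfreq(n, d=voxel_size_a)          # cycles per Angstrom
    fx, fy, fz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    keep = np.sqrt(fx**2 + fy**2 + fz**2) <= 1.0 / cutoff_a
    return np.fft.ifftn(np.fft.fftn(vol) * keep).real

# e.g. filter a 128-voxel, 1.1 A/pixel volume to 12 A before refinement
filtered = lowpass(np.random.rand(128, 128, 128), 1.1, cutoff_a=12.0)
```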

Please let me know if you have additional questions or clarifications on these suggestions.
Best,
Michael


Hi @mmclean,
Thanks for the detailed advice.

I learned from the tutorial that ‘The “Threshold” parameter should be set below the level value of the voxels within the imported mask.’ The exact threshold I used here is very low, 0.012. The mask looks as below after modification in Volume Tools. If I set the threshold to 1, nothing can be seen.


Thanks!
Yahui

Hi @mmclean,
I ran another local refinement job with the softer mask shown in my last reply. I also chose the static option for the mask, switched on non-uniform refinement, and increased the rotation/shift search extents to 6. The resolution improved by 0.2 Å over the previous local refinement job, but is still worse than the non-uniform refinement of P1/V1.