GSFSC is not reliable


After extensive 2D classification, I selected ~3.9M helical particles with a box size of 320 px (pixel size 1.376 Å) and performed several rounds of helical refinement (point-group symmetry C8). Most of the 3D reconstructions yielded a GSFSC of ~3.4 Å. However, the maps look more like 6 Å (photo attached for your reference; I’m showing only one protofilament of the whole 8-fold structure). So I exported those particles to RELION, performed 3D classification, and re-imported 1.5M particles into CryoSPARC. Now helical refinements yield a GSFSC of ~5 Å and the map got much worse.
Similarly, the reconstruction gets worse when refining ~1.2M particles after 3D classification in CryoSPARC.

Currently, I’m using the whole 3.9M-particle stack, as the map quality of those reconstructions is promising. However, I am having a hard time improving the map quality. Do you have any tips or strategies I could try?


Hey Lorenzo,
With helical reconstructions, too-close picking of individual boxes along filaments can artificially inflate the FSC. I could imagine that something similar happened with your ~440 Å boxes. What’s your picking distance? Furthermore, I’ve seen bad FSCs with high symmetry combined with NU refinement and too-low-resolution starting models. Can you provide an image of the FSC?
Cheers from Jülich,


Hello @mannda ,

Thank you for answering. I have a filament diameter of about 100 Å and a separation distance between segments of 0.25 of the diameter, hence a 25 Å inter-box distance with a box of 440 Å (mask 360 Å). Do you think it would be worth re-picking with a different distance, such as 44 Å or even 88 Å?
The rise of the helical structure is about 110 Å with no apparent twist, so I’m just using point-group symmetry C8 with no helical symmetry enforced.
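For reference, the spacing arithmetic above can be sketched in a few lines (a toy check of my own, using the 1.376 Å/px pixel size from the first post):

```python
# Sketch: relate filament-picking spacing, box size, and overlap.
# All values are taken from this thread; nothing here is CryoSPARC code.
pixel_size_A = 1.376        # A per pixel (from the first post)
diameter_A = 100.0          # filament diameter
separation_frac = 0.25      # inter-box distance as a fraction of the diameter

interbox_A = separation_frac * diameter_A    # 25 A between segment centres
interbox_px = interbox_A / pixel_size_A      # ~18.2 px

box_px = 320
box_A = box_px * pixel_size_A                # ~440 A box

# With a 440 A box every 25 A, each segment overlaps ~17 neighbours
# along the filament, i.e. the picks share nearly all of their signal.
print(f"inter-box distance: {interbox_A:.0f} A ({interbox_px:.1f} px)")
print(f"box size: {box_A:.0f} A, overlap factor ~{box_A / interbox_A:.0f}")
```

This overlap factor is why such picks behave like near-duplicates if they end up on opposite sides of a gold-standard split.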

Please, find GSFSC attached.


Hi @lorago92,

Thanks for the post. The GSFSC plot you showed does look like it is showing signs of effective duplicate particles, which would explain the inflated resolution estimates (3.4 Å reported when the map may be closer to 6 Å). A few questions to help us determine why:

  • How did you perform particle picking?
  • Could you post a screenshot of the “inputs and parameters” tab of the helical refinement, with all of the inputs un-collapsed?
  • Could you also let me know the few lines printed to the event log, directly under the ====== Gold Standard Split ====== heading? It may say something like “Splitting input particles by filament uid”, for example
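As a toy illustration of why duplicates inflate resolution estimates (my own sketch, using a plain Pearson correlation in place of a shell-wise FSC):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(10_000)

# Proper gold-standard halves: same underlying signal, independent noise.
half_a = signal + rng.standard_normal(10_000)
half_b = signal + rng.standard_normal(10_000)

# Effective duplicates leaking across the split: the same noisy
# observations end up in both halves, so the noise itself correlates.
shared_noise = rng.standard_normal(10_000)
dup_a = signal + shared_noise
dup_b = signal + shared_noise

corr_indep = np.corrcoef(half_a, half_b)[0, 1]  # ~0.5: noise cancels
corr_dup = np.corrcoef(dup_a, dup_b)[0, 1]      # ~1.0: noise "agrees"
```

With correlated noise the half-sets agree far better than the true signal warrants, which is exactly what an inflated FSC reports.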



Hello @mmclean,

Thank you for answering. I performed particle picking with Filament Tracer using 2D classes as templates (parameters attached). After extraction and extensive 2D classification, I ran helical refinement with the following parameters (second attachment). About the Gold Standard split:
====== Gold Standard Split ======
Force re-split for gold-standard split is enabled, so segments will be randomly split
into two halves.
Warning: particles will be randomly split into half-sets.
This may break half-map independence assumption if segments were extracted with
Split A has 2470942 particles
Split B has 2473491 particles

@lorago92 This line indicates the problem: it means that nearby adjacent picks are randomized into different gold-standard splits, which breaks the independence assumption. However, this shouldn’t happen if you used particles picked by the Filament Tracer, since the information required to split particles properly should be present in the filament results.
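For intuition, the filament-aware split amounts to assigning whole filaments, not individual segments, to the two halves. This is only an illustrative sketch of that idea (the `filament_uid` field name and the code are mine, not CryoSPARC’s):

```python
import random
from collections import defaultdict

def split_by_filament(particles, seed=0):
    """Assign whole filaments to half-sets so that overlapping segments
    from one filament never straddle the gold-standard split.
    `particles` is a list of dicts with a 'filament_uid' key
    (a hypothetical schema for illustration)."""
    by_fil = defaultdict(list)
    for p in particles:
        by_fil[p["filament_uid"]].append(p)
    fil_ids = sorted(by_fil)
    random.Random(seed).shuffle(fil_ids)   # randomize filament order
    half_a, half_b = [], []
    for i, fid in enumerate(fil_ids):
        (half_a if i % 2 == 0 else half_b).extend(by_fil[fid])
    return half_a, half_b
```

Randomizing per segment instead of per filament is what the warning in your log describes.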

At any point did you export and re-import particles? Or, was the “GS split without constraints” parameter of helical refinement activated?


Hello @mmclean,

The “GS split without constraints” option was activated. However, I performed another helical refinement job without it activated, and the resolution is still not reflective of the map quality (FSC attached). In this job I have:
====== Gold Standard Split ======
Force re-split for gold-standard split is enabled, so segments will be randomly split into two halves.
Splitting input particles by filament uid.
Split A has 2472217 particles
Split B has 2472216 particles



I re-picked my helical particles (Filament Tracer) with a bigger inter-box distance (44 Å), extracted with a box size of 512 px (~360 Å), downsampled to 320 px, and finally performed 2D classification. However, when 2D classification starts I get the following warning: “Cannot remove duplicate particles without particle locations and pick stats connected.”
But my 2D classification input comes from the downsampled particles, and I do have locations and pick stats connected. What’s wrong here?
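For what it’s worth, the duplicate removal that warning refers to is essentially a minimum-distance filter on pick coordinates. A toy sketch of that idea (my own code, not CryoSPARC’s implementation):

```python
import numpy as np

def remove_duplicate_picks(coords, min_dist):
    """Greedy duplicate removal: keep a pick only if it lies at least
    `min_dist` (same units as coords, e.g. Angstroms) from every pick
    already kept. coords is an (N, 2) array of micrograph locations."""
    kept = []
    for xy in coords:
        if all(np.hypot(*(xy - k)) >= min_dist for k in kept):
            kept.append(xy)
    return np.array(kept)
```

Without the particle locations connected, there is nothing for such a filter to measure distances between, hence the warning.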

(Screenshot of the warning attached.)

Hi @lorago92 ,

Thanks for re-running this – the FSCs look more accurate now, in the sense that the unmasked (and spherically and loosely masked) FSCs are reasonable for the resolution of the current model. The issue now is that the tight mask is too tight and inflates the resolution estimate. You may find this previous forum post useful.

We’re aware that the FSC masking parameters (which control the “tightness” of the masks) could be better tuned for lower-resolution datasets in the sub-6 Å range, and we are considering the best way to remedy this issue. Unfortunately, as of now these parameters are not accessible in most refinement jobs. The best workaround is as follows:

  • Launch a Volume Tools job to manually create a mask around your structure via thresholding and padding (recommended that the soft padding width follows our rule of thumb)
  • (If enforcing helical symmetry): Take your particles and volume from the Helical Refinement and optionally run symmetry expansion if the correct helical symmetry parameters are known
  • Take the expanded particles, generated mask from Volume Tools, and density map from Helical Refinement, and connect these to a Local Refinement job for further processing

Local refinement uses the input mask for all FSC estimation, which works around the issue of overly tight FSC mask parameters.
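Purely as an illustration of the thresholding + padding + soft-edge idea behind the Volume Tools step (the function and parameter names here are mine, not CryoSPARC’s):

```python
import numpy as np
from scipy.ndimage import binary_dilation, distance_transform_edt

def make_soft_mask(volume, threshold, dilate_px, soft_px):
    """Sketch of mask creation: binarize the map at `threshold`, pad the
    binary region by `dilate_px` voxels, then fall off smoothly to zero
    over `soft_px` voxels with a cosine edge."""
    core = volume > threshold
    core = binary_dilation(core, iterations=dilate_px)
    # Euclidean distance (in voxels) from the dilated core, used to
    # shape the soft falloff outside it.
    dist = distance_transform_edt(~core)
    return np.where(
        dist >= soft_px, 0.0,
        0.5 * (1.0 + np.cos(np.pi * dist / soft_px)),
    )
```

The soft edge is what keeps the mask from sharpening half-map differences at its boundary, which is the failure mode of an overly tight FSC mask.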

I hope this helps,
