The 3dflex paper and documentation only say that the domain boundaries in a custom 3dflex mesh provide additional “inductive bias.”
Can you provide more information on how the custom mesh is used? Is there a restraint that flow on the mesh elements is similar within each domain?
In essence, yes, you’re right: cells within the same mesh domain flow in similar ways. Custom meshes are most useful when two domains are expected to either move away from or slide past each other.
For example, if we consider this dataset (EMPIAR 10261) in which we have two Fabs bound to the channel ECD:
it would be reasonable to expect those Fabs to flex forward/backward/left/right independently. However, the default mesh puts them both in the same grid of tetrahedral cells.
Because we model flexible movement by deforming these cells, moving the Fabs away from each other would require expanding the central plane of tetras to a degree which, in other regions, would represent non-physical deformation of protein density and is thus not allowed.
Defining a custom mesh allows for each Fab to be placed in its own mesh:
I’m coming back to this 1-year-old post because of a problem with a custom mesh. I’ve got a ~3 angstrom map that is not bad, and now I’m trying to better resolve a small part of it: a moving tip of ~30 kDa which appears quite blurred, but it does appear, so there is hope. The movement seems continuous back and forth, and a bit sideways as well.
I’m used to running 3DFlex with the automatic mesh, and it does help a bit for this. Now I’m trying to improve even further by employing a custom mesh because some parts of the map move away from each other, not as much as the fabs in the example above, but they do move.
But the plots from the training job are too weird when I apply the custom mesh:
This only happens with the custom mesh. The job with the automatic mesh converges just fine.
My reconstruction’s FSC goes to 3.28 angstroms, and after downsampling for 3DFlex the max resolution is 3.4 angstroms. The particle set is ~600 k particles and clean - I’d prefer to work with this big set to be able to explore more movements, if possible.
I’ve tried the custom mesh a few times already, changing the contour level and toggling the rigidity of a big block of the map that does not seem to move anyway. I know I’m doing something wrong, I just can’t figure out what.
My current custom mesh has 8 fragments; would it help to have fewer fragments? This is what I’m about to try right now.
Looking forward to your suggestions.
Let’s see if we can get a custom mesh working for you!
Right off the bat, I notice that your latent coordinates are clustered very near zero. We zoom that plot out to +/- 1.5 because ideally your particles will be distributed across that range. You should try turning down the Latent centering strength parameter quite a bit, perhaps to 2.0 to start with, and then maybe even lower, until your particles are more evenly spread in these plots. See below for an example:
Now for the problems with the custom mesh. As I mention on the guide page above, the most important things when working with a custom mesh are:
The topology of the mesh; that is, where the “cuts” are made through the overall mesh
The connectivity of these pieces
With that in mind, do you think you could share
One or more images of the segmentation used to create your mesh (so we can get a sense of the topology), and
The connectivity between those segments (e.g., “The green segment is the root of the mesh. The yellow is fused to green”, etc.)
That will help us figure out if anything is wrong with how your mesh is being constructed!
Thanks a lot for your prompt reply. Yes, I noticed my training job with the custom mesh was not producing latents as wide as the previous job with the automatic mesh. Yesterday I tried two things to reduce the parameterization:
the same mesh with 8 subgroups, but 12 tetracells instead of 24 (the protein is ~20 nm in its longest dimension) - this still gave the same kind of plots.
a new mesh with only 3 subgroups and 12 tetracells - this worked like a charm: the movement is pretty similar to the automatic mesh, but the definition of the reconstruction is much better in the regions I was trying to improve. I am very happy about this. Perhaps the previous jobs were giving 3DFlex too many parameters?
I haven’t played with the latent centering strength yet; it’s now in my plans.
I am keen to share my data with you if you think it might help, just not here on the forum, as this is hot stuff and we are treating it as confidential. Or maybe I can post here once it’s published, probably/hopefully in a few months?
I have another request, though: now that you probably have a lot of feedback from proteins of different sizes, could you please give us some guidance on:
the optimal resolution for 3DFlex (either assuming this is not limited by the data, or relating to the resolution of the data);
the optimal number or size of the tetracells (I found an old post by @olibclarke asking for the size to be set in angstroms; I agree with him, as it would be easier to relate to the structures)
Obviously this will vary according to what we want to see, but maybe you already have some preferred settings?
I would definitely try adjusting the Latent centering strength whenever you see the particles occupying a small fraction of the space like that!
As far as optimal resolution goes, I don’t know that there’s a particular resolution we’d say is best. It is very important that the dataset is clean (that is, that all images are images of the same thing). Generally this correlates with higher resolution, but not always. The scale of the motion you’re looking at makes a big difference too.
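One thing worth keeping in mind on the resolution question: the maximum resolution reported after downsampling is simply the Nyquist limit of the training box. A quick sketch of that arithmetic (the pixel and box sizes here are hypothetical, not taken from any dataset in this thread):

```python
# The best resolution 3DFlex training can recover is the Nyquist limit of the
# downsampled training box. All numbers below are hypothetical examples.
orig_box = 400     # original box size (pixels, assumed)
orig_apix = 0.85   # original pixel size (A/pix, assumed)
train_box = 200    # downsampled training box size (pixels)

train_apix = orig_apix * orig_box / train_box  # effective A/pix after downsampling
nyquist = 2 * train_apix                       # best achievable resolution (A)

print(train_apix, nyquist)  # 1.7 3.4
```

So if your training-box Nyquist sits right at your target resolution, downsampling less (a bigger training box) is the only way to push further, at the cost of memory and training time.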
As far as the number of tetracells, I generally recommend setting them such that each tetra is about 5–6 Å across, but the ideal size for a tetrahedron will depend on the range of motion and the sample.
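To make that rule of thumb concrete, here is a back-of-the-envelope calculation (the box size and pixel size are hypothetical, not from this thread):

```python
# Rule-of-thumb check: how many tetra cells across the box give roughly
# 5-6 A per tetra? Box and pixel sizes below are hypothetical examples.
box_pix = 256        # training box size (pixels, assumed)
apix = 1.2           # pixel size (A/pix, assumed)
target_tetra = 5.5   # target tetra size (A)

box_A = box_pix * apix                 # box edge length in angstroms
n_tetra = round(box_A / target_tetra)  # tetra cells across the box

print(box_A, n_tetra)  # 307.2 56
```

For a large box this can suggest far more tetra cells than the default of 20, which is where the training-time and memory costs discussed below come in.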
To give you some feedback: thanks a lot for these tips. I rewatched the tutorial videos yesterday, paying more attention to the recommendations on parameters. I noticed your recommendation for the tetracell size is different from what the tutorial recommends, which is to keep it at 20 x 20 (maybe the tutorial needs an update?). I’ve been trying your recommendation; in my case it means 50 tetracells instead of 20, but my server couldn’t cope with that. It went OK with 40, just with a very long training time (the first run took 15 h; I’ve got a new one running and I’ll check tomorrow).
So far, for this dataset, there is something curious about the 3DFlex Reconstruction job, though:
Both the flex and rigid maps from Reconstruction look very similar to each other, and much worse than the individual frames from the Generate job - to me this is completely unexpected. It doesn’t seem to relate to the number of tetracells, but I haven’t run all the jobs I want yet.
The two half maps from the Reconstruction job are much more detailed than the final map. Could it mean that I’m overfitting? What can I change to avoid it, if that’s the case?
When I tried switching on “Stiffen low density regions” in the mesh prep job, the rigidity weights plot went 100% red, so I re-ran with that option off. I wonder if this is a question of the contour level of the original map? I’m running with min rigidity 0.2.
As for the number of tetracells, it is somewhat empirical. In many cases, keeping the default 20-tetra mesh base is sufficient for good results. It depends a lot on the sample, how much the box is cropped, the types of motion, etc. As you noticed, increasing the number of tetra cells significantly increases the computational load.
For your other questions:
Questions 1 and 2
3D Flex Generate does not use half-sets, so the generated volumes may contain increased densities in some parts of the map due to spurious correlations. Similarly, you may see more of the particle in the half-maps, but that is likely because these half maps are not filtered at all.
If you do see some improvement in the 3D Flex map, you can try a local FSC job with a mask around the flexible region to see how much that part in particular has improved.
Another point here: the rigid map is a reconstruction of the particles using the same algorithm as the flexible map, but without applying the flex. If the two look very similar, that means the map probably hasn’t improved much as a result of flexible refinement.
Question 3
By default, the mesh segments are more rigid if there is more signal in them. This accounts for the fact that tetras which are mostly empty buffer should be easier to deform than tetras in the middle of the protein density.
If you see issues with the default settings where the edges of the target expand in unrealistic ways, you can turn the “Stiffen low density regions” setting on. In essence, this uses a lowpass-filtered map and makes less-dense regions stiffer rather than more flexible. In general, I leave this setting at the default (off).
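To make the two weighting schemes concrete, here is a toy sketch. This is not CryoSPARC’s actual implementation; the function, its names, and its normalization are made up purely to illustrate the idea of density-dependent rigidity with a minimum-rigidity floor:

```python
def rigidity_weights(tetra_density, min_rigidity=0.2, stiffen_low_density=False):
    """Toy per-tetra rigidity weights (illustrative only, not CryoSPARC's algorithm).

    By default, denser tetras get higher rigidity, so mostly-empty buffer
    deforms easily. With stiffen_low_density=True the relationship is
    inverted, so sparse regions become stiffer instead.
    """
    lo, hi = min(tetra_density), max(tetra_density)
    span = (hi - lo) or 1.0
    weights = []
    for d in tetra_density:
        x = (d - lo) / span           # normalize density to [0, 1]
        if stiffen_low_density:
            x = 1.0 - x               # invert: sparse -> stiff
        weights.append(max(min_rigidity, min(1.0, x)))  # floor at min rigidity
    return weights

# Dense tetra, intermediate tetra, empty buffer tetra:
print(rigidity_weights([1.0, 0.1, 0.0]))                            # [1.0, 0.2, 0.2]
print(rigidity_weights([1.0, 0.1, 0.0], stiffen_low_density=True))  # [0.2, 0.9, 1.0]
```

In this toy picture, a min rigidity of 0.2 is the floor that every tetra gets regardless of density, which is why an all-red rigidity plot with the stiffen option on would suggest the (inverted) weighting is saturating, possibly tied to the contour level of the input map.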