I am currently working with inhibitors (about 500 Da MW) for an enzyme. We collected data and, after processing, obtained a map at 3.9 Å resolution. The problem is that in the active site we don't see any density that corresponds to the inhibitor (which was previously incubated with the enzyme).
I suppose this is most likely because not all of the particles I selected contain the inhibitor. For the processing, I classified the particles into 200 2D classes and reconstructed a single-class 3D map.
I wanted to ask: what could I do to analyze the active site? How could I differentiate the particles that have the inhibitor from those that don't? And what could I use to recover density for the inhibitor?
If there is a correlated change in conformation or dynamics upon binding (and if your ligand is bound with an appreciable occupancy), you may be able to separate the apo and holo states by 3D classification.
If there is minimal conformational change on binding though, then this will be very challenging or impossible to solve due to the small size of the ligand.
3D classification can/will do it, but it's always challenging for something small. I had excellent results identifying bound/unbound states for a ligand of approximately 600 Da, but the map was <2 Å. At ~4 Å I'd be very cautious.
A friend has some amazing results showing binding of an ~240 Da ligand, but the resolution is <2.5 Å and the structural changes are minimal; it just binds in a really beautiful spot which can't be missed.
Also, as an aside, just because something is incubated with the relevant inhibitor, doesn’t mean it’s bound, especially if conditions have been adjusted to optimise grids (this was a massive headache with a project over the summer). Do you have biochemical evidence that it’s binding in conditions used to make grids?
If at all possible, see if you can get resolution to <3 Ang, then I’d be pretty hopeful.
@rbs_sci Thank you for your reply.
If we obtained a map at 2-3 Å resolution, how could we identify the ligand? And how were you able to separate the particles bound to the ligand from those not bound to it?
Best, David
Identifying the ligand was easy; it was either there or not there. For one dataset (my friend's) it's actually two datasets, but for mine it was just 3D classification of the complex. Nothing else to really explain.
Hi @rbs_sci - sorry to dredge up an old thread, but I was wondering what classification parameters (in particular regarding masking and per-particle scale refinement) you used to identify such a small change (ligand binding)? In my hands, using a too-small focus mask leads to artefacts no matter how soft the mask is - did you use a mask for the whole domain?
Correct. I’ll be honest, I wasn’t entirely expecting it to work. Was pleasantly surprised. I think resolution helped a lot; tried something similar with a lower resolution map and it was not successful.
Based on my experience, 3.9 Å resolution is borderline for telling the occupancy of a 500 Da compound. What does your map around the pocket/active site look like? IMO, if you have decent map quality for the residues of the binding pocket (e.g. clear contours of side chains) but no signal corresponding to the compound, then your compound is probably not there. On the other hand, if the map quality around the pocket is mediocre (e.g. showing only the peptide backbone and some secondary-structure features), IMO we probably cannot draw a conclusion about compound occupancy.
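If it helps to put rough numbers on this, here is a minimal sketch that compares the density inside the putative pocket with the rest of the map. The map file name, pocket centre and radius are placeholders you would replace with your own values, and for simplicity the map origin is assumed to be zero:

```python
# Sketch: compare map values inside the putative ligand pocket with the rest of
# the map, to judge whether there is any excess density in the pocket.
# Assumptions: MRC-format map, cubic voxels, map origin at zero; the file name,
# pocket centre and radius below are placeholders.
import numpy as np
import mrcfile

with mrcfile.open("map.mrc") as mrc:
    vol = mrc.data.astype(np.float32)      # ZYX order
    voxel = float(mrc.voxel_size.x)        # Angstroms per voxel

pocket_center_ang = np.array([112.0, 98.5, 76.0])   # placeholder (x, y, z) in Angstroms
pocket_radius_ang = 6.0                              # roughly covers a ~500 Da ligand

# Boolean sphere around the pocket centre (note ZYX indexing of the map array).
zyx = np.indices(vol.shape).astype(np.float32) * voxel
center_zyx = pocket_center_ang[::-1]
dist = np.sqrt(((zyx - center_zyx[:, None, None, None]) ** 2).sum(axis=0))
pocket = dist < pocket_radius_ang

pocket_mean = vol[pocket].mean()
protein_level = np.percentile(vol, 99.5)   # crude proxy for well-ordered protein density
map_sigma = vol[~pocket].std()

print(f"pocket mean density               : {pocket_mean:.3f}")
print(f"protein-level density (99.5th pct): {protein_level:.3f}")
print(f"pocket mean in map sigma units    : {pocket_mean / map_sigma:.2f}")
```

If the pocket mean sits near the solvent level while the surrounding side chains are at protein-level density, that supports the "compound is not there" reading above.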
Additionally, how large is your enzyme and where is the binding pocket located? In my experience, if the binding pocket is in the "peripheral regions" of the protein, non-uniform refinement very often produces low-quality maps around the pocket, even if the overall resolution is decent. In this case, you may want to do a local refinement (probably with particle subtraction) around the pocket.
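If you go down the local-refinement route, the focus mask itself is easy to prototype. A minimal sketch of a soft spherical mask around the pocket is below; the centre, radius, soft edge and file names are placeholders, and in practice you would usually cover the whole domain around the pocket rather than just the ligand (see the masking caveats later in the thread):

```python
# Sketch: build a soft spherical focus mask around the pocket region for local
# refinement / particle subtraction. Assumptions: MRC-format consensus map,
# cubic voxels, origin at zero; centre, radii and file names are placeholders.
import numpy as np
import mrcfile
from scipy.ndimage import gaussian_filter

with mrcfile.open("consensus_map.mrc") as mrc:
    shape = mrc.data.shape
    voxel = float(mrc.voxel_size.x)

center_ang = np.array([112.0, 98.5, 76.0])   # placeholder pocket centre (x, y, z)
hard_radius_ang = 15.0                        # cover the pocket plus surrounding domain
soft_edge_ang = 6.0                           # soft edge to reduce masking artefacts

zyx = np.indices(shape).astype(np.float32) * voxel
dist = np.sqrt(((zyx - center_ang[::-1][:, None, None, None]) ** 2).sum(axis=0))

mask = (dist < hard_radius_ang).astype(np.float32)
mask = gaussian_filter(mask, sigma=soft_edge_ang / voxel)  # soften the edge
mask /= mask.max()                                         # renormalise to [0, 1]

with mrcfile.new("focus_mask.mrc", overwrite=True) as out:
    out.set_data(mask.astype(np.float32))
    out.voxel_size = voxel
```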
Could I ask you for some more details on the classification parameters you used? In particular:
- since your map was <2 Å, did you use a filter resolution around 2-3 Å?
- did it help to use more than 2 classes?
- did you play around with the class similarity parameter?
How much mass did your mask include? I assume that classifying the presence of a ligand would require high filter resolutions (~3 A), but I’d imagine that the larger the mask, the more likely it is for the classification to pick up on 3 A movements elsewhere in the mask. Would you increase the number of classes to compensate? How many particles did you have to work with?
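(As an aside, for anyone wanting to put a rough number on "mass inside the mask", here is a quick sketch. It assumes a binary or soft mask in MRC format, a 0.5 threshold on the soft mask, and an average protein density of ~0.81 Da/Å³; the file name is a placeholder.)

```python
# Sketch: estimate roughly how much protein mass a mask encloses,
# using an average protein density of ~0.81 Da per cubic Angstrom
# (about 1.35 g/cm^3). The 0.5 threshold and file name are assumptions.
import numpy as np
import mrcfile

with mrcfile.open("focus_mask.mrc") as mrc:
    mask = mrc.data
    voxel = float(mrc.voxel_size.x)

n_voxels = int((mask > 0.5).sum())
volume_A3 = n_voxels * voxel ** 3
mass_kda = volume_A3 * 0.81 / 1000.0
print(f"mask volume ≈ {volume_A3:,.0f} Å³, enclosed mass ≈ {mass_kda:,.0f} kDa")
```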
I’m also seeing the same thing as Oli where smaller masks around the ligand for 3D classification inevitably lead to artifacts, presumably because there isn’t enough signal inside the mask to compare against. I’ve also found that I get the best results if the solvent mask is the same as the focus mask around the ligand - if I use a mask around the entire complex as the solvent mask, I end up getting equally-sized classes, regardless of whether I turn on force hard classification (this thread seems relevant).
In my case, I’m dealing with a challenging dataset where a small-molecule ligand intercalates between two DNA bases and shifts the register of the downstream DNA by almost exactly 1 base pair. At low resolutions, it’s hard to tell whether the ligand is bound at all, since it occupies the same position as a nucleotide in the apo protein.
I currently have a ~2.7-2.8 Å map, and it appears to be an average between the ligand-bound and unbound states (i.e. an average between the nucleotide and the ligand), but I'm having trouble classifying these states into two distinct classes. I'd appreciate any advice you or anyone else might have on classifying small ligands.
Have you tried a highpass filter? I find in such cases (subtle changes) a highpass filter of 15-20Å can help, sometimes dramatically.
I would mask the entire region that changes, even subtly - so in your case, I would have a mask that includes the DNA that changes register (and perhaps the contacting domain).
For high filter resolutions, I would also significantly increase the batch size (up to ~10,000 particles per class).
Thanks for the suggestions, Oli! The point about the highpass filter is interesting. Is the reasoning that excluding low-resolution information will help bias the classification towards high-resolution features? I’ll also try increasing the batch size.
Yes, more or less - excluding low-res info which may not be relevant to the conformational change you are looking for. It doesn't always work, but it can sometimes make a dramatic difference.
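If you want to preview what survives a ~15-20 Å high-pass before committing to a long classification run, a rough sketch is below. The cutoff and file names are placeholders, and in the actual classification job the high-pass is just a parameter rather than something you apply to the map by hand:

```python
# Sketch: apply a hard high-pass filter at ~20 Å to a map, purely to visualise
# which features remain once low-resolution contrast is excluded.
# Assumptions: MRC-format map, cubic voxels; cutoff and file names are placeholders.
import numpy as np
import mrcfile

cutoff_ang = 20.0

with mrcfile.open("consensus_map.mrc") as mrc:
    vol = mrc.data.astype(np.float32)
    voxel = float(mrc.voxel_size.x)

ft = np.fft.fftn(vol)
freqs = np.meshgrid(*[np.fft.fftfreq(n, d=voxel) for n in vol.shape], indexing="ij")
radius = np.sqrt(sum(f ** 2 for f in freqs))      # spatial frequency in 1/Å

ft[radius < 1.0 / cutoff_ang] = 0.0               # zero out everything lower resolution than the cutoff
highpassed = np.real(np.fft.ifftn(ft))

with mrcfile.new("map_highpass_20A.mrc", overwrite=True) as out:
    out.set_data(highpassed.astype(np.float32))
    out.voxel_size = voxel
```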
The "all particles in" (about 700,000) focussed refinement gave a 1.9 Å map. The focussed mask covered two monomers and was fairly tight to the protein; the ligand sat between the monomers and was nicely inside the mask. I used 8 classes - not sure if more or fewer would have had an impact. I bumped particles per class to 10,000 and increased the number of epochs. Class similarity was 0.001 (not sure going lower than 0.01 has any effect). Filter resolution was 2.5 Å, as that allows features of the ligand to be visible. As everyone else also states, too tight a mask gives artefacts. Forgive me if I'm not being super specific.
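To collect those numbers in one place (the key names below are just descriptive labels I've chosen, not the software's internal parameter names):

```python
# Summary of the classification settings described above, as a plain dict.
# Values are as reported in the post; key names are descriptive placeholders.
classification_settings = {
    "input_particles": 700_000,           # "all particles in" focussed refinement, 1.9 Å map
    "number_of_classes": 8,
    "particles_per_class_per_batch": 10_000,
    "class_similarity": 0.001,
    "filter_resolution_A": 2.5,           # fine enough for ligand features to be visible
    "mask": "two monomers, fairly tight to protein, ligand inside the mask",
}
```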
Nucleotides are a different beast entirely. I was tinkering with a different dataset hoping I could do the same sort of thing and couldn’t resolve different positions for them at all.
I've noticed that hard classification sometimes collapses into fewer classes with still-obvious heterogeneity within the discrete classes - last week a 24-class 3D classification collapsed into 6 classes (+1 low occupancy) with hard classification enabled and 10,000 particles per class per batch. That might be peculiar to that sample, as it's high symmetry (symmetry expanded) and I filtered it quite hard (8 Å) since I was looking for something fairly obvious…?
An asymmetric sample currently being studied by a postdoc does really well with a mix of heterogeneous refinement and 3D classification to sort out different states, and hard classification vs. class sim ~0 appears to make no difference for that.
I tend to default to high particle-per-class (to prevent/slow class collapse) and low class similarity, then if that doesn’t work well try hard classification.
But most of my tests are likely now out of date since the 4.6 update where the team report that seeds are more reproducible. Haven’t tested that extensively yet.