3D-VA runs out of GPU memory when I use large particles (512 px), so I want to downsample my particles and rerun refinement prior to 3D-VA.
I downsampled my particles (from a refinement job) from 512 to 256 using the Downsample job type, then reran refinement, which gave the weird FSC attached, with a resolution of 10 Å. The resolution prior to downsampling was 2.8 Å. Starting the refinement from an ab initio model generated from the downsampled particles gave the same result.
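Just to show the arithmetic I'd expect, a minimal sketch (the 1.0 Å/px value is a made-up example, not my actual pixel size):

```python
# Binning from orig_box to new_box scales the pixel size up by the same
# factor; the best attainable resolution is then twice the new pixel size.
def nyquist_after_downsample(orig_pixel_A, orig_box, new_box):
    new_pixel_A = orig_pixel_A * orig_box / new_box
    return 2.0 * new_pixel_A

# With a hypothetical 1.0 A/px at box 512, binning to 256:
print(nyquist_after_downsample(1.0, 512, 256))  # -> 4.0 A Nyquist
```

So for any typical pixel size, even the new Nyquist should be far better than 10 Å.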
Any ideas as to what is going on here and how to fix it much appreciated!
Also, the 2D classes of the downsampled particles look blurry, as if they have been low-pass filtered. No low-pass filter was specified during downsampling, but is it possible that one is being applied anyway?
Hi,
Just wanted to add that I have observed the same thing after downsampling (v2.12.4). I did similar things in the past with previous cryoSPARC versions and never had any issues.
Itai
Hi @olibclarke @Itai, thanks for reporting this - we have heard of it sporadically before but haven’t seen or reproduced it ourselves… can you think of anything special about the particles that are being downsampled? @olibclarke the box sizes you used seem totally normal… sometimes odd sizes, or sizes whose prime factors are not all small (2, 3, 5, and 7), can do strange things in FFT.
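For anyone who wants to check their own box sizes, a quick illustrative helper (just a sketch of the factorization rule above, not anything cryoSPARC runs internally):

```python
# A box size is "FFT-friendly" if it factors entirely into small primes.
def is_fft_friendly(n, primes=(2, 3, 5, 7)):
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1  # any remainder means a larger prime factor survived

for box in (512, 420, 440):
    print(box, is_fft_friendly(box))
# 512 -> True (2^9), 420 -> True (2^2 * 3 * 5 * 7), 440 -> False (2^3 * 5 * 11)
```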
If either of you can share any data where this happens, that would be the fastest way for us to resolve it! Let us know and we will send upload credentials via email.
Hi @apunjani, happy to share data - there was nothing special about the particles, and the same particles re-extracted with the same level of Fourier cropping give no issues; it is only the particles from the Downsample job that cause problems. I haven’t tested with any other particles yet to see whether it is a general issue.
I have checked with another dataset with a different box size (256 downsampled to 128), and in that case everything works as expected. Perhaps it is something about that particular pair of box sizes (512/256)? I can provide the original and downsampled particle sets and volumes, if you like.
Actually scratch that, I now have a case where this happens with particles not from Live, with the same characteristic distortion of the FSC (in this case downsampling from 400 to 330 - and note that 330 = 2 × 3 × 5 × 11, so it does have a prime factor of 11):
I observed a similar problem when downsampling my particle sets.
In retrospect, I realized that the particles came from two import jobs and had been merged and processed together all the way through.
In my case, the problem was resolved when I downsampled the particles from each import individually and used those for 3DVA (this was CS v2.9).
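In case it helps anyone reproduce the split: cryoSPARC's .cs metadata files are numpy structured arrays, so something along these lines should work (a rough sketch - the blob/path field name is what I see in v2 outputs, and I'm assuming the stack paths still distinguish the two imports; adjust for your own files):

```python
import os
import numpy as np

# .cs metadata files are numpy structured arrays saved in .npy format.
particles = np.load("merged_particles.cs")

# Group rows by the directory of their particle stack, which (in this
# sketch) is assumed to distinguish the original import jobs.
dirs = np.array([os.path.dirname(p.decode() if isinstance(p, bytes) else p)
                 for p in particles["blob/path"]])

for i, src in enumerate(np.unique(dirs)):
    subset = particles[dirs == src]
    # Pass a file object so numpy doesn't append a .npy extension.
    with open(f"particles_source{i}.cs", "wb") as f:
        np.save(f, subset)
    print(src, len(subset), "particles")
```

Each split .cs file can then be downsampled separately.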
HTH,
Niko
PS: Maybe this also explains the issue with the live-imported data? (Just a guess)
PPS: I could imagine that 2.11 can create similar issues with the exposure groups, maybe someone™ can check?
Just a weird question - if you run CTF refinement, does it go away? Depending on the downsampling it may not be feasible. I wonder if there are some compounding pixel size errors.
As an aside, on one dataset we always got severe striations across the maps (they look kind of like the line artifact in 3DVA), but only at certain pixel sizes. Eventually we realized they were always spaced at multiples of the graphene oxide diffraction spot spacing (and stopped using those pixel sizes).
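If anyone wants to check their own pixel sizes, here is roughly the arithmetic (a sketch only, taking ~2.13 Å as the strong graphene/graphene-oxide lattice reflection):

```python
# If the GO lattice reflection sits past Nyquist for a given pixel size,
# it folds back (aliases) to a lower frequency in the image.
GO_SPACING_A = 2.13  # assumed strong graphene/GO lattice spacing, in Angstroms

def aliased_spacing(pixel_A, d_A=GO_SPACING_A):
    fs = 1.0 / pixel_A    # sampling frequency (1/A)
    f = (1.0 / d_A) % fs  # wrap the true frequency into [0, fs)
    f = min(f, fs - f)    # fold into [0, Nyquist]
    return 1.0 / f if f else float("inf")

for px in (0.8, 1.2, 1.5):
    print(f"{px} A/px -> GO ring lands at ~{aliased_spacing(px):.2f} A")
# 0.8 A/px -> 2.13 A (below Nyquist, no aliasing)
# 1.2 A/px -> ~2.75 A (aliased)
# 1.5 A/px -> ~5.07 A (aliased)
```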
I’ve run CTF refinement before downsampling, but not after - when this issue occurs I doubt it would work afterwards, as the map is very poor. Also, the fact that re-extraction with Fourier binning works fine in cases where downsampling gives garbage makes me think that it is not really to do with properties of the data as such.
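(For clarity, by Fourier binning I mean cropping the centred FFT, along these lines - a minimal sketch for even, square boxes only, ignoring the normalization details a real implementation handles:

```python
import numpy as np

def fourier_crop(img, new_size):
    """Downsample an even, square image by cropping its centred FFT."""
    n = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))  # move DC component to the centre
    c, h = n // 2, new_size // 2
    Fc = F[c - h:c + h, c - h:c + h]       # keep only the low frequencies
    out = np.fft.ifft2(np.fft.ifftshift(Fc)).real
    return out * (new_size / n) ** 2       # preserve the mean intensity
```

Unlike an explicit low-pass filter, this leaves everything up to the new Nyquist untouched.)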
Currently uploading one tarball of the job with the downsampled particles to cryoem3 (downsample_data.tar.gz). It should be ~28 GB. After that I will upload the tarball with the original particles, but that one is ~0.5 TB so it will take a bit longer.