I got good 2D classification results, but Ab-Initio Reconstruction failed with the following message.
[CPU: 2.77 GB Avail: 214.60 GB]
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/abinit/run.py", line 285, in cryosparc_master.cryosparc_compute.jobs.abinit.run.run_homo_abinit
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/noise_model.py", line 119, in get_noise_estimate
    assert n.all(n.isfinite(ret))
AssertionError
For this dataset, I can get two good reconstructions (≈ 50% of particles) when I set 4 classes in Ab-Initio Reconstruction. But when I set the number of ab-initio classes to 6 or 8, the job fails with the same message.
I wonder if this is caused by too many bad particles remaining after the Select 2D job?
Does anyone know how to fix this problem? Any help will be appreciated!
We have not been able to reproduce this internally using between 3 and 10 classes. It's perhaps due to a single class not having any particles assigned to it.
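To illustrate that hypothesis: a minimal numpy sketch (not CryoSPARC's actual code; the function name and formula here are assumptions for illustration) of how a per-class noise estimate can become non-finite when a class receives zero particles, which is exactly the situation the `assert n.all(n.isfinite(ret))` in `noise_model.py` would trip on:

```python
import numpy as np

def noise_estimate(residual_power_sums, particle_counts):
    # Hypothetical per-class noise variance: summed residual power
    # divided by the number of particles assigned to each class.
    sums = np.asarray(residual_power_sums, dtype=float)
    counts = np.asarray(particle_counts, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        est = sums / counts  # a class with 0 particles yields 0/0 = nan
    return est

# Three classes, but the last one got no particles assigned:
est = noise_estimate([10.0, 8.0, 0.0], [5, 4, 0])
print(np.all(np.isfinite(est)))  # False - an assert like CryoSPARC's would fail
```

So if 6 or 8 classes spreads the particles thin enough that one class ends up empty (or effectively empty), the noise estimate for that class has no data behind it and the job aborts.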
Sorry for the late reply. I have discovered that the particles have a serious orientation problem, and this may be the reason why ab-initio fails.
To work around this issue, I set one class in ab-initio and it worked. After that I ran NU-Refine directly and eventually ran the Orientation Diagnostics job.
So I have another question: what job should I run after Orientation Diagnostics? (I don't know how to relate the 3DFSC volume to particle picking in order to find the missing views.)
@hai – that looks like a very severe orientation bias, maybe with only one major viewing direction (orthogonal to the yellow ring in the spherical plot). In general, it’s non-trivial to relate cFSC values to viewing directions. Each cFSC value is computed via a cone in Fourier space, while each particle represents a slice – see the Orientation Diagnostics guide for a diagram.
To address this, in CryoSPARC v4.5, we’ve updated Orientation Diagnostics to include a ‘relative signal’ heuristic which will directly relate poor directional FSCs to viewing directions with poor signal (either due to undersampling or due to bad/junk particles).
Internally, we’ve found that once you find the missing views, using a template picker job combined with a micrograph denoiser, and/or orientation rebalancing can be effective at improving orientation bias. This does, however, assume that you can get a reasonable structure via ab-initio and refinement with the biased particle set. Otherwise, just using a denoiser may help find more of the rarer views to get things going.
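One complementary way to see the bias directly, independent of the cFSC plot, is to histogram the particle viewing directions from a refinement's pose angles. The sketch below uses synthetic angles; the Euler convention and the idea of binning by tilt band are illustrative assumptions, not CryoSPARC's exported field layout:

```python
import numpy as np

def viewing_directions(rot, tilt):
    """Convert (rot, tilt) angles in radians to unit viewing vectors."""
    return np.stack([
        np.sin(tilt) * np.cos(rot),
        np.sin(tilt) * np.sin(rot),
        np.cos(tilt),
    ], axis=-1)

# Toy example of a severely biased set: most particles near one pole (tilt ~ 0)
rng = np.random.default_rng(0)
tilt = np.abs(rng.normal(0.1, 0.05, 1000))   # clustered near tilt = 0
rot = rng.uniform(0, 2 * np.pi, 1000)
dirs = viewing_directions(rot, tilt)

# Bin by tilt band: empty "side view" bands reveal the missing orientations
counts, _ = np.histogram(np.degrees(tilt), bins=[0, 30, 60, 90])
print(counts)  # nearly all particles land in the first (0-30 degree) band
```

If the picture from real poses looks like this toy case, that is the signature of the single dominant view described above, and it tells you which projections to look for when re-picking.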
Hi, thank you for your advice, and sorry for the delayed reply!
I am just starting to learn this software, so I am still inexperienced.
You mentioned here: 'once you find the missing views, using a template picker job combined with a micrograph denoiser, and/or orientation rebalancing can be effective at improving orientation bias'.
Do you mean I should create a new Template Picker job with 'pick on denoised micrographs' = true? (In fact I don't understand which job should be run after Orientation Diagnostics.)
I have noticed that all the particles in the micrographs have been well picked by the blob picker, but I have found that one class contains by far the highest number of particles.
> Do you mean I should create a new Template Picker job with 'pick on denoised micrographs' = true? (In fact I don't understand which job should be run after Orientation Diagnostics.)

Yep, that's what I mean!
> I have noticed that all the particles in the micrographs have been well picked by the blob picker, but I have found that one class contains by far the highest number of particles.

What do the classes look like? Are they largely the same structure with broken pieces, or junk volumes, etc.?
Like this, accompanied by an error message such as the following:
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/abinit/run.py", line 285, in cryosparc_master.cryosparc_compute.jobs.abinit.run.run_homo_abinit
  File "/spshared/apps/cryosparc4/cryosparc_worker/cryosparc_compute/noise_model.py", line 119, in get_noise_estimate
    assert n.all(n.isfinite(ret))
AssertionError
I am still confused: blob picking seems to pick up all the particles in the micrographs. In this case, is template picking necessary at all?
In fact, in the new refinement cycle, I intentionally discarded some classes which seemed good in the 2D job.
If blob picking is doing a good job, then re-picking with templates isn't strictly necessary, although it can be a good sanity check; it will largely depend on the dataset. For example, with one dataset I've got running at the minute, blob picking did a pretty good job overall, but some particles were offset slightly. Using template picking with templates derived from the "good" picks of the blob picking run, I've increased the number of well-centred particles by about 20%…