Thanks, @hsnyder!
I played with the parameters a little and it’s completed trajectory and hyperparameter calculation, about a third of the way through particle processing now.
Hi @hsnyder, any solutions? I have encountered a similar issue on CS v4.5.3.
:1: UserWarning: Kernel function slice_volume called with very small array <cryosparc_compute.gpu.gpuarray.GPUArray object at 0x7f0792544910> (size 1). Array will be passed to kernel as a pointer. Consider modifying the kernel to accept individual scalar arguments instead.
:1: UserWarning: Kernel function slice_volume called with very small array <cryosparc_compute.gpu.gpuarray.GPUArray object at 0x7f0792547d60> (size 1). Array will be passed to kernel as a pointer. Consider modifying the kernel to accept individual scalar arguments instead.
:1: UserWarning: Kernel function slice_volume called with very small array <cryosparc_compute.gpu.gpuarray.GPUArray object at 0x7f0792545390> (size 1). Array will be passed to kernel as a pointer. Consider modifying the kernel to accept individual scalar arguments instead.
DIE: [refmotion worker 6 (NVIDIA GeForce RTX 2080 Ti)] fatal error: Specified micrograph has less than two particles.
movie 12221426732231969034: J237/imported/012221426732231969034_FoilHole_20139791_Data_20155404_34_20240422_162116_Fractions.tiff
/net/flash/flash/cryosparc/cryosparc_worker/bin/cryosparcw: line 150: 60756 Illegal instruction (core dumped) python -c "import cryosparc_compute.run as run; run.run()" "$@"
Hi @qchen,
I think I just got lucky with the randomised subset of particles for parameterisation, to be honest. However, here are some details of the successful run…
I went back to defaults for everything except setting Hyperparameter Search Thoroughness to “Extensive”, which I always use: I don’t find it significantly slower than “Fast”, and in the two-dataset test runs I did when RBMC was made public (comparing all three modes) it gave a small improvement in resolution over “Fast” or “Balanced”.
This gave me some parameters which I was initially a little sceptical of (Spatial prior strength: 4.8880e-03, Spatial correlation distance: 3000, Acceleration prior strength: 4.8880e-03) since the spatial prior and acceleration prior were the same, but the FCC fit and dose weighting look OK and particle motion tracks look believable.
I did, however, still have a warning flash up when it was working on the particle step:
WARNING: [refmotion worker 2 (NVIDIA RTX A4000)] error (movie will be skipped): Specified micrograph has less than two particles
movie 1095300623923001321: J12/imported/001095300623923001321_FoilHole_28436071_Data_28423640_1_20240529_003121_EER.eer
So like I said, I think I just got lucky that it didn’t pull that micrograph for hyperparameter optimisation.
…
@hsnyder,
I just checked the mic that caused the first RBMC run to fail… it’s the same micrograph.
DIE: [refmotion worker 3 (NVIDIA RTX A4000)] fatal error: Specified micrograph has less than two particles.
movie 1095300623923001321: J12/imported/001095300623923001321_FoilHole_28436071_Data_28423640_1_20240529_003121_EER.eer
So somehow this mic has snuck through the 2 particle cutoff, or has something else wrong with it.
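For anyone else wanting to check whether a micrograph really did slip under the cutoff, one quick sanity check is to count particles per micrograph in the particle dataset. This is just a sketch using cryosparc-tools; the .cs path is a placeholder and `location/micrograph_uid` is the field I’d expect to hold the parent micrograph UID for extracted particles, so adjust as needed:

```python
# Sketch only: count particles per micrograph in an exported particle dataset
# using cryosparc-tools (pip install cryosparc-tools). The file path is a
# placeholder; "location/micrograph_uid" is assumed to be the field that holds
# the parent micrograph UID for extracted particles.
from collections import Counter
from cryosparc.dataset import Dataset

particles = Dataset.load("path/to/particles.cs")  # placeholder path
counts = Counter(particles["location/micrograph_uid"])

for uid, n in counts.items():
    if n < 2:
        print(f"micrograph {uid}: only {n} particle(s)")
```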
However, the second failed RBMC run did not fail on that micrograph, it just died during hyperparameter optimisation iteration 6 with no particular error other than:
====== Job process terminated abnormally.
And the dmesg output I reported previously.
I’ll ask my collaborators if it’s OK to share this micrograph with you if you’d like it (and a good mic?) for testing.
Hi @rbs_sci, sorry for the delay getting back to you on this. I have a theory about a possible cause. In 4.5 we introduced particle recentering, which is on by default. If you’re using it, one possible culprit is recentering followed by rejection of particles that end up too close to a micrograph edge; those steps happen after the initial screen for micrographs with fewer than two particles. That ordering is definitely a bug, but I don’t know whether it’s your bug. If you are using recentering, can you isolate the problematic movie and see whether running with recentering off fixes the issue?
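To make the ordering concrete, here’s a toy sketch (not CryoSPARC’s actual code; all names, coordinates, and thresholds are invented) of how a micrograph can pass the initial two-particle screen and then lose particles to edge rejection after recentering:

```python
# Toy illustration only, not CryoSPARC code: why recentering/edge rejection
# after the initial screen can leave a micrograph with fewer than two particles.

def screen(particles_by_mic, min_particles=2):
    """Initial screen: drop micrographs with fewer than min_particles."""
    return {m: p for m, p in particles_by_mic.items() if len(p) >= min_particles}

def recenter_and_reject(particles, mic_w, mic_h, box=128):
    """After recentering, reject particles whose box would cross a micrograph edge."""
    half = box // 2
    return [(x, y) for (x, y) in particles
            if half <= x <= mic_w - half and half <= y <= mic_h - half]

mics = {
    "mic_A": [(64, 50), (2000, 2000)],  # first particle sits near an edge
    "mic_B": [(1500, 1500)],            # only one particle: correctly screened out
}

passed = screen(mics)  # mic_A survives the screen with exactly 2 particles
passed = {m: recenter_and_reject(p, 4096, 4096) for m, p in passed.items()}
print(passed)  # mic_A is now down to 1 particle -> "less than two particles" later
```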
@qchen tagging you in case the same workaround works for you as well.
Ah, that makes sense; I’ll check and update as appropriate!
@rbs_sci did turning off particle recentering fix it? I’m getting this same bug (we are on v.4.5.3 now)
It didn’t make any difference.
I’ve got a couple of other test runs for a related issue running on different datasets at the minute, although I will want to finish the full pipeline as a sanity check before I post further (and it’s a big dataset so isn’t going very quickly…)
Hi everyone, I just want to say that I encountered the same issue; I disabled the recentering option and it worked (at least so far). So in some cases this is apparently enough to fix it.
Hm. I got some very odd dose weighting plots.
Definitely not trusting that. Will try something else.
I don’t think the hyperparameter search converged correctly either; I used “Fast” because “Extensive” was taking hours to progress just 1% of a single iteration. The poor dose weighting is probably related to that.
Definitely odd-looking plots; I always run with Fast thoroughness and have never seen anything like that, though. By the way, what kind of resolution improvement are you typically getting by switching from Fast to Extensive? I suppose it depends on the dataset, but I’ve gotten nice 0.2 Å resolution improvements from RBMC on multiple datasets using just Fast.
This dataset has been difficult in general because of what it is, but I wasn’t expecting miracles out of that test - although I wasn’t expecting that result either…
Improvement depends on the dataset. I’ve seen effectively zero improvement (second-decimal-place territory) and as much as 0.5 Å on some data (where whole-frame motion correction was used).
Normally there isn’t much difference between estimation modes; the hyperparameters end up more or less the same. Sometimes the convergence plots are a little jumpy with fast, and much smoother with extensive.
More tests to do.
Hi @rbs_sci I’m just chiming in to say that I am following this thread. I have no idea what happened with the FCC plot, but as you surmised, something is very wrong. It sounds like you’re on it, but if you can post a couple example particle plots, a description of the processing pipeline, and/or the hyperparam estimation plots, I may be able to advise.
The bug I mentioned earlier, where particle recentering is done after we screen for micrographs with fewer than two particles, has been fixed in the patch just released today. Happy processing.
Using v4.5.3, the same error still occurred:
‘fatal error: Specified micrograph has less than two particles.’
The message appeared at approximately the 74% mark during the cross-validation score computation.
@JinsungKim24 can you post your full cryosparc version? It appears on the cryosparc home page next to the logo
I could not see the full version like yours shows.
Ah, I think the issue is that you have the latest release version, but not the latest patch, which is where the fix was introduced. Have a look at the guide page about patches for instructions on how to install the patch.
– Harris
Hi Harris,
Thanks for the note.
As you suggested, I’ve asked our system admin to apply the patch in our cryosparc node.
I’ll try to post feedback after the patch.
Best,
Jinsung